<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Modernization Best Practices and Reusable Assets Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/bg-p/ModernizationBestPracticesBlog</link>
    <description>Modernization Best Practices and Reusable Assets Blog articles</description>
    <pubDate>Fri, 01 May 2026 11:26:30 GMT</pubDate>
    <dc:creator>ModernizationBestPracticesBlog</dc:creator>
    <dc:date>2026-05-01T11:26:30Z</dc:date>
    <item>
      <title>Redefining Database Maintenance after Migrating from Db2 on Mainframe to Azure SQL DB Hyperscale</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/redefining-database-maintenance-after-migrating-from-db2-on/ba-p/4471207</link>
      <description>&lt;H3&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;Migrating from Db2 z/OS to Azure SQL Database Hyperscale is a major step toward modernizing your mainframe relational data infrastructure on Azure's managed relational database offering. But what happens to all those daily and periodic Db2 database maintenance tasks you used to perform on the mainframe?&lt;/P&gt;
&lt;P&gt;To make migration easier, this post provides a recommended mapping between Db2 z/OS maintenance tasks and their Azure SQL DB Hyperscale equivalents, showing:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Tasks you must &lt;EM&gt;still&lt;/EM&gt; &lt;EM&gt;schedule&lt;/EM&gt; or trigger yourself&lt;/LI&gt;
&lt;LI&gt;Tasks that are only needed after &lt;EM&gt;specific&lt;/EM&gt; events (like large data loads)&lt;/LI&gt;
&lt;LI&gt;Tasks that are now fully handled by &lt;EM&gt;Azure SQL Database PaaS&lt;/EM&gt; or are simply &lt;EM&gt;not applicable&lt;/EM&gt; anymore&lt;/LI&gt;
&lt;LI&gt;Best practices that, while not mandatory, are strongly &lt;EM&gt;recommended&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;&lt;STRONG&gt;Mainframe Db2 vs Azure SQL Database Hyperscale Database Maintenance&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;For each maintenance activity, you will find actionable guidance on how to perform it in Azure SQL Database Hyperscale, helping you streamline operations and focus on what matters most.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-align-left lia-border-color-21 lia-border-style-solid" border="0.5" style="width: 99.2593%; height: 7119px; border-width: 0.5px;"&gt;&lt;thead&gt;&lt;tr style="height: 102px;"&gt;&lt;td class="lia-border-color-21" style="height: 102px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Db2 z/OS Task / Concept&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21" style="height: 102px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Purpose on Db2&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21" style="height: 102px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Azure SQL DB Hyperscale Equivalent&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21" style="height: 102px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Responsibility&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21" style="height: 102px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Recommended Frequency (Post‑Migration)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21" style="height: 102px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Remark&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr class="lia-align-left" style="height: 399px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 399px; border-width: 0.5px;"&gt;
&lt;P&gt;Full Image COPY (Backup)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 399px; border-width: 0.5px;"&gt;
&lt;P&gt;Recoverability&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: Full image copies need to be taken for data objects such as table spaces, index spaces, etc.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 399px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure SQL DB Hyperscale utilizes &lt;STRONG&gt;storage snapshot technology&lt;/STRONG&gt; to capture a complete copy of the database's data files. The &lt;STRONG&gt;transaction logs&lt;/STRONG&gt; generated since the last snapshot are kept unchanged ("as is") for the set retention period to ensure point-in-time recovery.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 399px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 399px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/hyperscale-automated-backups-overview?view=azuresql" target="_blank" rel="noopener"&gt;Continuous&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 399px; border-width: 0.5px;"&gt;
&lt;P&gt;In Azure SQL DB Hyperscale, the default short-term retention period for database backups is 7 days, configurable up to 35 days. You can retain backups for up to 10 years by configuring a Long-Term Retention policy through the Azure&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/long-term-backup-retention-configure?view=azuresql&amp;amp;tabs=portal" target="_blank" rel="noopener"&gt;Portal&lt;/A&gt; / Azure &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/long-term-backup-retention-configure?view=azuresql&amp;amp;tabs=azure-cli" target="_blank" rel="noopener"&gt;CLI&lt;/A&gt; / &lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/long-term-backup-retention-configure?view=azuresql&amp;amp;tabs=powershell" target="_blank" rel="noopener"&gt;PowerShell&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Incremental / Delta COPY&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Reduce backup window&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;No incremental backups in Hyperscale; snapshot-based backups make them unnecessary&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;&amp;nbsp;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Log Archive / Dual Logging&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Point-in-time recovery&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Transaction log backups automatic&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Continuous&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;&amp;nbsp;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 2037px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 2037px; border-width: 0.5px;"&gt;
&lt;P&gt;RUNSTATS (Table/Index)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 2037px; border-width: 0.5px;"&gt;
&lt;P&gt;Optimizer statistics&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 2037px; border-width: 0.5px;"&gt;
&lt;P&gt;Auto Create/Auto Update Statistics&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For specific cases schedule STATS update&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 2037px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform&lt;/P&gt;
&lt;P&gt;+ You&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 2037px; border-width: 0.5px;"&gt;
&lt;P&gt;Continuous&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 2037px; border-width: 0.5px;"&gt;
&lt;P&gt;DBCC &lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-show-statistics-transact-sql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;SHOW_STATISTICS&lt;/A&gt; command displays current query optimization statistics for a table or indexed view.&lt;/P&gt;
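&lt;P&gt;For example (a minimal sketch; the table and index names are hypothetical):&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Display current optimizer statistics for one index of a table
DBCC SHOW_STATISTICS ('dbo.Fact', 'IX_Fact_Date');&lt;/CODE&gt;&lt;/PRE&gt;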
&lt;P&gt;When AUTO_UPDATE_STATISTICS or AUTO_UPDATE_STATISTICS_ASYNC is enabled, the Query Optimizer determines when statistics might be out-of-date and updates them when they're needed for a query plan.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql?view=sql-server-ver17#async_stats_update_wait_at_low_priority---on--off-" target="_blank" rel="noopener"&gt;ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY&lt;/A&gt; allows for updates&amp;nbsp;of statistics&amp;nbsp;asynchronously which can wait for the schema modification lock on a low priority queue. This improves concurrency for workloads with frequent query plan (re)compilations.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Asynchronous Auto Update Statistics&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Leave it OFF (default) for most OLTP/reporting hybrids unless you see blocking on STATMAN operations. Consider enabling async if:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;You have latency-sensitive OLTP queries that frequently block on synchronous stat refresh.&lt;/LI&gt;
&lt;LI&gt;Trade-off: First executions after threshold may use stale stats until async update finishes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Statistics update for partitioned table.&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Option INCREMENTAL = { ON | OFF } &lt;/STRONG&gt;allows for creation of statistics per partition of the table.&lt;/P&gt;
&lt;P&gt;If only some partitions have changed recently, use a statistics update command like the one below to refresh statistics for a specific partition (requires incremental statistics).&lt;/P&gt;
&lt;P&gt;UPDATE STATISTICS dbo.Fact WITH RESAMPLE ON PARTITIONS (n)&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Updating Statistics manually:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;UPDATE STATISTICS schema.table WITH FULLSCAN / SAMPLE N ROWS / SAMPLE N PERCENT&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Update Stats after:&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Bulk load (insert millions of rows) into an existing large table&lt;/LI&gt;
&lt;LI&gt;Large data purge (delete/archival)&lt;/LI&gt;
&lt;LI&gt;Partition switch-in / switch-out&lt;/LI&gt;
&lt;LI&gt;Parameter sniffing + big estimated vs. actual row mismatch &lt;EM&gt;repeatedly&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Identifying Which Stats Need Attention&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Rank stats by modification ratio (see the sketch after this list) and pick those with (&lt;STRONG&gt;general guidance; tune according to your specific workload and performance expectations&lt;/STRONG&gt;):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;modification_pct &amp;gt; 15% on large fact tables&lt;/LI&gt;
&lt;LI&gt;or absolute modification_counter very high (e.g., &amp;gt; 100K changes) even if pct smaller (extreme scale tables)&lt;/LI&gt;
&lt;/UL&gt;
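&lt;P&gt;A minimal ranking sketch using sys.dm_db_stats_properties (a DMV that reports rows and modification_counter per statistic):&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Rank user-table statistics by the share of rows modified since the last stats update
SELECT OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
       OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stat_name,
       sp.rows, sp.modification_counter,
       1.0 * sp.modification_counter / NULLIF(sp.rows, 0) AS modification_pct
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE OBJECTPROPERTY(s.object_id, 'IsMsShipped') = 0
ORDER BY modification_pct DESC;&lt;/CODE&gt;&lt;/PRE&gt;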
&lt;P&gt;&lt;STRONG&gt;Simple Maintenance Script (DMV-Driven)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This updates only stats with &amp;gt;15% modification and &amp;gt;100K row changes:&lt;/P&gt;
&lt;table border="1" style="width: 100.052%; height: 473px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 473px;"&gt;&lt;td style="height: 473px;"&gt;
&lt;PRE class="language-sql line-numbers" contenteditable="false" data-lia-code-value="DECLARE @sql nvarchar(max) = N'';
WITH c AS (
  SELECT
    s.object_id, s.stats_id,
    QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) AS full_table_name,
    QUOTENAME(s.name) AS stat_name,
    sp.rows, sp.modification_counter,
    1.0 * sp.modification_counter / NULLIF(sp.rows,0) AS mod_ratio
  FROM sys.stats s
  CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
  WHERE sp.modification_counter IS NOT NULL
    AND OBJECTPROPERTY(s.object_id,'IsMsShipped') = 0
    AND sp.rows &amp;gt;= 100000
    AND sp.modification_counter &amp;gt;= 100000
    AND (1.0 * sp.modification_counter / NULLIF(sp.rows,0)) &amp;gt;= 0.15
)
SELECT @sql = STRING_AGG(
   N'UPDATE STATISTICS ' + full_table_name + N' ' + stat_name + N' WITH SAMPLE 30 PERCENT;',' ')
FROM c;
PRINT @sql;  -- Review first
EXEC sp_executesql @sql;"&gt;&lt;CODE&gt;DECLARE @sql nvarchar(max) = N'';
WITH c AS (
  SELECT
    s.object_id, s.stats_id,
    QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) AS full_table_name,
    QUOTENAME(s.name) AS stat_name,
    sp.rows, sp.modification_counter,
    1.0 * sp.modification_counter / NULLIF(sp.rows,0) AS mod_ratio
  FROM sys.stats s
  CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
  WHERE sp.modification_counter IS NOT NULL
    AND OBJECTPROPERTY(s.object_id,'IsMsShipped') = 0
    AND sp.rows &amp;gt;= 100000
    AND sp.modification_counter &amp;gt;= 100000
    AND (1.0 * sp.modification_counter / NULLIF(sp.rows,0)) &amp;gt;= 0.15
)
SELECT @sql = STRING_AGG(
   CONVERT(nvarchar(max), N'UPDATE STATISTICS ') + full_table_name + N' ' + stat_name + N' WITH SAMPLE 30 PERCENT;', N' ')
FROM c;
PRINT @sql;  -- Review first
EXEC sp_executesql @sql;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;
&lt;P&gt;&lt;STRONG&gt;Decision Matrix&lt;/STRONG&gt;&lt;/P&gt;
&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Question&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Answer&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Should I keep AUTO_CREATE/UPDATE ON?&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Often yes. For big tables (millions of records, 100 GB+), if auto create / auto update / asynchronous stats updates have a performance impact, you can disable them and schedule a daily / weekly process to perform STATS UPDATE.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Do I need a nightly job?&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Often no; maybe a lightweight sp_updatestats for very volatile workloads.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;After heavy ETL?&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes, targeted stats refresh (especially dimensions/facts touched).&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;After index rebuild?&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;No&lt;/STRONG&gt; extra stats update is needed for that index (the rebuild refreshes its statistics); other column statistics are unaffected and may still need an update.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Use FULLSCAN often?&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Only for a handful of skewed, performance-critical tables with proven cardinality issues.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 273px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 273px; border-width: 0.5px;"&gt;
&lt;P&gt;REORG TABLE (Tablespace)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 273px; border-width: 0.5px;"&gt;
&lt;P&gt;Eliminate overflow / reclaim space&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 273px; border-width: 0.5px;"&gt;
&lt;UL&gt;
&lt;LI&gt;Azure SQL does not have a REORG TABLE command&lt;/LI&gt;
&lt;LI&gt;A similar result can be achieved by rebuilding the clustered index (see the sketch below)&lt;/LI&gt;
&lt;/UL&gt;
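&lt;P&gt;A minimal sketch, assuming a hypothetical table and clustered index name:&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Rebuilding the clustered index rewrites the table's pages online,
-- reclaiming space much like a Db2 table-level REORG
ALTER INDEX PK_YourTable ON dbo.YourTable REBUILD WITH (ONLINE = ON);&lt;/CODE&gt;&lt;/PRE&gt;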
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 273px; border-width: 0.5px;"&gt;
&lt;P&gt;Not usually required. Should be done if there is heavy (&amp;gt;30%) index fragmentation&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;: 30% is just guidance.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 273px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 273px; border-width: 0.5px;"&gt;
&lt;P&gt;Db2 has a table-level REORG option; Azure SQL DB Hyperscale has index-level REORGANIZE/REBUILD options.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 551px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 551px; border-width: 0.5px;"&gt;
&lt;P&gt;REORG INDEX / REBUILD INDEX&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 551px; border-width: 0.5px;"&gt;
&lt;P&gt;Defragment indexes&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 551px; border-width: 0.5px;"&gt;
&lt;P&gt;REBUILD / REORGANIZE index&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 551px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Needed / Conditional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 551px; border-width: 0.5px;"&gt;
&lt;P&gt;Weekly or when fragmentation &amp;gt; thresholds&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 551px; border-width: 0.5px;"&gt;
&lt;P&gt;Check current fragmentation: SELECT avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');&lt;/P&gt;
&lt;P&gt;Reorganize (5–30%): ALTER INDEX ix ON tbl REORGANIZE&lt;/P&gt;
&lt;P&gt;Rebuild (&amp;gt;30%): ALTER INDEX ix ON tbl REBUILD WITH (ONLINE=ON , RESUMABLE = ON, MAXDOP=1/4/8);&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The numbers above (5–30% REORGANIZE / &amp;gt;30% REBUILD) are just guidelines. In many cases you may not need an index REBUILD thanks to modern storage technologies, or you may only need to REBUILD after &amp;gt;60–70% fragmentation; likewise, you may not need REORGANIZE at all. STATS updates need to be done diligently for query performance improvement. Perform thorough tests comparing before and after execution plans to draw a conclusion for your workload.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Large Index Rebuild special handling:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;ALTER INDEX ix ON tbl REBUILD WITH (ONLINE=ON , RESUMABLE = ON, MAX_DURATION = 60,MAXDOP=1/4/8);&lt;/P&gt;
&lt;P&gt;-- Pause after current batch&lt;/P&gt;
&lt;P&gt;ALTER INDEX ix ON tbl PAUSE;&lt;/P&gt;
&lt;P&gt;-- Resume with lower DOP&lt;/P&gt;
&lt;P&gt;ALTER INDEX ix ON tbl RESUME WITH (MAXDOP = 2, MAX_DURATION = 60);&lt;/P&gt;
&lt;P&gt;-- Abort and roll back if required&lt;/P&gt;
&lt;P&gt;ALTER INDEX ix ON tbl ABORT;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Monitor progress of Index rebuild operation using below query:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;SELECT r.session_id, r.command, r.percent_complete, r.start_time, r.estimated_completion_time&lt;/P&gt;
&lt;P&gt;FROM sys.dm_exec_requests r&lt;/P&gt;
&lt;P&gt;WHERE r.command LIKE '%INDEX%';&lt;/P&gt;
&lt;P&gt;Perform index rebuilds using, for example, one of the approaches below:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Ola Hallengren maintenance scripts:&amp;nbsp;&lt;A href="https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Adaptive Index Defragmentation: &lt;A href="https://github.com/Microsoft/tigertoolbox/tree/master/AdaptiveIndexDefrag" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;How to maintain Azure SQL indexes and statistics:&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/azuredbsupport/how-to-maintain-azure-sql-indexes-and-statistics/368787" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Automating Azure SQL DB index and statistics maintenance using Azure Automation: &lt;A href="https://techcommunity.microsoft.com/blog/azuredbsupport/automating-azure-sql-db-index-and-statistics-maintenance-using-azure-automation/368974" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 123px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;COPY / QUIESCE utilities&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Consistent copy state&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Not needed (transactionally consistent snapshots)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Not Applicable&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 115px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 115px; border-width: 0.5px;"&gt;
&lt;P&gt;CHECK DATA / CHECK INDEX&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 115px; border-width: 0.5px;"&gt;
&lt;P&gt;Structural consistency&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 115px; border-width: 0.5px;"&gt;
&lt;P&gt;DBCC CHECKDB&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 115px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Best Practice)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 115px; border-width: 0.5px;"&gt;
&lt;P&gt;Monthly or Weekly (off-peak)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 115px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure SQL Database automatically runs internal consistency checks, but you may still run it manually if you want:&lt;/P&gt;
&lt;P&gt;T-SQL Query: DBCC CHECKDB(DatabaseName);&lt;/P&gt;
&lt;P&gt;For large databases: run it on a geo-secondary or named replica, then review the results&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 123px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;RUN QUERY EXPLAIN SNAPSHOT&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Capture access paths&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Query Store captures execution plans&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform + You (Analysis)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Ongoing&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;UL&gt;
&lt;LI&gt;Ensure Query Store is ON with operation mode READ_WRITE (the default; collects and persists query stats).&lt;/LI&gt;
&lt;LI&gt;Query Store details can be analyzed easily using SSMS (see the sketch after this list)&lt;/LI&gt;
&lt;/UL&gt;
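&lt;P&gt;A minimal sketch to verify the state and, if needed, enable read-write mode:&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Check the current Query Store state for this database
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;

-- Enable Query Store in read-write mode if it is not already
ALTER DATABASE CURRENT SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);&lt;/CODE&gt;&lt;/PRE&gt;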
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 123px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;REBIND Packages&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Refresh access plans&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Not needed (no static package binding)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Not Applicable&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Catalog Statistics Maintenance&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Optimizer health&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;System metadata auto maintained&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 123px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Buffer Pool Sizing (BP0/BP32K etc.)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Memory tuning&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Managed by service tier&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Storage Space Preallocation&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Avoid space issues&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Storage allocated dynamically (i.e. auto-grow page servers)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Monitor size via sys.database_files (see the sketch below); optionally purge/archive data&lt;/P&gt;
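&lt;P&gt;A minimal sketch (sys.database_files reports size in 8-KB pages):&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Current database file sizes in MB
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;&lt;/CODE&gt;&lt;/PRE&gt;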
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Archive Log Space Monitoring&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Prevent log fill&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Log scaling&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform (capacity) + You (workload)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Monitor during bursts&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;sys.dm_db_resource_stats; watch log_write_percent (see the sketch below)&lt;/P&gt;
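&lt;P&gt;A minimal monitoring sketch:&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Recent log write utilization (one row per ~15-second interval, newest first)
SELECT end_time, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;&lt;/CODE&gt;&lt;/PRE&gt;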
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 271px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 271px; border-width: 0.5px;"&gt;
&lt;P&gt;Partition Maintenance (ROLL-IN/OUT)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 271px; border-width: 0.5px;"&gt;
&lt;P&gt;Lifecycle management&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 271px; border-width: 0.5px;"&gt;
&lt;P&gt;Table partitioning (if used)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 271px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Conditional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 271px; border-width: 0.5px;"&gt;
&lt;P&gt;During data lifecycle events&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 271px; border-width: 0.5px;"&gt;
&lt;UL&gt;
&lt;LI&gt;Switching Partitions (Fast Load/Unload) : Move data in/out of a partitioned table without large DELETE/INSERT operations:&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;ALTER TABLE [PartitionedTable] SWITCH&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;PARTITION N TO [StagingTable];&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Rebuilding/Defragmenting Indexes per Partition&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;Instead of rebuilding the entire table, you can rebuild indexes on a single partition, which is faster and less blocking:&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;ALTER INDEX [IX_YourIndex] ON [YourPartitionedTable] REBUILD PARTITION = N&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;WITH (ONLINE = ON);&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 125px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 125px; border-width: 0.5px;"&gt;
&lt;P&gt;Compression (ROW/PAGE) MGMT&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 125px; border-width: 0.5px;"&gt;
&lt;P&gt;Space and IO&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 125px; border-width: 0.5px;"&gt;
&lt;P&gt;ROW / PAGE / ColumnStore compression (if enabled)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 125px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Optional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 125px; border-width: 0.5px;"&gt;
&lt;P&gt;At design / periodic review&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 125px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Row compression&lt;/STRONG&gt; stores fixed-length data as variable-length, ideal for CHAR, INT, etc.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Page compression&lt;/STRONG&gt; adds prefix and dictionary compression for repeated values in large tables.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Columnstore compression&lt;/STRONG&gt; stores data column-wise for huge analytical tables with millions of rows.&lt;/P&gt;
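&lt;P&gt;A minimal sketch for estimating and applying PAGE compression (the table name is hypothetical):&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Estimate the savings before committing to a compression setting
EXEC sp_estimate_data_compression_savings 'dbo', 'FactSales', NULL, NULL, 'PAGE';

-- Apply PAGE compression by rebuilding the table
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);&lt;/CODE&gt;&lt;/PRE&gt;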
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 207px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 207px; border-width: 0.5px;"&gt;
&lt;P&gt;Security: RACF/External Auth Integration&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 207px; border-width: 0.5px;"&gt;
&lt;P&gt;Access control&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 207px; border-width: 0.5px;"&gt;
&lt;P&gt;Microsoft Entra ID Auth, Managed Identities, Microsoft Entra Service Principal&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 207px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Needed)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 207px; border-width: 0.5px;"&gt;
&lt;P&gt;At onboarding + periodic review&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 207px; border-width: 0.5px;"&gt;
&lt;P&gt;Example authentication using Microsoft Entra ID (the principal name is illustrative; CREATE USER takes a literal name, not a variable):&lt;/P&gt;
&lt;P&gt;CREATE USER [john@contoso.com] FROM EXTERNAL PROVIDER;&lt;/P&gt;
&lt;P&gt;ALTER ROLE db_datareader ADD MEMBER [john@contoso.com];&lt;/P&gt;
&lt;P&gt;ALTER ROLE db_datawriter ADD MEMBER [john@contoso.com];&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Encryption (Dataset / Log)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Compliance&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;TDE auto-enabled&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;N/A&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Optionally manage customer-managed keys (CMK) via Key Vault&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Auditing / SMF / IFI&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Compliance logging&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure SQL Auditing / Defender&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Needed)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Enable once; review monthly&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Azure SQL Database / Managed Instance&lt;/STRONG&gt; supports auditing at the &lt;STRONG&gt;server&lt;/STRONG&gt; or &lt;STRONG&gt;database&lt;/STRONG&gt; level. Audits can be sent to &lt;STRONG&gt;Log Analytics&lt;/STRONG&gt;, &lt;STRONG&gt;Storage Account&lt;/STRONG&gt;, or &lt;STRONG&gt;Event Hub&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 235px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 235px; border-width: 0.5px;"&gt;
&lt;P&gt;Performance Trace (IFCID)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 235px; border-width: 0.5px;"&gt;
&lt;P&gt;Problem diagnostics&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 235px; border-width: 0.5px;"&gt;
&lt;P&gt;Query Store + Extended Events, &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-sql/database-watcher-overview?view=azuresql&amp;amp;tabs=americas" target="_blank" rel="noopener"&gt;Database Watcher&lt;/A&gt;, DMVs, Azure Portal Monitoring&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 235px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Conditional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 235px; border-width: 0.5px;"&gt;
&lt;P&gt;When diagnosing issues&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 235px; border-width: 0.5px;"&gt;
&lt;P&gt;Example: use SSMS to browse Query Store views; use the Azure Portal to monitor performance; or use DMVs for detailed troubleshooting (see the sketch below).&lt;/P&gt;
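&lt;P&gt;A minimal DMV triage sketch:&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Currently executing requests with their wait state and SQL text
SELECT r.session_id, r.status, r.wait_type, r.total_elapsed_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t;&lt;/CODE&gt;&lt;/PRE&gt;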
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 275px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 275px; border-width: 0.5px;"&gt;
&lt;P&gt;Job Scheduling (JCL)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 275px; border-width: 0.5px;"&gt;
&lt;P&gt;Orchestrate utilities&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 275px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure Automation /&lt;/P&gt;
&lt;P&gt;Elastic Jobs in Azure SQL DB/&lt;/P&gt;
&lt;P&gt;ADF Pipelines /&lt;/P&gt;
&lt;P&gt;Azure Logic Apps / Azure Functions&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 275px; border-width: 0.5px;"&gt;
&lt;P&gt;You (As per need)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 275px; border-width: 0.5px;"&gt;
&lt;P&gt;Per task (daily/weekly)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 275px; border-width: 0.5px;"&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Automation&amp;nbsp;&lt;/STRONG&gt;– Serverless runbooks or scripts run in the cloud using managed identities or credentials.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Elastic Jobs in Azure SQL DB - &lt;/STRONG&gt;You can create and schedule elastic jobs that are executed periodically against one or many Azure SQL databases to run Transact-SQL (T-SQL) queries and perform maintenance tasks (see the sketch after this list).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;ADF / Fabric Pipelines&lt;/STRONG&gt; – Orchestrate queries and maintenance as scheduled or triggered pipeline activities.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Logic Apps / Functions&lt;/STRONG&gt; – Event-driven query execution in response to timers or Azure events.&lt;/LI&gt;
&lt;/UL&gt;
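&lt;P&gt;A minimal Elastic Jobs sketch, assuming the job agent database is set up and a target group named 'MaintGroup' already exists (all names are illustrative):&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Run in the Elastic Jobs agent database
EXEC jobs.sp_add_job
     @job_name = 'NightlyStats',
     @description = 'Refresh statistics across target databases';

-- Add a T-SQL step that runs against every database in the target group
EXEC jobs.sp_add_jobstep
     @job_name = 'NightlyStats',
     @command = N'EXEC sp_updatestats;',
     @target_group_name = 'MaintGroup';&lt;/CODE&gt;&lt;/PRE&gt;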
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 123px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Index Design Advisor&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Workload tuning&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Automatic Tuning (create/drop indexes)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform (optional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;Continuous&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 123px; border-width: 0.5px;"&gt;
&lt;P&gt;ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX=ON, DROP_INDEX=ON)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Plan Regression Detection&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Stability&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Automatic Plan Correction&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform (if enabled)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Continuous&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN=ON)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 151px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 151px; border-width: 0.5px;"&gt;
&lt;P&gt;HA / DR (Dual site, GDPS)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 151px; border-width: 0.5px;"&gt;
&lt;P&gt;Availability&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 151px; border-width: 0.5px;"&gt;
&lt;P&gt;Built-in HA + Geo-Replication / Failover Groups&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 151px; border-width: 0.5px;"&gt;
&lt;P&gt;Platform + You (DR config)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 151px; border-width: 0.5px;"&gt;
&lt;P&gt;Configure once; drills semi-annual&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 151px; border-width: 0.5px;"&gt;
&lt;UL&gt;
&lt;LI&gt;You can create up to 4 HA replicas. An HA replica uses the same page servers as the primary replica, so no data copy is required to add one. More information on high-availability replicas is in the documentation:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale-replicas?view=azuresql#high-availability-replica" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;A named replica, just like an HA replica, uses the same page servers as the primary replica. Named replicas can have their own SLO, and you can create up to 30 of them for read scale-out. Create a Hyperscale named replica by following the documentation:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/hyperscale-named-replica-configure?view=azuresql&amp;amp;tabs=portal#create-a-hyperscale-named-replica" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;For a DR replica, easily configure geo-replication / a failover group for Azure SQL Database as explained here:&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/failover-group-configure-sql-db?view=azuresql&amp;amp;tabs=azure-portal%2Cazure-powershell-manage&amp;amp;pivots=azure-sql-single-db" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 130px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 130px; border-width: 0.5px;"&gt;
&lt;P&gt;Backup Retention / Offsite Vault&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 130px; border-width: 0.5px;"&gt;
&lt;P&gt;Long-term retention&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 130px; border-width: 0.5px;"&gt;
&lt;P&gt;LTR (Long-Term Retention)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 130px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Optional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 130px; border-width: 0.5px;"&gt;
&lt;P&gt;Configure at migration; review annually&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 130px; border-width: 0.5px;"&gt;
&lt;P&gt;Set LTR via&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure Portal or&lt;/LI&gt;
&lt;LI&gt;Azure CLI : az sql db ltr-policy set or&lt;/LI&gt;
&lt;LI&gt;Azure Powershell : Set-AzSqlDatabaseBackupLongTermRetentionPolicy&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 127px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 127px; border-width: 0.5px;"&gt;
&lt;P&gt;Monitoring (OMEGAMON, Tivoli)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 127px; border-width: 0.5px;"&gt;
&lt;P&gt;Health &amp;amp; capacity&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 127px; border-width: 0.5px;"&gt;
&lt;P&gt;Azure Monitor / Log Analytics / Database Watcher&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 127px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Needed)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 127px; border-width: 0.5px;"&gt;
&lt;P&gt;Daily dashboard; alert reaction&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 127px; border-width: 0.5px;"&gt;
&lt;UL&gt;
&lt;LI&gt;Using Azure Monitor create Alerts on CPU %, Memory %, Transaction Log throughput %, Storage %, blocking, failed logins etc.&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database-watcher-overview?view=azuresql&amp;amp;tabs=americas" target="_blank" rel="noopener"&gt;Database Watcher&lt;/A&gt; collects in-depth workload monitoring data to give you a detailed view of database performance, configuration, and health.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 138px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 138px; border-width: 0.5px;"&gt;
&lt;P&gt;Deadlock / Lock Escalation Review&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 138px; border-width: 0.5px;"&gt;
&lt;P&gt;Concurrency tuning&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 138px; border-width: 0.5px;"&gt;
&lt;P&gt;Extended Events + DMVs&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 138px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Conditional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 138px; border-width: 0.5px;"&gt;
&lt;P&gt;Investigate alerts&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 138px; border-width: 0.5px;"&gt;
&lt;UL&gt;
&lt;LI&gt;sys.dm_tran_locks: Current (transient) lock inventory. Useful for spotting patterns leading to deadlocks (e.g., two sessions holding incompatible locks in different order) and for capturing blocking chains before a deadlock forms (see the sketch after this list). It does NOT retain history; deadlocks often resolve in milliseconds, so you rarely see the actual deadlock moment here.&lt;/LI&gt;
&lt;LI&gt;system_health Extended Event: Always running; captures xml_deadlock_report events with a full deadlock graph (processes, resource nodes, lock modes, victim). Gives you post‑mortem detail even if you missed it live.&lt;/LI&gt;
&lt;/UL&gt;
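&lt;P&gt;A minimal sketch for spotting waiting lock requests (blocking-chain candidates):&lt;/P&gt;
&lt;PRE class="language-sql"&gt;&lt;CODE&gt;-- Sessions currently waiting on locks
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_status = 'WAIT';&lt;/CODE&gt;&lt;/PRE&gt;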
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 120px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 120px; border-width: 0.5px;"&gt;
&lt;P&gt;Batch Window Management&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 120px; border-width: 0.5px;"&gt;
&lt;P&gt;Avoid contention&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 120px; border-width: 0.5px;"&gt;
&lt;P&gt;Scale compute / workload smoothing&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 120px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Best Practice)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 120px; border-width: 0.5px;"&gt;
&lt;P&gt;Before large ETL&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 120px; border-width: 0.5px;"&gt;
&lt;P&gt;If required, scale up database resources before batch execution to shorten the batch window:&lt;/P&gt;
&lt;P&gt;ALTER DATABASE ... MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_8');&lt;/P&gt;
&lt;P&gt;Scale database resources back down after the batch completes.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 266px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 266px; border-width: 0.5px;"&gt;
&lt;P&gt;ETL Load&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 266px; border-width: 0.5px;"&gt;
&lt;P&gt;Faster bulk load&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 266px; border-width: 0.5px;"&gt;
&lt;P&gt;Minimal logging is not available; Azure SQL Database always uses the FULL recovery model&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 266px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Conditional)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 266px; border-width: 0.5px;"&gt;
&lt;P&gt;During large loads&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 266px; border-width: 0.5px;"&gt;
&lt;P&gt;On large data load to a given table (&amp;gt; 100 GB)&lt;/P&gt;
&lt;P&gt;Preferably:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Keep clustered columnstore indexes on the table as is. Use a batch size of at least 102,400 rows for better performance.&lt;/LI&gt;
&lt;LI&gt;Drop clustered rowstore and non-clustered indexes before the load if possible, and recreate them after the load.&lt;/LI&gt;
&lt;LI&gt;While recreating indexes on large tables, use these options: ONLINE = ON, RESUMABLE = ON, MAXDOP = 4 / 8&lt;/LI&gt;
&lt;LI&gt;Monitor the progress of index creation using:&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;SELECT object_name(object_id) AS TableName, index_id, percent_complete, state_desc FROM sys.index_resumable_operations&lt;/P&gt;
&lt;P&gt;WHERE state_desc = 'RUNNING';&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Object Ownership / Schema Sync&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Governance&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;VS Code extension&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;You (Needed)&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;On deployment&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Schema compare between Db2 and SQL&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;ADF based solution for Db2 and SQL Schema comparison:&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/database-schema-compare-tool/4118537" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Schema compare between Azure SQL databases&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The schema comparison tooling enables you to compare two database definitions:&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/concepts/schema-comparison?view=sql-server-ver17&amp;amp;pivots=sq1-visual-studio" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Database Comparison&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Governance&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;Database Compare Utility&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;You&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;On-demand&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-left lia-border-color-21 lia-vertical-align-top" style="height: 95px; border-width: 0.5px;"&gt;
&lt;P&gt;To compare data between Db2 and Azure SQL DB Hyperscale, use the Database Compare Utility, available for download here:&amp;nbsp;&lt;A href="https://www.microsoft.com/en-us/download/details.aspx?id=103016&amp;amp;msockid=1996a9980f02689d0a7cbfe00ea069a8" target="_blank" rel="noopener"&gt;link&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;&lt;STRONG&gt;Minimal Practical Post-Migration Maintenance Set (Typical Cadence)&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Daily&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Review alerts (performance, error rates, failed logins)&lt;/LI&gt;
&lt;LI&gt;Optional lightweight index fragmentation check if the workload is highly write-intensive&lt;/LI&gt;
&lt;LI&gt;Monitor Query Store for top regressions&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;After Bulk Loads&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;UPDATE STATISTICS table WITH FULLSCAN (only for large changes; see the sketch after this list)&lt;/LI&gt;
&lt;LI&gt;Consider index rebuild if fragmentation spiked&lt;/LI&gt;
&lt;/UL&gt;
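&lt;P&gt;A minimal post-load sketch, assuming a hypothetical dbo.Orders table:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Refresh statistics after a large data change (table name is a placeholder)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Check fragmentation before deciding on a rebuild
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');&lt;/LI-CODE&gt;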
&lt;P&gt;&lt;STRONG&gt;Weekly&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Targeted&lt;/STRONG&gt; index maintenance (only fragmented ones)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Targeted&lt;/STRONG&gt; statistics maintenance (only for stale stats)&lt;/LI&gt;
&lt;LI&gt;Query Store review: force a known-good plan if automatic plan correction is off (see the sketch after this list)&lt;/LI&gt;
&lt;/UL&gt;
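&lt;P&gt;A minimal sketch of pinning a known-good plan via Query Store; the IDs are placeholders taken from Query Store reports or the sys.query_store_* views:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Force the plan identified as good for a regressed query (IDs are placeholders)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;&lt;/LI-CODE&gt;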
&lt;P&gt;&lt;STRONG&gt;Monthly&lt;/STRONG&gt; (or quarterly for very large DB):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;DBCC CHECKDB (prefer off-peak or on a geo-secondary). If the database is large, perform this operation on business-critical tables or use specific options like PHYSICAL_ONLY (see the sketch after this list).&lt;/LI&gt;
&lt;LI&gt;Security/audit review&lt;/LI&gt;
&lt;LI&gt;Cost/compute tier right-sizing&lt;/LI&gt;
&lt;/UL&gt;
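&lt;P&gt;A minimal sketch of the two integrity-check variants mentioned above (the database name is a placeholder):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Full logical and physical check, run off-peak
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;

-- Lighter-weight alternative for very large databases
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;&lt;/LI-CODE&gt;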
&lt;P&gt;&lt;STRONG&gt;Annual / Semi-annual&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;DR failover test (Failover Group)&lt;/LI&gt;
&lt;LI&gt;LTR retention policy review&lt;/LI&gt;
&lt;LI&gt;Compression strategy review&lt;/LI&gt;
&lt;LI&gt;Automatic Tuning effectiveness assessment&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Migration Mindset Tips&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Don’t lift-and-shift Db2 utility cadence; you might over-maintain and waste resources.&lt;/LI&gt;
&lt;LI&gt;Replace "daily job list" with "monitor + exception-based actions."&lt;/LI&gt;
&lt;LI&gt;For very large Hyperscale DBs, consider named replicas for offloading DBCC CHECKDB or reporting (see the sketch after this list).&lt;/LI&gt;
&lt;/UL&gt;
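&lt;P&gt;Creating a named replica is a single statement; a minimal sketch, assuming placeholder server and database names:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Named replica for offloading read-only work (names are placeholders)
ALTER DATABASE [SalesDb]
ADD SECONDARY ON SERVER [myserver]
WITH (SERVICE_OBJECTIVE = 'HS_Gen5_4', SECONDARY_TYPE = Named, DATABASE_NAME = [SalesDb_ReadReplica]);&lt;/LI-CODE&gt;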
&lt;H6&gt;&lt;STRONG class="lia-align-justify"&gt;&lt;SPAN data-teams="true"&gt;Disclaimer: The guidance, recommendations, and examples provided in this blog are based on our experience with migrating Db2 from Mainframe to Azure SQL Database Hyperscale and may not be universally applicable to all environments. Every customer’s workload, configuration, and performance characteristics are unique. You should thoroughly test and validate these recommendations in your own development or staging environment before applying them in production. We do not assume any responsibility or liability for any issues, downtime, or impacts resulting from the use of the information in this blog.&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H6&gt;
&lt;H3&gt;&lt;STRONG&gt;Feedback and suggestions&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please send an email to&amp;nbsp;&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;Database Platform Engineering Team&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2026 17:05:12 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/redefining-database-maintenance-after-migrating-from-db2-on/ba-p/4471207</guid>
      <dc:creator>Sandip-Khandelwal</dc:creator>
      <dc:date>2026-03-25T17:05:12Z</dc:date>
    </item>
    <item>
      <title>Azure SQL’s Native JSON Type: Optimized for Performance</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/azure-sql-s-native-json-type-optimized-for-performance/ba-p/4486952</link>
      <description>&lt;H2&gt;Introduction&lt;/H2&gt;
&lt;P&gt;JSON has become the de facto format for modern applications, from web APIs to microservices and event-driven systems. Azure SQL has supported JSON for years, but JSON was treated just like text (stored as nvarchar or varchar). That meant every query involving JSON required parsing, which could get expensive as data volume grew.&lt;/P&gt;
&lt;P&gt;The new &lt;STRONG&gt;native JSON binary type&lt;/STRONG&gt; changes that story. Instead of saving JSON as raw text, Azure SQL can store it in a binary representation that’s optimized for fast reads, efficient in-place updates, and compact storage. You get the flexibility of JSON with performance that behaves more like a structured column.&lt;/P&gt;
&lt;P&gt;Learn more about the JSON data type in the documentation –&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Textual data format - &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver17" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver17&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Native binary format - &lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/data-types/json-data-type?view=sql-server-ver17" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/sql/t-sql/data-types/json-data-type?view=sql-server-ver17&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A few useful things to know upfront (a minimal declaration sketch follows the list):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;JSON data is stored in a &lt;STRONG&gt;native binary format&lt;/STRONG&gt;, not as plain text&lt;/LI&gt;
&lt;LI&gt;Reads are faster because the JSON is &lt;STRONG&gt;already parsed&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Improved write efficiency, since queries can &lt;STRONG&gt;update individual values &lt;/STRONG&gt;without accessing the entire document.&lt;/LI&gt;
&lt;LI&gt;Storage is &lt;STRONG&gt;more compact, &lt;/STRONG&gt;optimized for compression&lt;/LI&gt;
&lt;LI&gt;Existing JSON functions continue to work so &lt;STRONG&gt;app changes are minimal&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Internally, JSON is stored using &lt;STRONG&gt;UTF-8 encoding&lt;/STRONG&gt; (Latin1_General_100_BIN2_UTF8)&lt;/LI&gt;
&lt;/UL&gt;
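&lt;P&gt;For illustration, a minimal declaration sketch with a hypothetical table; the JSON type accepts standard JSON text on insert, and existing functions such as JSON_VALUE work against it:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE TABLE dbo.Events (
    EventId INT IDENTITY(1,1) PRIMARY KEY,
    Payload JSON                             -- native binary JSON type
);

INSERT INTO dbo.Events (Payload)
VALUES (N'{"type": "click", "x": 10}');     -- implicit conversion from text

SELECT JSON_VALUE(Payload, '$.type') AS EventType FROM dbo.Events;&lt;/LI-CODE&gt;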
&lt;P&gt;This blog shares the performance gains observed after migrating JSON from nvarchar/varchar to the native JSON binary type. Results will vary across JSON structures and workloads, so consider this a guide rather than a universal benchmark.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt; The purpose of this blog is to introduce the native JSON binary data type. We are not covering JSON indexes or JSON functions at this time in order to maintain clarity and focus.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;Test Environment Details:&lt;/H2&gt;
&lt;P&gt;To measure the performance impact of migrating JSON from nvarchar/varchar to the native JSON binary type, a test environment was set up with six tables on Azure SQL Database (General Purpose, Gen5, 2 vCores).&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note: &lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;The dataset used in the testing was generated using AI for demonstration purposes.&lt;BR /&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;JSON data stored as nvarchar/varchar data types:&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-center"&gt;&lt;table class="lia-border-color-21 lia-border-style-solid" border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Table Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Number of Records&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Size (GB)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-left lia-border-color-21"&gt;
&lt;P&gt;InventoryTrackingJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;400,003&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;4.21&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-left lia-border-color-21"&gt;
&lt;P&gt;OrderDetailsJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;554,153&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;1.29&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-left lia-border-color-21"&gt;
&lt;P&gt;CustomerProfileJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P class="lia-align-center"&gt;55,001&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;0.16&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-left lia-border-color-21"&gt;
&lt;P&gt;ProductCatalogJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;100,001&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;0.10&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-left lia-border-color-21"&gt;
&lt;P&gt;SalesAnalyticsJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;10,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;0.08&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-align-left lia-border-color-21"&gt;
&lt;P&gt;EmployeeRecordsJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;5,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;0.02&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Total database size:&lt;/STRONG&gt; &lt;STRONG&gt;5.94 GB&lt;/STRONG&gt; (59.43% used), based on a maximum configured size of 10 GB, with JSON stored as nvarchar/varchar.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example schema (OrderDetailsJSON)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;One of the core tables used in testing:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE TABLE JSONTest.OrderDetailsJSON (
    OrderDetailID INT IDENTITY(1,1) PRIMARY KEY,
    OrderMetadata NVARCHAR(MAX),           -- JSON: order info, source, salesperson
    ShippingDetails NVARCHAR(MAX),         -- JSON: carrier, priority, addresses
    CustomizationOptions NVARCHAR(MAX),    -- JSON: customizations and add-ons
    CreatedDate DATETIME2 DEFAULT SYSDATETIME(),
    ModifiedDate DATETIME2 DEFAULT SYSDATETIME()
);
&lt;/LI-CODE&gt;
&lt;P&gt;Each JSON column simulated realistic business structure - for example:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;OrderMetadata&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "orderSource": "Mobile App",
  "salesPerson": "Jane Smith",
  "orderDate": "2025-11-14T10:30:00Z",
  "customerType": "Premium"
}
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;ShippingDetails&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "carrier": "FedEx",
  "priority": "standard",
  "address": { "city": "Anytown", "state": "CA" }
}
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;CustomizationOptions&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "color": "Green",
  "size": "Medium",
  "giftWrap": true
}
&lt;/LI-CODE&gt;
&lt;H2&gt;Performance before migration:&lt;/H2&gt;
&lt;P&gt;To measure performance differences accurately, a continuous &lt;STRONG&gt;12-minute test session&lt;/STRONG&gt; was run. The load sizes referenced in the results (500, 1K, 2.5K, 5K, 10K, and 25K) represent the number of records read, and each record goes through the following operations (a representative sketch follows the list):&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Multiple JSON_VALUE extractions&lt;/LI&gt;
&lt;LI&gt;JSON validation using ISJSON&lt;/LI&gt;
&lt;LI&gt;Safe type conversions using TRY_CONVERT&lt;/LI&gt;
&lt;LI&gt;Aggregation logic&lt;/LI&gt;
&lt;/UL&gt;
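&lt;P&gt;For illustration, a minimal sketch of the per-record pattern, using the example schema above (the actual test harness is not shown here):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Representative read pattern: validate, extract, and safely convert JSON values
SELECT TOP (500)
       JSON_VALUE(OrderMetadata, '$.orderSource') AS OrderSource,
       JSON_VALUE(OrderMetadata, '$.salesPerson') AS SalesPerson,
       TRY_CONVERT(datetime2, JSON_VALUE(OrderMetadata, '$.orderDate')) AS OrderDate
FROM JSONTest.OrderDetailsJSON
WHERE ISJSON(OrderMetadata) = 1;&lt;/LI-CODE&gt;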
&lt;P&gt;During the 12-minute continuous workload, JSON stored as nvarchar/varchar showed consistent resource pressure, primarily on CPU and storage IO. The monitoring tools reported:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Disclaimer:&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt; These results are for illustration purposes only. Actual performance will vary depending on system hardware (CPU cores, memory, disk I/O), database configurations, network latency, and table structures. We recommend validating performance in dev/test to establish a baseline.&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;Data migration to native JSON binary data type&lt;/H2&gt;
&lt;P&gt;For testing, native JSON columns were added to the existing tables, and JSON data stored in nvarchar/varchar columns was migrated to the new native JSON binary columns using the CAST function.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Migration Script (example used for all tables)&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Add native JSON columns&lt;BR /&gt;&lt;LI-CODE lang="sql"&gt;ALTER TABLE JSONTest.OrderDetailsJSON ADD OrderMetadata_native JSON, ShippingDetails_native JSON, CustomizationOptions_native JSON;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Migrate existing NVARCHAR/VARCHAR JSON into native JSON&lt;BR /&gt;&lt;LI-CODE lang="sql"&gt;UPDATE JSONTest.OrderDetailsJSON SET OrderMetadata_native = CAST(OrderMetadata AS JSON), ShippingDetails_native = CAST(ShippingDetails AS JSON), CustomizationOptions_native = CAST(CustomizationOptions AS JSON);&lt;/LI-CODE&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; After validating that the migrated data was consistent, the original nvarchar/varchar JSON columns were dropped. A &lt;STRONG&gt;rebuild index operation&lt;/STRONG&gt; was then performed to remove fragmentation and reclaim space, ensuring that the subsequent storage comparison reflected the true storage footprint of the native JSON binary type.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
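&lt;P&gt;A minimal validation-and-cleanup sketch for one table, assuming the column names above (adjust per table):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Spot-check that conversion did not drop content
SELECT COUNT(*) AS MismatchedRows
FROM JSONTest.OrderDetailsJSON
WHERE (OrderMetadata IS NULL) &amp;lt;&amp;gt; (OrderMetadata_native IS NULL);

-- Drop the old text columns, then rebuild indexes to reclaim space
ALTER TABLE JSONTest.OrderDetailsJSON
DROP COLUMN OrderMetadata, ShippingDetails, CustomizationOptions;

ALTER INDEX ALL ON JSONTest.OrderDetailsJSON REBUILD;&lt;/LI-CODE&gt;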
&lt;P&gt;The same pattern was repeated for all tables.&lt;/P&gt;
&lt;P&gt;Storage footprint after migration:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-color-21 lia-border-style-solid" border="1" style="width: 642px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Table Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Number of Records&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Size_Before (GB)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;&lt;STRONG&gt;Size_After (GB)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;InventoryTrackingJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;400,003&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;4.21&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.60&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;OrderDetailsJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;554,153&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;1.29&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.27&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;ProductCatalogJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;100,001&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.16&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.11&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;SalesAnalyticsJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;10,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.10&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.04&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;CustomerProfileJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;55,001&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.08&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.01&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="lia-border-color-21"&gt;
&lt;P&gt;EmployeeRecordsJSON&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;5,000&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.02&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21"&gt;
&lt;P&gt;0.00&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;STRONG&gt;Total database size: 1.06 GB&lt;/STRONG&gt; (10.64% used), based on a maximum configured size of 10 GB, with JSON in the native binary data type.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;After migrating all JSON columns from nvarchar/varchar to the native JSON type, the total database size dropped from 5.94 GB to 1.06 GB - an &lt;STRONG&gt;~82% reduction in storage&lt;/STRONG&gt;.&lt;/P&gt;
&lt;H2&gt;Performance after migration&lt;/H2&gt;
&lt;P&gt;After moving all JSON columns from nvarchar/varchar to native JSON, the exact same 12-minute workload was rerun - same query patterns, same workload distribution. Only the JSON storage format was different. Here are the results:&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;STRONG&gt;Key Metrics (Before vs. After)&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;The migration didn’t just shrink storage - it made JSON workloads easier for the engine to process. With the native JSON type, the same workload completed with &lt;STRONG&gt;~27% lower CPU&lt;/STRONG&gt; and &lt;STRONG&gt;~80% lower Data IO.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;Query duration, Throughput, &amp;amp; Logical Reads&lt;/H2&gt;
&lt;H3&gt;Query duration&lt;/H3&gt;
&lt;P&gt;A comparison was conducted using the same workload, dataset, indexes, and query patterns - with the &lt;STRONG&gt;only variable being the JSON storage format&lt;/STRONG&gt;. The outcome showed a clear trend in query duration.&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;Across every single load level,&amp;nbsp;&lt;STRONG&gt;native JSON cut query duration by 2.5x - 4x&lt;/STRONG&gt;. Even more interesting: as the workload scaled 50x, &lt;STRONG&gt;native JSON latency stayed almost flat&lt;/STRONG&gt;, while text JSON steadily slowed down.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt; The duration values shown represent the average across multiple runs within the performance test described earlier.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H3&gt;Throughput improvement&lt;/H3&gt;
&lt;P&gt;The benefits also translated directly into throughput. Overall, native JSON enabled substantially &lt;STRONG&gt;more records processed per second (rps)&lt;/STRONG&gt; - roughly 3x to 4x in these tests. For example:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-border-color-21 lia-border-style-solid" border="1" style="width: 49.537%; height: 154.667px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Load&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Throughput Before (rps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Throughput After (rps)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td class="lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;Small load&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;~60&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;~240&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td class="lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;High load&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;~690&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;~2300&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.6667px;"&gt;&lt;td class="lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;Peak load&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;~1360&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center lia-border-color-21" style="height: 38.6667px;"&gt;
&lt;P&gt;~4700&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;Logical reads improvement&lt;/H3&gt;
&lt;P&gt;Native JSON significantly reduced I/O work as well:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Logical reads per run dropped from &lt;STRONG&gt;~168,507 → ~33,880&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;An &lt;STRONG&gt;~80% reduction in pages read&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Lower logical reads directly correlate with improved scalability - fewer pages scanned means less work required to serve each request, especially under increasing load.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;Sample results:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;JSON (nvarchar/varchar) vs. JSON (native binary)&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;H3&gt;Cache management&lt;/H3&gt;
&lt;P&gt;To ensure the performance improvement was not simply a result of native JSON fitting more easily in memory, the test cleared the cache at regular intervals using &lt;EM&gt;DBCC DROPCLEANBUFFERS&lt;/EM&gt;, forcing repeated cold-start execution. As expected, query duration increased immediately after each cache clear for both text JSON and native JSON, yet the relative benefit remained consistent: native JSON continued to show a 2.5x–4x reduction in duration across all load levels. This confirms that the gains are not due to buffer pool residency alone, but from reduced JSON parsing work during execution.&lt;BR /&gt;&lt;BR /&gt;For example, in the chart below for the small load, runs &lt;STRONG&gt;3&lt;/STRONG&gt; and &lt;STRONG&gt;6&lt;/STRONG&gt; were executed right after clearing cache. Although both formats show higher duration, the relative performance advantage remains unchanged.&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;Conclusion&lt;/H2&gt;
&lt;P&gt;Native JSON storage in Azure SQL isn’t just a new way to store semi-structured data - it delivers &lt;STRONG&gt;tangible performance and efficiency gains&lt;/STRONG&gt;. In our case, migrating JSON from NVARCHAR to the new binary JSON type resulted in:&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;If your workload involves frequent reading or updating of JSON documents - especially large or deeply nested ones - the native JSON type is worth evaluating. Your gains may vary based on JSON structure, indexing strategy, and workload patterns, but the benefits of&amp;nbsp;&lt;STRONG&gt;eliminating repeated text parsing and reducing storage cost&lt;/STRONG&gt; are difficult to ignore.&lt;/P&gt;
&lt;P&gt;As SQL workloads continue to blend structured and semi-structured data, native JSON brings Azure SQL more in line with modern application design while preserving the maturity and stability of the relational engine.&lt;/P&gt;
&lt;H5&gt;Feedback and Suggestions&lt;/H5&gt;
&lt;P&gt;If you have feedback or suggestions, please contact the Databases SQL Customer Success Engineering (Ninja) Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note: For additional information about migrating various source databases to Azure, see the&amp;nbsp;&lt;A href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide.&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 09 Feb 2026 09:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/azure-sql-s-native-json-type-optimized-for-performance/ba-p/4486952</guid>
      <dc:creator>ShrustiKolsur</dc:creator>
      <dc:date>2026-02-09T09:00:00Z</dc:date>
    </item>
    <item>
      <title>Handling Sybase BIGTIME Data Type During Migration to Azure SQL</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/handling-sybase-bigtime-data-type-during-migration-to-azure-sql/ba-p/4480157</link>
      <description>&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc297286694"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc11766963"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866210"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;Introduction&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc11766964"&gt;&lt;/A&gt;Migrating databases from Sybase to SQL Server or Azure SQL is a common modernization scenario. However, not all Sybase data types have direct equivalents in SQL Server, and one such challenge is the BIGTIME data type. The BIGTIME data type in Sybase stores time-of-day values with microsecond precision (format: hh:mm:ss.SSSSSS). It is commonly used in applications that require high-precision time tracking.&lt;/P&gt;
&lt;P&gt;To unblock and accelerate this conversion, we have developed a script&amp;nbsp;&lt;STRONG&gt;(sybase_bigtime_migration.sh)&lt;/STRONG&gt; that automates schema migration from Sybase ASE to SQL Server specifically where tables contain the BIGTIME data type. It systematically discovers affected tables, then generates ALTER statements to convert BIGTIME columns to SQL Server’s TIME(6) with a controlled, auditable flow.&lt;/P&gt;
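&lt;P&gt;For illustration, each generated per-table script has this shape (table and column names below are hypothetical; one script is generated per affected table):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Shape of a generated alter_&amp;lt;SYB_DB&amp;gt;_&amp;lt;TABLE&amp;gt;.sql script (names are placeholders)
ALTER TABLE dbo.TradeEvents ALTER COLUMN EventTime TIME(6) NULL;&lt;/LI-CODE&gt;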
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866211"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;General Guidelines&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;The purpose of this blog is to provide an end‑to‑end flow for discovering BIGTIME columns in Sybase and converting them to SQL Server’s TIME(6). Run the scripts on a host that has Sybase ASE installed and running, with SQL Server tools ("sqlcmd") installed and available on the PATH. Provide accurate connection details; passwords are read securely without echoing to the terminal.&lt;/P&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866212"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;Functionality of the scripts&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;The script &lt;STRONG&gt;(sybase_bigtime_migration.sh)&lt;/STRONG&gt; validates and sources the Sybase environment, then locates "isql" to query system catalogs for tables with BIGTIME columns. It writes a clean, header-free list to "tablist.txt", ensuring a usable input for the next steps. For each table, it generates an individual ALTER script converting BIGTIME → TIME (6) so you can review or apply changes per object. When SQL migration is enabled, it detects "sqlcmd", tests connectivity, executes each ALTER script, and saves rich logs for verification and troubleshooting.&lt;/P&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866213"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;Prerequisites&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;The script &lt;STRONG&gt;(sybase_bigtime_migration.sh)&lt;/STRONG&gt; must be executed from the same host where Sybase ASE is installed and running, to ensure reliable access to system catalogs and local client utilities. The schema conversion of all tables must be performed using &lt;A href="https://www.microsoft.com/en-us/download/details.aspx?id=54256" target="_blank" rel="noopener"&gt;SQL Server Migration Assistant (SSMA)&lt;/A&gt; prior to running this script, ensuring that all non-BIGTIME columns are properly migrated and aligned with Azure SQL standards.&lt;/P&gt;
&lt;P&gt;Ensure access to Sybase ASE instance with permissions to query metadata in "sysobjects", "syscolumns", and "systypes". If you plan to apply changes, you must have SQL Server client tools installed and permissions to run "ALTER TABLE" on the target database objects. Network connectivity from the host to both Sybase and SQL Server is required.&lt;/P&gt;
&lt;P&gt;If you want to run the script only for a specific set of BIGTIME tables in Sybase, create a file named tablist.txt in the same directory as the script. This file should contain the list of BIGTIME tables (one table name per line) that the script should process.&lt;/P&gt;
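&lt;P&gt;For example, a tablist.txt limiting the run to two tables might contain (hypothetical names):&lt;/P&gt;
&lt;PRE&gt;TradeEvents
OrderAudit&lt;/PRE&gt;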
&lt;PRE&gt;&lt;U&gt;Sybase datatype:&lt;/U&gt;&lt;/PRE&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;PRE&gt;&lt;U&gt;Schema conversion using SSMA:&lt;/U&gt;&lt;/PRE&gt;
&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;
&lt;PRE&gt;&lt;U&gt;Azure SQL datatype after schema conversion using SSMA:&lt;/U&gt;&lt;/PRE&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866214"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;How to Use&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;Run the script &lt;STRONG&gt;(sybase_bigtime_migration.sh)&lt;/STRONG&gt; and provide the Sybase server, username, password, and database when prompted. Choose whether to perform migration against SQL Server; if yes, supply the SQL Server host, credentials, and database. After the detection step, confirm whether to proceed with all tables that have BIGTIME in the specified Sybase database. Selecting “yes” triggers script generation and optional application; selecting “no” exits after guidance, letting you tailor "tablist.txt" before rerunning.&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;&lt;img /&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc2708575"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc2790239"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866215"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;Output Files&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;"tablist_Final.txt"&lt;/STRONG&gt; output file contains the clean list of tables with BIGTIME columns and is regenerated on each run to reflect the current database. Each run writes an overall validation report, including per-table status and counts to &lt;STRONG&gt;"validation_summary_timestamp.log"&lt;/STRONG&gt; where valid=tables with BIGTIME columns, missing=tables not found in DB, no_bigtime=tables without BIGTIME columns, unverified=validations errors, total_tablist_count=total tables checked from "tablist.txt". Per table ALTER scripts are created as&amp;nbsp;&lt;STRONG&gt;"alter_&amp;lt;SYB_DB&amp;gt;_&amp;lt;TABLE&amp;gt;.sql",&lt;/STRONG&gt; enabling fine-grained review and targeted application. When executing against SQL Server, output logs are saved under &lt;STRONG&gt;"sql_outputs/alter_&amp;lt;SYB_DB&amp;gt;_&amp;lt;TABLE&amp;gt;.out"&lt;/STRONG&gt;. These logs assist with validating results, identifying failures.&lt;A class="lia-anchor" target="_blank" name="_Toc11766971"&gt;&lt;/A&gt;&lt;/P&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;PRE&gt;&lt;U&gt;Final Azure SQL datatype output:&lt;/U&gt;&lt;/PRE&gt;
&lt;img /&gt;&lt;img /&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866216"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;Data Migration Strategy&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;After the schema conversion and BIGTIME data type handling are completed, the data migration should be performed as a separate activity. The migration can be executed using Azure Data Factory (ADF) or a custom BCP-based export and import process, based on factors such as data volume, performance requirements, and operational considerations. Separating schema preparation from data movement provides greater flexibility, improved control, and reduced risk during the data migration phase.&lt;/P&gt;
&lt;H4&gt;&lt;SPAN class="lia-text-color-15"&gt;Steps to Download the script&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-21"&gt;Please send an email to the alias &lt;SPAN class="lia-text-color-20"&gt;&lt;A class="lia-external-url" href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;&lt;SPAN class="lia-text-color-15"&gt;datasqlninja@microsoft.com&lt;/SPAN&gt;&lt;/A&gt;,&lt;/SPAN&gt; and we will share the download link along with instructions.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc216866217"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-15"&gt;Feedback and suggestions&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;
&lt;P&gt;Note: For additional information about migrating various source databases to Azure, see the &lt;A class="lia-external-url" href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide.&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 12 Jan 2026 09:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/handling-sybase-bigtime-data-type-during-migration-to-azure-sql/ba-p/4480157</guid>
      <dc:creator>saikat_dey</dc:creator>
      <dc:date>2026-01-12T09:00:00Z</dc:date>
    </item>
    <item>
      <title>Implementing Oracle Autonomous Transactions in Azure SQL for Seamless Logging</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/implementing-oracle-autonomous-transactions-in-azure-sql-for/ba-p/4448130</link>
      <description>&lt;P&gt;In Oracle PL/SQL, the directive PRAGMA AUTONOMOUS_TRANSACTION allows a block of code such as a procedure, function, or trigger to run in its own independent transaction. This means:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;It can &lt;STRONG&gt;commit or rollback&lt;/STRONG&gt; changes without affecting the main transaction.&lt;/LI&gt;
&lt;LI&gt;It is &lt;STRONG&gt;fully isolated&lt;/STRONG&gt; from the calling transactions, no shared locks or dependencies.&lt;/LI&gt;
&lt;LI&gt;It is ideal for &lt;STRONG&gt;logging, auditing, and error tracking&lt;/STRONG&gt;, where you want to preserve data even if the main transaction fails.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Example in Oracle PL/SQL&lt;/STRONG&gt;:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE OR REPLACE PROCEDURE log_error(p_message VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO error_log (log_time, message) VALUES (SYSDATE, p_message);
  COMMIT;
END;&lt;/LI-CODE&gt;
&lt;P&gt;This ensures that the error log is saved even if the main transaction rolls back.&lt;/P&gt;
&lt;H3&gt;How to Implement Autonomous Transactions in Azure SQL&lt;/H3&gt;
&lt;P&gt;If you need logs that persist through rollbacks in Azure SQL, you can recreate the behavior of Oracle’s PRAGMA AUTONOMOUS_TRANSACTION using external logging via an Azure Function.&lt;/P&gt;
&lt;UL data-line="12"&gt;
&lt;LI data-line="12"&gt;External logging via &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-invoke-external-rest-endpoint-transact-sql?view=sql-server-ver17&amp;amp;tabs=request-headers" target="_blank" rel="noopener"&gt;sp_invoke_external_rest_endpoint&lt;/A&gt; to an&amp;nbsp;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview" target="_blank" rel="noopener"&gt;Azure Function&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="15"&gt;This blog post shows concrete code for Oracle PL/SQL vs. T‑SQL, and provides a secure, sample Azure Function example.&lt;/P&gt;
&lt;UL data-line="18"&gt;
&lt;LI data-line="18"&gt;There’s no direct PRAGMA AUTONOMOUS_TRANSACTION in Azure SQL as on the date when this blog is written.&lt;/LI&gt;
&lt;LI data-line="19"&gt;To persist logs even if the current transaction rolls back, consider calling an external logger (e.g., Azure Function) using sp_invoke_external_rest_endpoint.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 data-line="22"&gt;Oracle vs. Azure SQL at a glance&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-align-left" border="1" style="width: 96.2963%; border-width: 1px; border-spacing: 5px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th style="padding: 5px;"&gt;Feature&lt;/th&gt;&lt;th style="padding: 5px;"&gt;Oracle (PRAGMA AUTONOMOUS_TRANSACTION)&lt;/th&gt;&lt;th style="padding: 5px;"&gt;Azure SQL (workarounds)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="padding: 5px;"&gt;Native support&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Yes&lt;/td&gt;&lt;td style="padding: 5px;"&gt;No&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="padding: 5px;"&gt;Isolation from caller txn&lt;/td&gt;&lt;td style="padding: 5px;"&gt;True&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Partial (via external logging)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="padding: 5px;"&gt;Rollback independence&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Yes&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Yes (external logging only)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="padding: 5px;"&gt;Complexity&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Low&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Moderate (extra components)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="padding: 5px;"&gt;Logging persistence&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Immediate&lt;/td&gt;&lt;td style="padding: 5px;"&gt;External or deferred&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td style="padding: 5px;"&gt;Common use cases&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Audit, error logging, notifications&lt;/td&gt;&lt;td style="padding: 5px;"&gt;Audit, error logging, compliance&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3 data-line="33"&gt;External logging via Azure Function and sp_invoke_external_rest_endpoint&lt;/H3&gt;
&lt;P data-line="35"&gt;In SQL use this when the application might wrap your stored procedure in its own transaction. If the app rolls back after your proc finishes, any in-database transactions (including your logs) also roll back. An external logger runs outside the database transaction and remains durable.&lt;/P&gt;
&lt;H3 data-line="37"&gt;Prerequisites and notes&lt;/H3&gt;
&lt;UL data-line="38"&gt;
&lt;LI data-line="38"&gt;sp_invoke_external_rest_endpoint&amp;nbsp;is available in Azure SQL Database and Azure SQL Managed Instance. Ensure outbound network access to your Function endpoint.&lt;/LI&gt;
&lt;LI data-line="39"&gt;Prefer Azure AD (Managed Identity) auth for your Function endpoint. If you must use function keys, pass them via header (x-functions-key), not query strings.&lt;/LI&gt;
&lt;LI data-line="40"&gt;Set reasonable timeouts and add retry/backoff for transient failures.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 data-line="177"&gt;Minimal SQL prerequisites for Function App Managed Identity&lt;/H3&gt;
&lt;P&gt;Grant your Function App’s managed identity access to the database and tables:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- In your Azure SQL Database (connected as an Entra admin) 
CREATE USER [&amp;lt;FUNCTION_APP_MI_NAME&amp;gt;] FROM EXTERNAL PROVIDER; 
ALTER ROLE db_datawriter ADD MEMBER [&amp;lt;FUNCTION_APP_MI_NAME&amp;gt;]; 
-- Or grant explicit permissions 
GRANT INSERT ON dbo.ErrorLogs TO [&amp;lt;FUNCTION_APP_MI_NAME&amp;gt;];&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3 data-line="42"&gt;T‑SQL: emit a log via REST&lt;/H3&gt;
&lt;LI-CODE lang="sql"&gt;CREATE OR ALTER PROCEDURE dbo.add_numbers
    @x INT,
    @y INT,
    @sum INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRY
        -- Log process start
        INSERT INTO dbo.ProcessLogs (LogMessage, ProcedureName)
        VALUES (CONCAT('Started adding numbers: ', @x, ' + ', @y), 'add_numbers');

        -- Add numbers
        SET @sum = @x + @y;

        -- Log process success
        INSERT INTO dbo.ProcessLogs (LogMessage, ProcedureName)
        VALUES (CONCAT('Successfully calculated sum = ', @sum), 'add_numbers');
    END TRY

    BEGIN CATCH
        -- Log error details locally
        INSERT INTO dbo.ErrorLogs (ErrorMessage, ErrorProcedure, ErrorLine)
        VALUES (ERROR_MESSAGE(), ERROR_PROCEDURE(), ERROR_LINE());

        -- Prepare JSON payload
        DECLARE @jsonPayload NVARCHAR(MAX) = N'{
            "LogMessage": "Some error occurred",
            "ProcedureName": "add_numbers",
            "ErrorMessage": "' + ERROR_MESSAGE() + N'"
        }';

        -- Call external REST endpoint (Azure Function)
        EXEC sp_invoke_external_rest_endpoint
            @method = 'POST',
            @url = 'https://yourfunctionapp.azurewebsites.net/api/WriteLog',
            @headers = '{"Content-Type": "application/json", "x-functions-key": "&amp;lt;FUNCTION_KEY&amp;gt;"}',
            @payload = @jsonPayload;

        -- Re-throw original error to caller
        THROW;
    END CATCH;
END;
GO
&lt;/LI-CODE&gt;
&lt;H3 data-line="74"&gt;Azure Function: HTTP-triggered logger (C#)&lt;/H3&gt;
&lt;UL data-line="75"&gt;
&lt;LI data-line="75"&gt;Auth: use AuthorizationLevel.Function (or Azure AD), avoid Anonymous in production.&lt;/LI&gt;
&lt;LI data-line="76"&gt;DB access: use Managed Identity User to request an access token for Azure SQL. Once the request token is generated, next step is to insert logs/messages into a permanent log/error table.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Below is end-to-end Function App code in C# that demonstrates the above.&lt;/P&gt;
&lt;LI-CODE lang="csharp"&gt;using Azure.Core;
using Azure.Identity;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Logging;
using System;
using System.IO;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

namespace FunctionApp_MI_HTTPRequest
{
    public static class ManagedInstance_Http_Request
    {
        private static readonly string SqlConnectionString = Environment.GetEnvironmentVariable("SqlConnectionString");

        public class ProcedureLog
        {
            public string ProcedureName { get; set; }
            public string LogMessage { get; set; }
        }

        [FunctionName("WriteLog")]
        public static async Task&amp;lt;IActionResult&amp;gt; Run(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("WriteLog invoked.");

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            log.LogInformation($"Received Log Request: {requestBody}");

            ProcedureLog logData;

            try
            {
                logData = JsonSerializer.Deserialize&amp;lt;ProcedureLog&amp;gt;(requestBody, new JsonSerializerOptions
                {
                    PropertyNameCaseInsensitive = true
                });

                if (logData == null || string.IsNullOrWhiteSpace(logData.ProcedureName))
                {
                    return new BadRequestObjectResult(new
                    {
                        status = "Error",
                        message = "Invalid log data."
                    });
                }
            }
            catch (JsonException)
            {
                return new BadRequestObjectResult(new
                {
                    status = "Error",
                    message = "Invalid JSON format."
                });
            }

            try
            {
                // Acquire access token for Azure SQL using Managed Identity
                var tokenCredential = new DefaultAzureCredential();
                var accessToken = await tokenCredential.GetTokenAsync(
                    new TokenRequestContext(new[] { "https://database.windows.net/.default" }),
                    CancellationToken.None);

                using var conn = new SqlConnection(SqlConnectionString)
                {
                    AccessToken = accessToken.Token
                };

                await conn.OpenAsync();
                using var transaction = conn.BeginTransaction();

                var query = @"
                    INSERT INTO dbo.ErrorLogs (ErrorMessage, ErrorProcedure)
                    VALUES (@LogMessage, @ProcedureName);";

                try
                {
                    using var cmd = new SqlCommand(query, conn, transaction);
                    cmd.Parameters.AddWithValue("@LogMessage", (object?)logData.LogMessage ?? string.Empty);
                    cmd.Parameters.AddWithValue("@ProcedureName", logData.ProcedureName);

                    await cmd.ExecuteNonQueryAsync();
                    transaction.Commit();

                    log.LogInformation($"Log inserted: {logData.ProcedureName} | {logData.LogMessage}");

                    return new OkObjectResult(new
                    {
                        status = "Success",
                        message = "Log inserted successfully."
                    });
                }
                catch (SqlException ex)
                {
                    transaction.Rollback();
                    log.LogError(ex, "Database transaction failed.");

                    return new ObjectResult(new
                    {
                        status = "Error",
                        message = "Database transaction failed."
                    })
                    { StatusCode = 500 };
                }
            }
            catch (Exception ex)
            {
                log.LogError(ex, "Internal Server Error");

                return new ObjectResult(new
                {
                    status = "Error",
                    message = "Internal Server Error"
                })
                { StatusCode = 500 };
            }
        }
    }
}
&lt;/LI-CODE&gt;
&lt;H4 data-line="188"&gt;Calling the Function from T‑SQL&lt;/H4&gt;
&lt;P data-line="189"&gt;Header-based key (preferred):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;DECLARE @jsonPayload NVARCHAR(MAX) = N'{
    "LogMessage": "Some error occurred",
    "ProcedureName": "ProcessData"
}';

EXEC sp_invoke_external_rest_endpoint
    @method  = 'POST',
    @url     = 'https://yourfunctionapp.azurewebsites.net/api/WriteLog',
    @headers = '{"Content-Type": "application/json", "x-functions-key": "&amp;lt;FUNCTION_KEY&amp;gt;"}',
    @payload = @jsonPayload;
&lt;/LI-CODE&gt;
&lt;P&gt;Supporting log tables referenced by the stored procedure and the Function:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;CREATE TABLE dbo.ErrorLogs
(
    Id             INT IDENTITY(1,1) PRIMARY KEY,
    ErrorMessage   NVARCHAR(MAX),
    ErrorProcedure NVARCHAR(255),
    ErrorLine      INT NULL,
    ErrorDateTime  DATETIME2 DEFAULT SYSDATETIME()
);
GO

CREATE TABLE dbo.ProcessLogs
(
    Id             INT IDENTITY(1,1) PRIMARY KEY,
    LogMessage     NVARCHAR(MAX),
    ProcedureName  NVARCHAR(255),
    LogDateTime    DATETIME2 DEFAULT SYSDATETIME()
);
GO
&lt;/LI-CODE&gt;
&lt;H2 data-line="277"&gt;Architecture diagram&lt;/H2&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-line="327"&gt;PL/SQL reference (Oracle)&lt;/H2&gt;
&lt;P&gt;For comparison, here’s the autonomous transaction pattern in Oracle. Note the autonomous pragma on the logger procedure.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- =========================================
-- Table: ERRORLOGS
-- =========================================
CREATE TABLE errorlogs (
    log_id       NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    procedure_nm VARCHAR2(100),
    error_msg    VARCHAR2(4000),
    log_time     TIMESTAMP DEFAULT SYSTIMESTAMP
);
/

-- =========================================
-- Procedure: LOG_ERROR (Autonomous Transaction Logger)
-- =========================================
CREATE OR REPLACE PROCEDURE log_error (
    p_procedure VARCHAR2,
    p_error     VARCHAR2
) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO errorlogs (procedure_nm, error_msg)
    VALUES (p_procedure, p_error);

    COMMIT;
END;
/
-- =========================================
-- Procedure: ADD_NUMBERS (Sample Procedure)
-- =========================================
CREATE OR REPLACE PROCEDURE add_numbers (
    p_x   NUMBER,
    p_y   NUMBER,
    p_sum OUT NUMBER
) IS
BEGIN
    p_sum := p_x + p_y;
EXCEPTION
    WHEN OTHERS THEN
        log_error('add_numbers', SQLERRM);
        RAISE;
END;
/
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Troubleshooting and FAQ&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why aren’t my logs saved after a rollback in Azure SQL?&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Inserts performed inside the same transaction are rolled back together. Use external logging via 'sp_invoke_external_rest_endpoint' to guarantee persistence regardless of the caller’s transaction.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Can I use CLR or linked servers for autonomous transactions in Azure SQL? &lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;No. CLR integration and linked servers for DML aren’t supported in Azure SQL Database. Prefer external REST endpoints.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;What are the security considerations for external logging?&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Avoid 'Anonymous' Functions. Prefer Azure AD; second-best is 'AuthorizationLevel.Function' with keys in headers. Validate payloads and avoid logging sensitive data unless encrypted. Store secrets in Key Vault.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;How do I monitor and validate the logging pipeline?&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use Application Insights in your Function. Emit structured logs for both success and failure. Configure alerts on failure rates and latency.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;What performance impact should I expect?&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;External calls add network latency. For high-volume scenarios, batch logs (see the sketch below), use asynchronous fire-and-forget patterns on the application side, and set sensible timeouts.&lt;/LI&gt;
&lt;/UL&gt;
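&lt;P&gt;For example, pending rows can be folded into a single JSON payload so that one REST call carries up to 100 entries (a sketch against the dbo.ProcessLogs table defined above):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Batch up to 100 pending log rows into one JSON array payload,
-- replacing 100 individual external calls with a single one.
DECLARE @jsonPayload NVARCHAR(MAX) =
(
    SELECT TOP (100)
           LogMessage    AS [message],
           ProcedureName AS [procedure],
           LogDateTime   AS [loggedAt]
    FROM dbo.ProcessLogs
    ORDER BY Id
    FOR JSON PATH
);
-- @jsonPayload can now be sent in one sp_invoke_external_rest_endpoint call.
&lt;/LI-CODE&gt;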
&lt;H3&gt;&lt;STRONG&gt;Summary&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;In summary, external logging offers durable visibility across services, making it ideal for distributed systems. Be sure to implement retries and validation, set up monitoring and alerts, and document the behavior clearly for your team to ensure everyone understands the trade-offs involved.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you have any feedback or suggestion, please reach out to us at &lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 05 Jan 2026 12:00:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/implementing-oracle-autonomous-transactions-in-azure-sql-for/ba-p/4448130</guid>
      <dc:creator>Vijay_Kumar</dc:creator>
      <dc:date>2026-01-05T12:00:00Z</dc:date>
    </item>
    <item>
      <title>Data Migration Strategies for Large-Scale Sybase to SQL Migrations Using SSMA, SSIS and ADF- Part 1</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/data-migration-strategies-for-large-scale-sybase-to-sql/ba-p/4455711</link>
      <description>&lt;H2 class="lia-align-justify"&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc297286694"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205974328"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Introduction&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;In today’s data-driven landscape, the migration of databases is a crucial task that requires meticulous planning and execution. Our recent project for migrating data from Sybase ASE to MSSQL set out to illuminate this process, using tools like Microsoft SQL Server Migration Assistant (SSMA),&amp;nbsp;Microsoft SQL Server&amp;nbsp;Integration Services (SSIS)&amp;nbsp;packages, and Azure Data Factory&amp;nbsp;(ADF).&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;We carefully designed our tests to cover a range of database sizes: 1GB, 10GB, 100GB, and 500GB. Each size brought its own set of challenges and valuable insights, guiding our migration strategies for a smooth transition.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For the smaller databases (1GB and 10GB), we utilized SSMA, which&amp;nbsp;demonstrated&amp;nbsp;strong efficiency and reliability in handling straightforward migrations. SSMA was particularly effective in converting database schemas and moving data with minimal complication. As we&amp;nbsp;scaled&amp;nbsp;up to&amp;nbsp;larger datasets (100GB and 500GB), we incorporated SSIS packages alongside Azure Data Factory to address the increased complexity and ensure robust performance throughout the migration process.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Our exercises yielded important findings related to performance metrics such as data throughput, error rates during transfer, and overall execution times for each migration approach. These insights helped us refine our methodologies and underscored the necessity of selecting the right tools for each migration scenario when transitioning from Sybase to MSSQL.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Ultimately, our experience highlighted that thorough analysis is essential for identifying potential bottlenecks and optimizing workflows, enabling successful and efficient migrations across databases of all sizes. The results reassure stakeholders that, with a well-considered approach and comprehensive testing, migrations can be executed seamlessly while maintaining the integrity of all datasets involved.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The strategy is split into two parts. Part 1 covers the overview, the Sybase source environment setup, the Azure SQL target environment setup, and the migration steps, elapsed times, and key learnings for SSMA and SSIS. The remaining sections are covered in &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/data-migration-strategies-for-large-scale-sybase-to-sql-migrations-using-ssma-ss/4471308" target="_blank" rel="noopener" data-lia-auto-title="Part 2" data-lia-auto-title-active="0"&gt;Part 2&lt;/A&gt;.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205974329"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Objectives of the Evaluation&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI class="lia-align-justify"&gt;Assess the migration duration for various database sizes.&amp;nbsp;&lt;/LI&gt;
&lt;LI class="lia-align-justify"&gt;Evaluate the capabilities and limitations of each tool.&lt;/LI&gt;
&lt;LI&gt;Identify optimal patterns for bulk data movement.&lt;/LI&gt;
&lt;LI&gt;Recommend the tool and parameter combination that gives the best results for each scenario and sample dataset.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205974330"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Overview of Tools&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG class="lia-align-justify"&gt;SQL Server Migration Assistant (SSMA) for SAP ASE&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Microsoft SQL Server Migration Assistant (SSMA) for Sybase Adaptive Server Enterprise (ASE) is a tool for migrating SAP ASE databases to SQL Server 2012 (11.x) through SQL Server 2022 (16.x) on Windows and Linux, Azure SQL Database, or Azure SQL Managed Instance. It supports schema conversion, data migration, and limited post-migration testing. SSMA converts Sybase ASE database objects (tables, views, stored procedures, etc.) to Azure SQL-compatible formats and migrates data using a client-side or server-side engine.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;SQL Server Integration Services (SSIS)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;SQL Server Integration Services is a platform for building enterprise-level data integration and data transformation solutions. It offers flexibility in handling non-compatible objects, custom transformations, and large-scale data transfers through its pipeline architecture. SSIS is particularly useful when SSMA cannot migrate certain objects or when additional transformation is required, for example for unparsed SQL, SET option conversion issues, identifier conversion issues, date format conversions, and non-ANSI joins.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Azure Data Factory (ADF)&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;ADF is a fully managed, serverless, cloud-based data integration service for orchestrating and automating data movement and transformation across on-premises and cloud environments. It is well-suited for hybrid migrations, large-scale data pipelines, and integration with Azure services like Azure SQL. ADF excels in scenarios requiring scalability and parallel processing.&amp;nbsp;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205974331"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Environment Setup&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;For testing, we used the following setup:&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Source Environment&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Sybase ASE Version&lt;/STRONG&gt;: 16.0 SP03&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;OS&lt;/STRONG&gt;: SUSE Linux Enterprise Server 15 SP6&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;VM Size&lt;/STRONG&gt;: Standard B12ms (12 vcpus, 48 GiB memory)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Target Environment&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;SQL Server 2022:&lt;/STRONG&gt; hosted on Azure VM&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;OS&lt;/STRONG&gt;: Windows Server 2022&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;VM Size&lt;/STRONG&gt;: Standard D16as v4 (16 vcpus, 64 GiB memory)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Network&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Both VMs hosted in the &lt;STRONG&gt;same Azure region&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Premium SSD LRS disks&lt;/STRONG&gt; for source and target&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Metrics&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;We evaluated the tools based on:&lt;/P&gt;
&lt;OL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Data Migration Time&lt;/STRONG&gt;: Time to migrate 1 GB, 10 GB, 100 GB (1 x 100 GB table) and 500 GB (5 x 100 GB tables) of data.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Scalability&lt;/STRONG&gt;: Performance with increased data volumes (up to 500 GB).&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;SPAN class="lia-text-color-10"&gt;SQL Server Migration Assistant (SSMA)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Migration steps:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Install SSMA for SAP ASE 10.3 or above and required drivers (Sybase ASE ODBC/ADO.NET providers).&lt;/LI&gt;
&lt;LI&gt;Create an SSMA project, configure source (Sybase ASE) and target (Azure SQL) connections.&lt;/LI&gt;
&lt;LI&gt;Assess database compatibility, customize data type mappings (e.g., Sybase TEXT to SQL Server NVARCHAR(MAX); see the sketch after this list), and convert schema.&lt;/LI&gt;
&lt;LI&gt;Migrate data using SSMA’s client-side or server-side engine.&lt;/LI&gt;
&lt;LI&gt;Validate migrated objects and data.&lt;/LI&gt;
&lt;/UL&gt;
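&lt;P class="lia-align-justify"&gt;As an illustration of the type-mapping step (a sketch; the table is hypothetical), a Sybase ASE TEXT column lands as NVARCHAR(MAX) on the target:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Sybase ASE source definition (for reference):
--   CREATE TABLE customer_notes (note_id INT NOT NULL, notes TEXT NULL)
-- Converted SQL Server target definition:
CREATE TABLE dbo.customer_notes
(
    note_id INT           NOT NULL PRIMARY KEY,
    notes   NVARCHAR(MAX) NULL  -- was TEXT on Sybase ASE
);
&lt;/LI-CODE&gt;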
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Test Results:&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-align-center lia-border-style-solid" border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Test Scenario&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Data Size&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Time Taken&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Throughput&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Threads&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Notes&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Single Copy&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;28 seconds&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~36 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Single threaded&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Single Copy&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;4 minutes 36 seconds&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~37 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Single threaded&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Parallel Copy&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;100 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;44 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~38 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;40&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Parallel threads, using project-level setting changes: migration engine, batch size, parallel data migration mode, and multi-loading&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Scalability Test&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;500 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;3 hours 44 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~38 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;40&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Parallel threads; throughput matched the 100 GB run because the sequential nature of SSMA’s processing engine limits parallelism&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Key learnings:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Excellent&lt;/STRONG&gt; for schema conversion and small data sets&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Not scalable&lt;/STRONG&gt; beyond 100 GB: memory pressure and slow, effectively single-threaded loads&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Limited configuration&lt;/STRONG&gt; for tuning bulk inserts&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Performance Tuning Insights:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Migration Engine: &lt;/STRONG&gt;We configured server-side data migration to optimize performance and reduce processing overhead. With client-side data migration, the SSMA client retrieves data from the source and bulk-inserts it into the target. With server-side data migration, the SSMA data migration engine (the bulk copy program, BCP) runs on the target SQL Server as a SQL Agent job, retrieving data from the source and inserting it directly, which avoids an extra client hop and performs better. When choosing the server-side method, you must specify which version of BCP to use (32-bit or 64-bit).&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Batch Size: &lt;/STRONG&gt;Data is migrated in batches from the source tables into Azure SQL tables within transactions. The batch size option determines how many rows are loaded into Azure SQL per transaction. By default, it is 10,000, but we increased it to 270,000 since our dataset contained 26,214,400 rows (around 100 GB).&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Parallel Data Migration Mode: &lt;/STRONG&gt;This option is available only when using the Client Side Data Migration Engine mode. It defines the number of parallel threads to be used during migration. By default, it is set to Auto (10 threads). To modify it, select Custom and specify the desired number of parallel threads. We changed this to 40 to get the best results.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Multi loading: &lt;/STRONG&gt;With Multi-Loading enabled, SSMA uses multiple parallel threads to load data batches at the same time, which can significantly speed up migration for large tables. It essentially breaks the source data into chunks (based on your batch size setting) and loads them concurrently.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;SPAN class="lia-text-color-10"&gt;SQL Server Integration Services (SSIS)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Migration steps:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Create an SSIS project in SQL Server Data Tools (SSDT).&lt;/LI&gt;
&lt;LI&gt;Configure ODBC / ADO .NET Source (Sybase ASE) and ADO .NET/OLE DB Destination (Azure SQL) with appropriate drivers.&lt;/LI&gt;
&lt;LI&gt;Build Control flow and data flow tasks for each table, applying transformations for incompatible data types or business logic.&lt;/LI&gt;
&lt;LI&gt;Execute packages in parallel for large tables, optimizing buffer size and commit intervals.&lt;/LI&gt;
&lt;LI&gt;Monitor and log errors for troubleshooting.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Test Results:&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-align-center lia-border-style-solid" border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Test Scenario&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Data Size&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Time Taken&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Throughput&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Threads&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Notes&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Single Copy&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;31 seconds&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~33 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Single threaded&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Single Copy&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;10 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;4 minutes 16 seconds&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~40 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Single threaded&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Parallel Copy&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;100 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;38 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~44 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;5&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Parallel threads, regulating MaxConcurrentExecutables, DefaultBufferMaxRows, EngineThreads, and Azure SQL maximum server memory&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Scalability Test&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;500 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;3 hours 12 minutes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~44 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;5&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Parallel threads; performance improved with larger buffer sizes (100 MB) and SSD I/O&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Key learnings:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Very fast&lt;/STRONG&gt; for large data volumes with tuning&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Requires development time&lt;/STRONG&gt; (package design, error handling)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Bottleneck&lt;/STRONG&gt;: network throughput on the 500 GB run&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Best suited&lt;/STRONG&gt; for 10–500 GB on-prem or IaaS migrations&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Performance Tuning Insights:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;DefaultBufferSize: &lt;/STRONG&gt;The task’s buffer settings can be configured using the DefaultBufferSize property, which defines the buffer size, and the &lt;STRONG&gt;DefaultBufferMaxRows&lt;/STRONG&gt; property, which specifies the maximum number of rows per buffer. By default, the buffer size is 10 MB (with a maximum of 100 MB), and the default maximum number of rows is 10,000.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;DefaultBufferMaxRows: &lt;/STRONG&gt;The data flow engine begins the task of sizing its buffers by calculating the estimated size of a single row of data. It then multiplies the estimated size of a row by the value of &lt;STRONG&gt;DefaultBufferMaxRows&lt;/STRONG&gt; to obtain a preliminary value for the buffer size.&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="lia-align-justify"&gt;If the result is greater than the value of &lt;STRONG&gt;DefaultBufferSize&lt;/STRONG&gt;, the engine reduces the number of rows.&lt;/LI&gt;
&lt;LI class="lia-align-justify"&gt;If the result is less than the minimum buffer size calculated internally, the engine increases the number of rows.&lt;/LI&gt;
&lt;LI class="lia-align-justify"&gt;If the result falls between the minimum buffer size and the value of &lt;STRONG&gt;DefaultBufferSize&lt;/STRONG&gt;, the engine sizes the buffer as close as possible to the estimated row size times the value of &lt;STRONG&gt;DefaultBufferMaxRows&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;If sufficient memory is available, it is better to use fewer large buffers instead of many small ones. In other words, performance improves when the number of buffers is minimized, and each buffer holds as many rows as possible.&lt;/P&gt;
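&lt;P class="lia-align-justify"&gt;A back-of-envelope example of the sizing rule (the 4,096-byte row size is assumed for illustration):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Estimated row size 4,096 bytes x DefaultBufferMaxRows 10,000 = ~39 MB,
-- which exceeds the default DefaultBufferSize of 10 MB, so the engine
-- reduces the rows per buffer to fit:
SELECT (10 * 1024 * 1024) / 4096 AS RowsPerBuffer;  -- 2560 rows per 10 MB buffer
&lt;/LI-CODE&gt;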
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;IsSorted: &lt;/STRONG&gt;Sorting in an SSIS task is inherently slow, so avoiding unnecessary sorts improves data flow performance. If the source data is already ordered, set the &lt;STRONG&gt;IsSorted&lt;/STRONG&gt; property on the upstream component’s output to &lt;STRONG&gt;True&lt;/STRONG&gt; so that downstream components do not re-sort it.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;ADO .NET Source:&lt;/STRONG&gt; When retrieving data from a view with an ADO .NET or OLE DB source, choose &lt;EM&gt;SQL command&lt;/EM&gt; as the data access mode and provide a SELECT statement; an explicit SELECT accesses the view in the most efficient way.&lt;/P&gt;
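&lt;P class="lia-align-justify"&gt;For example (a sketch; the view and column names are illustrative):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- SQL command access mode: name only the columns the pipeline needs,
-- so less data crosses the source adapter.
SELECT order_id, customer_id, order_date, amount
FROM dbo.v_orders
WHERE order_date &gt;= '2024-01-01';
&lt;/LI-CODE&gt;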
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;OLEDB Destination: &lt;/STRONG&gt;Several OLE DB Destination settings can significantly impact data transfer performance (a rough T-SQL analogue follows the list):&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Data Access Mode – This setting offers the &lt;EM&gt;Fast Load&lt;/EM&gt; option, which internally uses a BULK INSERT statement to load data into the destination table, instead of executing individual INSERT statements for each row. Unless you have a specific reason to change it, keep the default &lt;EM&gt;Fast Load&lt;/EM&gt; option enabled. When using &lt;EM&gt;Fast Load&lt;/EM&gt;, additional performance-related settings become available (listed below).&lt;/LI&gt;
&lt;LI&gt;Keep Identity – By default, this is unchecked. If the destination table has an identity column, SQL Server generates identity values automatically. Checking this option ensures that identity values from the source are preserved and inserted into the destination.&lt;/LI&gt;
&lt;LI&gt;Keep Nulls – By default, this is unchecked. If a NULL value is encountered in the source and the target column has a default constraint, the default value will be inserted. Enabling this option preserves the NULL values from the source instead of applying the default constraint.&lt;/LI&gt;
&lt;LI&gt;Table Lock – This is checked by default, meaning a table-level lock is acquired during data load instead of multiple row-level locks. This prevents lock escalation issues and generally improves performance. Keep this enabled unless the table is actively being used by other processes at the same time.&lt;/LI&gt;
&lt;LI&gt;Check Constraints – Checked by default. This validates incoming data against the destination table’s constraints. If you are confident the data will not violate constraints, unchecking this option can improve performance by skipping the validation step.&lt;/LI&gt;
&lt;LI&gt;Rows per Batch – The default value is -1, which means all incoming rows are treated as a single batch. You can change this to a positive integer to divide the incoming rows into multiple batches, where the value specifies the maximum number of rows per batch.&lt;/LI&gt;
&lt;LI&gt;Maximum Insert Commit Size – The default value is 2147483647 (the maximum for a 4-byte integer), which commits all rows in a single transaction once the load completes successfully. You can set this to a positive integer to commit data in smaller chunks. While committing more frequently does add overhead to the data flow engine, it helps reduce pressure on the transaction log and tempdb, preventing excessive growth during high-volume data loads.&lt;/LI&gt;
&lt;/UL&gt;
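&lt;P class="lia-align-justify"&gt;A rough T-SQL analogue of these Fast Load settings (a sketch; the file path and table are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- BULK INSERT maps closely onto the Fast Load options above; note that
-- constraints are only validated if CHECK_CONSTRAINTS is specified.
BULK INSERT dbo.orders
FROM 'C:\staging\orders.dat'
WITH
(
    TABLOCK,             -- Table Lock
    KEEPIDENTITY,        -- Keep Identity
    KEEPNULLS,           -- Keep Nulls
    BATCHSIZE = 270000   -- Rows per Batch
);
&lt;/LI-CODE&gt;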
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;DelayValidation: &lt;/STRONG&gt;SSIS uses validation to determine whether the package could fail at runtime, and it does so in two passes. Package validation (early validation) checks the package and all of its components before execution begins; component validation (late validation) checks each component just before it runs. Setting DelayValidation to TRUE skips early validation, so the component is validated only at the component level (late validation) during package execution.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;MaxConcurrentExecutables&lt;/STRONG&gt;: This package-level property specifies the number of executables (the tasks inside the package) that can run in parallel, that is, the number of threads the SSIS runtime engine can create to execute the package’s executables concurrently. The default value of -1 allows the number of logical processors plus two.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;EngineThreads&lt;/STRONG&gt;: This data flow task property suggests how many source threads (which pull data from the source) and worker threads (which transform data and load it into the destination) the data flow pipeline engine can create. A value of 5 allows up to 5 source threads and up to 5 worker threads. Note that the property is only a hint to the pipeline engine, which may create fewer or more threads if required.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;IsolationLevel&lt;/STRONG&gt;: In SQL Server Integration Services (SSIS), IsolationLevel is a property that defines how a transaction within a package interacts with other concurrent transactions in the database, that is, the degree to which one transaction is isolated from the effects of others. The default is ‘Serializable’, which locks the entire data set being read and holds the lock until the transaction completes. Instead, set it to ‘ReadUncommitted’ or ‘ReadCommitted’. ‘ReadUncommitted’ reads data without waiting for other transactions to finish and can read rows that are not yet committed (&lt;EM&gt;dirty reads&lt;/EM&gt;). ‘ReadCommitted’ reads only committed rows, preventing dirty reads by waiting until other transactions commit or roll back. Use ‘ReadUncommitted’ when you need speed and can tolerate transiently inaccurate data; otherwise use ‘ReadCommitted’, which is slower because it must wait for locks to be released.&lt;/P&gt;
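&lt;P class="lia-align-justify"&gt;The SSIS values map onto the database engine’s isolation levels; the source-query analogue of ‘ReadUncommitted’ looks like this (table name illustrative):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Read without waiting on other transactions' locks; dirty reads possible.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT COUNT(*) AS ApproxRows
FROM dbo.orders;  -- may include rows that are never committed
&lt;/LI-CODE&gt;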
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;TransactionOption: &lt;/STRONG&gt;The TransactionOption property in SQL Server Integration Services (SSIS) controls how tasks, containers, or the entire package participate in transactions to ensure data integrity. It determines whether a task or container starts a transaction, joins an existing one, or does not participate in any transaction, and it is available at the package level, the container level (e.g., For Loop, Foreach Loop, Sequence), and on individual Control Flow tasks (e.g., Execute SQL Task, Data Flow Task). Set this to ‘NotSupported’ so that the task or container does not participate in any transaction, even if a parent container or package has started one; if a transaction exists at a higher level, the task or container operates outside of it. This is the fastest option because there is no transaction overhead.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Database Recovery Model:&lt;/STRONG&gt; The target database is configured to use the Bulk-logged recovery model, which minimally logs bulk operations. This should be significantly more performant than the Full recovery model, provided the bulk insert meets the criteria required for minimal logging. The criteria for the target table are as follows (a sketch of the load window follows the list):&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;The target table is not being replicated.&lt;/LI&gt;
&lt;LI&gt;Table locking is specified (using TABLOCK).&lt;/LI&gt;
&lt;LI&gt;If the table has no indexes, data pages are minimally logged.&lt;/LI&gt;
&lt;LI&gt;If the table does not have a clustered index but has one or more non-clustered indexes, data pages are always minimally logged. How index pages are logged depends on whether the table is empty:
&lt;UL&gt;
&lt;LI&gt;If the table is empty, index pages are minimally logged.&lt;/LI&gt;
&lt;LI&gt;If the table is non-empty, index pages are fully logged.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;If the table has a clustered index and is empty, both data and index pages are minimally logged. In contrast, if the table has a clustered index and is non-empty, both data and index pages are fully logged regardless of the recovery model.&lt;/LI&gt;
&lt;/UL&gt;
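&lt;P class="lia-align-justify"&gt;A minimal sketch of such a load window (database, table, and column names are placeholders):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Switch the target database to BULK_LOGGED for the load window, then load
-- with TABLOCK so the insert can qualify for minimal logging.
ALTER DATABASE [TargetDb] SET RECOVERY BULK_LOGGED;

INSERT INTO dbo.orders WITH (TABLOCK)
SELECT order_id, customer_id, order_date, amount
FROM dbo.orders_staging;

ALTER DATABASE [TargetDb] SET RECOVERY FULL;  -- restore afterwards, then back up the log
&lt;/LI-CODE&gt;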
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Pulling High Volumes of Data: &lt;/STRONG&gt;To speed up the data migration, we first turn the target table into a heap by dropping all of its indexes, eliminating index-maintenance overhead during inserts. We then transfer the data into the heap, which is significantly faster in the absence of indexes, and finally recreate the indexes to restore the original structure and query performance, as sketched below. This drop-load-recreate sequence substantially accelerates the overall migration compared with maintaining indexes throughout.&lt;/P&gt;
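&lt;P class="lia-align-justify"&gt;Sketched in T-SQL (index, table, and column names are illustrative):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- 1. Drop indexes so the target becomes a heap (script them out first).
DROP INDEX IX_orders_customer ON dbo.orders;
ALTER TABLE dbo.orders DROP CONSTRAINT PK_orders;  -- clustered PK, if present

-- 2. Bulk load into the heap: no index maintenance overhead.
INSERT INTO dbo.orders WITH (TABLOCK)
SELECT order_id, customer_id, order_date, amount
FROM dbo.orders_staging;

-- 3. Recreate the indexes to restore the original structure.
ALTER TABLE dbo.orders ADD CONSTRAINT PK_orders PRIMARY KEY CLUSTERED (order_id);
CREATE INDEX IX_orders_customer ON dbo.orders (customer_id);
&lt;/LI-CODE&gt;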
&lt;H2 class="lia-align-justify"&gt;&lt;SPAN class="lia-text-color-10"&gt;Feedback and suggestions&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;We hope this post has helped you configure your migration solution and choose the right options to successfully migrate your databases.&amp;nbsp;The remaining steps are covered in &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/data-migration-strategies-for-large-scale-sybase-to-sql-migrations-using-ssma-ss/4471308" data-lia-auto-title="Part 2" data-lia-auto-title-active="0" target="_blank"&gt;&lt;STRONG&gt;Part 2&lt;/STRONG&gt;&lt;/A&gt;.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Note: For additional information about migrating various source databases to Azure, see the &lt;A class="lia-external-url" href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 12 Dec 2025 16:33:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/data-migration-strategies-for-large-scale-sybase-to-sql/ba-p/4455711</guid>
      <dc:creator>Ankur_Sinha</dc:creator>
      <dc:date>2025-12-12T16:33:49Z</dc:date>
    </item>
    <item>
      <title>Data Migration Strategies for Large-Scale Sybase to SQL Migrations Using SSMA, SSIS and ADF- Part 2</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/data-migration-strategies-for-large-scale-sybase-to-sql/ba-p/4471308</link>
      <description>&lt;H2 class="lia-align-justify"&gt;&lt;SPAN class="lia-text-color-10"&gt;Introduction&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;In today’s data-driven landscape, the migration of databases is a crucial task that requires meticulous planning and execution. Our recent project for migrating data from Sybase ASE to MSSQL set out to illuminate this process, using tools like Microsoft SQL Server Migration Assistant (SSMA),&amp;nbsp;Microsoft SQL Server&amp;nbsp;Integration Service (SSIS)&amp;nbsp;packages, and Azure Data Factory&amp;nbsp;(ADF).&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;We carefully designed our tests to cover a range of database sizes: 1GB, 10GB, 100GB, and 500GB. Each size brought its own set of challenges and valuable insights, guiding our migration strategies for a smooth transition.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For the smaller databases (1GB and 10GB), we utilized SSMA, which&amp;nbsp;demonstrated&amp;nbsp;strong efficiency and reliability in handling straightforward migrations. SSMA was particularly effective in converting database schemas and moving data with minimal complication. As we&amp;nbsp;scaled&amp;nbsp;up&amp;nbsp;larger datasets (100GB and 500GB), we incorporated SSIS packages alongside Azure Data Factory to address the increased complexity and ensure robust performance throughout the migration process.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Our exercises yielded important findings related to performance metrics such as data throughput, error rates during transfer, and overall execution times for each migration approach. These insights helped us refine our methodologies and underscored the necessity of selecting the right tools for each migration scenario when transitioning from Sybase to MSSQL.&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Ultimately, our experience highlighted that thorough analysis is essential for identifying potential bottlenecks and optimizing workflows, enabling successful and efficient migrations across databases of all sizes. The results reassure stakeholders that, with a well-considered approach and comprehensive testing, migrations can be executed seamlessly while maintaining the integrity of all datasets involved.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;There are 2 parts in the solution. &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/data-migration-strategies-for-large-scale-sybase-to-sql-migrations-using-ssma-ss/4455711" data-lia-auto-title="Part 1" data-lia-auto-title-active="0" target="_blank"&gt;&lt;STRONG&gt;Part 1&lt;/STRONG&gt;&lt;/A&gt; covers SSMA and SSIS. Part 2 covers the Azure Data Factory (ADF), tests results, performance improvement guidelines and conclusion.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;SPAN class="lia-text-color-10"&gt;Azure Data Factory (ADF)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Migration steps:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Set up an Azure Data Factory instance and Integration Runtime (IR) for on-premises connectivity.&lt;/LI&gt;
&lt;LI&gt;Create pipelines with Copy data activities, mapping Sybase ASE tables to SQL Server/Azure SQL.&lt;/LI&gt;
&lt;LI&gt;Configure parallel copy settings and staging storage (Azure Blob Storage) for large datasets (if required).&lt;/LI&gt;
&lt;LI&gt;Monitor pipeline execution and retry failed activities.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Test Results:&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-align-center lia-border-style-solid" border="1" style="border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Test Scenario&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Data Size&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Time Taken&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Throughput&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Threads&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Notes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Single Copy&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;1 GB&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;40 seconds&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;~25 Mbps&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;1&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;Single Threaded&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Single Copy&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;10 GB&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;4 minutes 22 seconds&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;~40 Mbps&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;1&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;&amp;nbsp;Single Threaded&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Parallel Copy&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;100 GB&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;39 minutes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;~44 Mbps&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;16&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Parallel Threads&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Scalability Test&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;500 GB&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;3 hours 10 minutes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;~45 Mbps&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;16&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;SPAN data-contrast="none"&gt;Parallel Threads - ADF scaled better due to cloud elasticity&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;201341983&amp;quot;:0,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:160,&amp;quot;335559740&amp;quot;:276}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Key learnings:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Cloud-native, scalable&lt;/STRONG&gt;: handled 500 GB with ease&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Parallel copy&lt;/STRONG&gt; and &lt;STRONG&gt;batch tuning&lt;/STRONG&gt; make it faster as volume increases&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Auto-scaling&lt;/STRONG&gt; prevents many typical OOM (Out of Memory) errors seen with SSMA&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Recommended&lt;/STRONG&gt; for cloud targets (Azure SQL MI, Azure SQL DB)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Performance Tuning Insights:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In our setup we added a lookup, a foreach and one copy data within our pipeline.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Lookup: &lt;/STRONG&gt;In the Settings tab, select the SQL dataset as the source. Instead of choosing the entire table, go to the “Use query” section and provide the query shown below. This query uses a recursive Common Table Expression (CTE) to dynamically generate partition ranges across a large integer sequence (representing the total number of rows).&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
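&lt;P class="lia-align-justify"&gt;If the screenshot is unavailable, the following is a minimal sketch of such a partition-range query. The table name (Table_1_100GB) comes from this walkthrough, while the one-million-row chunk size and the exact shape of the CTE are illustrative assumptions:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Minimal sketch (assumed chunk size): generate contiguous row ranges
-- over a large table with a recursive CTE.
DECLARE @rows  BIGINT = (SELECT COUNT_BIG(*) FROM dbo.Table_1_100GB);
DECLARE @chunk BIGINT = 1000000;

WITH Ranges (PartitionId, StartRow, EndRow) AS (
    SELECT CAST(1 AS BIGINT), CAST(1 AS BIGINT),
           CASE WHEN @chunk &gt; @rows THEN @rows ELSE @chunk END
    UNION ALL
    SELECT PartitionId + 1, EndRow + 1,
           CASE WHEN EndRow + @chunk &gt; @rows THEN @rows ELSE EndRow + @chunk END
    FROM Ranges
    WHERE EndRow &lt; @rows
)
SELECT PartitionId, StartRow, EndRow
FROM Ranges
OPTION (MAXRECURSION 0);  -- lift the default 100-level recursion cap&lt;/LI-CODE&gt;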
&lt;P class="lia-align-justify"&gt;Here is a sample output of this query for top 20 rows–&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;ForEach: &lt;/STRONG&gt;Use this query in the Pipeline expression builder which is a reference to the Lookup output in our pipeline.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt; &lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;Copydata: &lt;/STRONG&gt;Use below query at source which allows you to dynamically query different partitions of a large table (like Table_1_100GB) in parallel or sequentially.&lt;/P&gt;
&lt;img /&gt;
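&lt;P class="lia-align-justify"&gt;As a minimal sketch (the key column RowId is an illustrative assumption), the source query combines T-SQL with ADF dynamic content so that each ForEach iteration copies one range from the Lookup output:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Minimal sketch: per-iteration source query for the Copy data activity.
-- @{item().StartRow} and @{item().EndRow} are ADF dynamic-content
-- references to the current ForEach item from the Lookup output.
SELECT *
FROM dbo.Table_1_100GB
WHERE RowId BETWEEN @{item().StartRow} AND @{item().EndRow};&lt;/LI-CODE&gt;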
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;On Sink, change “write batch size” and “Max concurrent connections as per the rowset and assigned memory. Optimizing these values can improve performance and reduce overhead during data transfers. For small rows, increase writeBatchSize to reduce batch overhead and improve throughput. For large rows, use a smaller value to avoid memory or database overload. If the source data exceeds the specified batch size, ADF processes data in multiple batches automatically.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Additionally adjust “Maximum data integration unit” and “Degree of copy parallelism”. A Data Integration Unit (DIU) is a measure that represents the power of a single unit in Azure Data Factory and Synapse pipelines. Power is a combination of CPU, memory, and network resource allocation. DIU only applies to Azure integration runtime. DIU doesn't apply to self-hosted integration runtime. While Degree of copy parallelism determines how many parallel copy activities can run simultaneously, optimizing the data transfer process.&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;SPAN class="lia-text-color-10"&gt;Cumulative Tests Result&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;&lt;STRONG&gt;&lt;EM&gt;&lt;U&gt;Disclaimer&lt;/U&gt;&lt;/EM&gt;:&lt;/STRONG&gt; &lt;EM&gt;All test results published herein are provided solely for reference purposes and reflect performance under ideal conditions within our controlled environment. Actual performance in the user's environment may vary significantly due to factors including, but not limited to, network speed, system bottlenecks, hardware limitations, CPU cores, memory, disk I/O, firewall configurations, and other environmental variables. On the source and target databases also multiple&amp;nbsp;&lt;/EM&gt;&lt;EM&gt;performance optimization and configuration adjustments have been implemented&lt;/EM&gt;&lt;EM&gt; to enhance the migration efficiency. We strongly recommend that users conduct their own testing to determine performance under their specific conditions.&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-justify"&gt;&lt;table class="lia-border-style-ridge" border="1" style="width: 1010px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Data Size&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Best Performer&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Observation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;1 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P class="lia-align-left"&gt;All tools performed efficiently&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Minimal overhead; no significant performance difference.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;10 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;SSIS&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Optimized batch processing led to better performance.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;100 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;SSIS and ADF&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Both benefited from parallelism.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;500 GB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;ADF and SSIS&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;ADF's scalability and retry mechanisms proved valuable. SSIS was equivalent with tuned data flow components.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-justify"&gt;&lt;table class="lia-border-style-ridge" border="1" style="width: 1010px; border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG class="lia-align-center"&gt;Data Volume&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Row Count&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;SSMA Time / Speed&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;SSIS Time / Speed&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;ADF Time / Speed&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;1 GB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;262,144&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;28 sec / ⚡ &lt;STRONG&gt;36 MB/s&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;31 sec / ⚡33 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;40 sec / ⚡25 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;10 GB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;2,621,440&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;4 min 36 sec / ⚡37 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;4 min 16 sec / ⚡&lt;STRONG&gt;40 MB/s&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;4 min 22 sec / ⚡40 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;100 GB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;26,214,400&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;44 min / ⚡38 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;38 min / ⚡&lt;STRONG&gt;44 MB/s&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;39 min / ⚡44 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;500 GB&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;26,214,400 x 5&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;3 hr 44 min / ⚡38 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;3 hr 12 min / ⚡44 MB/s&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;3 hr 10 min / ⚡&lt;STRONG&gt;45 MB/s&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Performance Improvement Guidelines&lt;/SPAN&gt;&lt;/H2&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Use SSMA for Schema Conversion&lt;/STRONG&gt;: SSMA is the primary tool for automating schema conversion. Customize data type mappings (e.g., Sybase DATETIME to SQL Server DATETIME2) and handle case-sensitive databases carefully.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Leverage SSIS for Complex Data Transformations&lt;/STRONG&gt;: Use SSIS for tables with non-compatible data types or when business logic requires transformation. Optimize performance with parallel tasks and appropriate buffer settings.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Use ADF for Large-Scale or Hybrid Migrations&lt;/STRONG&gt;: ADF is ideal for large datasets or migrations to Azure SQL Database. Use staging storage and parallel copy to maximize throughput. Ensure stable network connectivity for on-premises to cloud transfers.
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Tips to improve ADF performance&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;:&lt;/EM&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use &lt;STRONG&gt;staging areas&lt;/STRONG&gt; (e.g., Azure Blob Storage) to offload source systems and speed up data transfers.&lt;/LI&gt;
&lt;LI&gt;Enable &lt;STRONG&gt;parallel copy&lt;/STRONG&gt; in Copy Activity to increase throughput.&lt;/LI&gt;
&lt;LI&gt;Use the &lt;STRONG&gt;Integration Runtime&lt;/STRONG&gt; closest to your data source to reduce network latency.&lt;/LI&gt;
&lt;LI&gt;Enable &lt;STRONG&gt;data partitioning&lt;/STRONG&gt; on large tables to parallelize read/write operations.&lt;/LI&gt;
&lt;LI&gt;Adjust &lt;STRONG&gt;degree of parallelism&lt;/STRONG&gt; to match your compute capacity.&lt;/LI&gt;
&lt;LI&gt;Use &lt;STRONG&gt;Self-Hosted IR&lt;/STRONG&gt; or &lt;STRONG&gt;Azure IR with higher compute&lt;/STRONG&gt; for large or complex migrations.&lt;/LI&gt;
&lt;LI&gt;Enable &lt;STRONG&gt;Auto-Scaling&lt;/STRONG&gt; where supported to handle spikes efficiently.&lt;/LI&gt;
&lt;LI&gt;Monitor IR utilization to avoid under-provisioning or over-provisioning.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Please refer to the links below for more details on ADF:&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://aka.ms/SybaseASETransferTableDataCopyToSQL" target="_blank" rel="noopener"&gt;Sybase ASE to Azure SQL full and incremental data copy using ASE Transfer Table Tool and ADF&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-performance-features#parallel-copy" target="_blank" rel="noopener"&gt;Copy activity performance optimization features - Azure Data Factory &amp;amp; Azure Synapse | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/db2-to-azure-sql-db-parallel-data-copy-by-generating-adf-copy-activities-dynamic/3605541" target="_blank" rel="noopener" data-lia-auto-title="Db2 to Azure SQL fast data copy using ADF" data-lia-auto-title-active="0"&gt;Db2 to Azure SQL fast data copy using ADF&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Migration Readiness Testing:&lt;/STRONG&gt; Conduct performance testing on production-scale environments prior to the actual migration to obtain an accurate baseline of system behavior and identify potential bottlenecks under real workload conditions.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Hybrid Approach:&lt;/STRONG&gt; Combine SSMA for schema conversion, SSIS for complex data migrations, and ADF for orchestration in large-scale or cloud-based scenarios. For example, use SSMA to convert schemas, SSIS to migrate problematic tables, and ADF to orchestrate the overall pipeline.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Validation&lt;/STRONG&gt;: Post-migration, validate data integrity using checksums or row counts and test stored procedures for functional equivalence. Use SQL Server Management Studio (SSMS) for debugging. Finally, the Microsoft Database Compare Utility can compare multiple source and target databases (a minimal validation sketch follows this list).&lt;/LI&gt;
&lt;/OL&gt;
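&lt;P class="lia-align-justify"&gt;As a minimal sketch of the row-count and checksum idea (the table name is hypothetical, and the CHECKSUM functions shown are the SQL Server side of the comparison; the source side needs an equivalent query):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Minimal sketch: run comparable statements on source and target and
-- compare the results. dbo.Orders is an illustrative table name.
SELECT COUNT_BIG(*)              AS row_count,
       CHECKSUM_AGG(CHECKSUM(*)) AS table_checksum  -- order-insensitive
FROM dbo.Orders;&lt;/LI-CODE&gt;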
&lt;H2 class="lia-align-justify"&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205974336"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Challenges and Mitigations&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;&lt;STRONG&gt;Sybase-Specific Syntax&lt;/STRONG&gt;: SSMA may fail to convert complex stored procedures with Sybase-specific T-SQL. Manually rewrite these using SQL Server T-SQL.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;LOB Data&lt;/STRONG&gt;: Large Object (LOB) data types (e.g., TEXT, IMAGE) may cause truncation errors. Map to NVARCHAR(MAX) or VARBINARY(MAX) and validate data post-migration.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Network Latency in ADF&lt;/STRONG&gt;: For on-premises to Azure migrations, ensure high-bandwidth connectivity or use Azure ExpressRoute to minimize latency.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Case Sensitivity&lt;/STRONG&gt;: Sybase ASE databases may be case-sensitive, while SQL Server defaults to case-insensitive collations. Configure SQL Server collations (e.g., SQL_Latin1_General_CP1_CS_AS) to match source behavior, as in the example after this list.&lt;/LI&gt;
&lt;/UL&gt;
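&lt;P class="lia-align-justify"&gt;For instance, the target database can be created with a case-sensitive collation up front (the database name is a placeholder):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Minimal sketch: match a case-sensitive Sybase ASE source by creating
-- the target database with a case-sensitive (CS) collation.
CREATE DATABASE TargetDb COLLATE SQL_Latin1_General_CP1_CS_AS;&lt;/LI-CODE&gt;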
&lt;H2 class="lia-align-justify"&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205974337"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Conclusion&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;SSMA, SSIS, and ADF each offer unique strengths for migrating Sybase ASE to SQL Server, Azure SQL Database &lt;SPAN data-contrast="auto"&gt;or Azure SQL Managed Instance&lt;/SPAN&gt;. SSMA excels in schema conversion, SSIS in complex data transformations, and ADF in scalability and cloud integration. A hybrid approach, leveraging SSMA for schema conversion, SSIS for problematic data, and ADF for orchestration, often yields the best results. Evaluation shows ADF’s superior scalability for large datasets, while SSIS provides flexibility for complex migrations.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Proper planning, including schema assessment, data type mapping, and performance tuning, is critical for a successful migration. For further details refer to Microsoft’s official documentation:&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;SSMA for Sybase: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/ssma/sybase/sql-server-migration-assistant-for-sybase-sybasetosql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;SQL Server Migration Assistant for Sybase (SybaseToSQL) - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/ssma/sybase/project-settings-migration-sybasetosql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;&lt;SPAN class="lia-text-color-21"&gt;SSMA Project Settings:&lt;/SPAN&gt;&lt;/A&gt; &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/ssma/sybase/project-settings-migration-sybasetosql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;Project Settings (Migration) (SybaseToSQL) - SQL Server | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;SSIS: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver17" target="_blank" rel="noopener"&gt;SQL Server Integration Services - SQL Server Integration Services (SSIS) | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;ADF: &lt;A class="lia-external-url" href="https://azure.microsoft.com/en-in/products/data-factory" target="_blank" rel="noopener"&gt;Azure Data Factory - Data Integration Service | Microsoft Azure&lt;/A&gt;&lt;/P&gt;
&lt;H2 class="lia-align-justify"&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205974338"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Feedback and suggestions&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Note: For additional information about migrating various source databases to Azure, see the &lt;A class="lia-external-url" href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 12 Dec 2025 16:29:05 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/data-migration-strategies-for-large-scale-sybase-to-sql/ba-p/4471308</guid>
      <dc:creator>Ankur_Sinha</dc:creator>
      <dc:date>2025-12-12T16:29:05Z</dc:date>
    </item>
    <item>
      <title>Faster Data Copy between Source and target for partitioned table using Partition Switch in ADF</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/faster-data-copy-between-source-and-target-for-partitioned-table/ba-p/4456585</link>
      <description>&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc207650893"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Introduction&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;This blog post presents a comprehensive Azure Data Factory (ADF) solution for automating the migration of partitioned tables from IBM Db2 z/OS to Azure SQL Database. The solution consists of two main components: a Partition Discovery &amp;amp; Preparation Pipeline and a Parallel Copy Pipeline with partition switching capabilities. This approach significantly reduces migration time and ensures data integrity while maintaining partition structure in the target environment. Because we use one separate table per partition for the data copy, the workload is divided across multiple tables, which reduces the complexity of the data migration. For a small number of partitioned or non-partitioned tables, there is &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/db2-to-azure-sql-db-parallel-data-copy-by-generating-adf-copy-activities-dynamic/3605541" target="_blank" rel="noopener" data-lia-auto-title="another approach" data-lia-auto-title-active="0"&gt;another approach&lt;/A&gt; in which parallel copy activities can be implemented.&lt;/P&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc207650894"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Challenges in migrating of large, partitioned tables&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;Migrating large, partitioned tables from IBM Db2 z/OS to Azure SQL Database presents unique challenges:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Complex Partition Structures: Db2 z/OS supports various partitioning schemes (range, hash, list) that need to be properly mapped&lt;/LI&gt;
&lt;LI&gt;Large Data Volumes: Enterprise tables can contain billions of rows across hundreds of partitions&lt;/LI&gt;
&lt;LI&gt;Minimal Downtime Requirements: Business-critical applications require near-zero downtime migrations&lt;/LI&gt;
&lt;LI&gt;Data Integrity: Ensuring consistency across all partitions during migration&lt;/LI&gt;
&lt;LI&gt;Performance Optimization: Maximizing throughput while managing resource consumption&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This solution addresses most of the challenges above in an intelligent, automated way:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Automatically discovers partition metadata from Db2 system tables&lt;/LI&gt;
&lt;LI&gt;Creates optimized migration plans based on partition characteristics&lt;/LI&gt;
&lt;LI&gt;Creates one temporary persistent table per partition (which can be dropped after the process is complete) with the same partition function and scheme to help with the migration (a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/relational-databases/tables/create-check-constraints?view=sql-server-ver17" target="_blank" rel="noopener"&gt;check constraint&lt;/A&gt; is another way to implement this; for the sake of simplicity, we clone the base table)&lt;/LI&gt;
&lt;LI&gt;Executes parallel data transfer with partition-level granularity&lt;/LI&gt;
&lt;LI&gt;Implements &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-table-transact-sql?view=sql-server-ver17#c-switch-partitions-between-tables" target="_blank" rel="noopener"&gt;partition switching&lt;/A&gt; for minimal downtime&lt;/LI&gt;
&lt;LI&gt;Provides comprehensive monitoring and error handling&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-10"&gt;Solution Overview&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;The solution has two phases both implemented using an automated ADF Pipeline.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Phase 1: Discovery &amp;amp; Preparation&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Using a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/connector-db2?tabs=data-factory" target="_blank" rel="noopener"&gt;Db2 copy activity&lt;/A&gt;, extract necessary data from Db2 System tables.&lt;/LI&gt;
&lt;LI&gt;Extracts partition metadata (boundaries, row counts, sizes)&lt;/LI&gt;
&lt;LI&gt;Creates migration control tables in Azure SQL Database&lt;/LI&gt;
&lt;LI&gt;Generates optimized copy strategies per partition&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Phase 2: Copy and Partition switch&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Executes partition-level data transfers in parallel&lt;/LI&gt;
&lt;LI&gt;Implements partition switching for seamless integration&lt;/LI&gt;
&lt;LI&gt;Provides real-time status tracking and error handling&lt;/LI&gt;
&lt;LI&gt;Supports restart capabilities for failed partitions&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc207650895"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Prerequisites&lt;/SPAN&gt;&lt;/H1&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/introduction" target="_blank" rel="noopener"&gt;Azure data factory&lt;/A&gt; instance&lt;/LI&gt;
&lt;LI&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime?tabs=data-factory" target="_blank" rel="noopener"&gt;Self-Hosted Integration Runtime (SHIR)&lt;/A&gt; with Db2 connectivity&lt;/LI&gt;
&lt;LI&gt;Access to System tables on Db2.&lt;/LI&gt;
&lt;LI&gt;The schema of the Db2 source partitioned table should already be migrated to SQL (the SQL table will be used to clone the temporary persistent tables, one per partition)&lt;/LI&gt;
&lt;/OL&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc207650896"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;ADF Pipeline Phase 1: Partition Discovery &amp;amp; Preparation Pipeline&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;&lt;STRONG&gt;Fig 1.1 Partition Preparation Pipeline&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The diagram above illustrates the initial phase of the solution, which sets up the necessary tables and extracts essential information from Db2 into Azure SQL. Here’s a breakdown of each step:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;The first script establishes a control table in SQL called Db2_PARTITION_STATUS.&lt;/LI&gt;
&lt;LI&gt;The second script adds indexes to Db2_PARTITION_STATUS to improve access speed.&lt;/LI&gt;
&lt;LI&gt;The third step uses copy activities to retrieve partition details from Db2 system tables and populate the Db2_PARTITION_STATUS table.&lt;/LI&gt;
&lt;LI&gt;The fourth step creates a stored procedure named ClonePartitionedTable, which automates the cloning of SQL tables, one for each partition, making it easier to handle hundreds of partitions (a sketch of the idea follows this list).&lt;/LI&gt;
&lt;LI&gt;The fifth step fetches relevant rows from Db2_PARTITION_STATUS. Users can decide which tables to migrate by updating the MIGRATE column in this table.&lt;/LI&gt;
&lt;LI&gt;The final step generates a clone of the source table for each partition, supporting the overall migration process.&lt;/LI&gt;
&lt;/OL&gt;
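&lt;P&gt;As a minimal sketch of what the per-partition cloning does (table, index, and partition-scheme names are hypothetical; the actual ClonePartitionedTable procedure generates such statements dynamically from Db2_PARTITION_STATUS):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Minimal sketch: clone the migrated base table's structure for one
-- partition, aligned to the same partition scheme so that the partition
-- can later be switched in. All names are illustrative.
SELECT TOP (0) *
INTO dbo.Orders_Stage_P42
FROM dbo.Orders;  -- copies the column structure only

-- Align the clone with the base table's partition scheme.
CREATE CLUSTERED INDEX CIX_Orders_Stage_P42
ON dbo.Orders_Stage_P42 (OrderDate)
ON ps_OrdersByDate (OrderDate);&lt;/LI-CODE&gt;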
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc207650897"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;ADF Pipeline Phase 2: Data Copy &amp;amp; Partition Switch&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;&lt;STRONG&gt;Fig 1.2 Parallel copy and Switch pipeline&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The next phase performs the actual partition-by-partition data copy.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;The first step gets the table and partition information for the rows that need to be migrated, using a Lookup activity in ADF.&lt;/LI&gt;
&lt;LI&gt;For each row retrieved in step 1, a series of activities executes in parallel; details are provided below.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Fig 1.3 ForEach Activity Details&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Multiple rows flow into the ForEach activity, and each iteration represents one partition of the source table. The activities below are therefore performed, in parallel, for every partition that has been marked for migration.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;First, we update the status table to record that the data copy has started for this particular partition, using a script activity.&lt;/LI&gt;
&lt;LI&gt;Next, a copy activity copies that partition's data from the source to the same partition of the cloned SQL table.&lt;/LI&gt;
&lt;LI&gt;If the copy activity succeeds, the status table is updated with the status Start Switching.&lt;/LI&gt;
&lt;LI&gt;If the copy activity fails, the status is updated to Copy Failed. This tracks which partitions were copied and which failed, and makes it possible to restart the copy for failed partitions only.&lt;/LI&gt;
&lt;LI&gt;A script activity then performs the actual partition switch using an ALTER TABLE ... SWITCH command (see the sketch after this list).&lt;/LI&gt;
&lt;LI&gt;Finally, the outcome of the partition switch is recorded as success or failure in the status table, so you can track exactly which partitions succeeded and which did not.&lt;/LI&gt;
&lt;/OL&gt;
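&lt;P&gt;As a minimal sketch of the switch step (table names and the partition number are hypothetical; the pipeline parameterizes these per iteration):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Minimal sketch: switch the loaded partition from the per-partition
-- clone into the final partitioned table. Both tables must share the
-- partition function/scheme, and the target partition must be empty.
ALTER TABLE dbo.Orders_Stage_P42
    SWITCH PARTITION 42 TO dbo.Orders PARTITION 42;

-- Record the outcome in the control table (illustrative columns).
UPDATE dbo.Db2_PARTITION_STATUS
SET    STATUS = 'Switch Complete'
WHERE  TABLE_NAME = 'Orders'
  AND  PARTITION_NUMBER = 42;&lt;/LI-CODE&gt;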
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc207650898"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Conclusion&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;This comprehensive Azure Data Factory solution provides a robust, scalable approach to migrating partitioned tables from IBM Db2 z/OS to Azure SQL Database. However, it is not restricted to Db2 as the source; the same architecture can be used for other partitioned sources with slight modifications. The two-pipeline architecture ensures:&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Key Benefits Achieved&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Automated Discovery: Eliminates manual partition mapping and reduces human error&lt;/LI&gt;
&lt;LI&gt;Intelligent Optimization: Applies data-driven strategies for optimal performance&lt;/LI&gt;
&lt;LI&gt;Parallel Processing: Maximizes throughput through partition-level parallelism&lt;/LI&gt;
&lt;LI&gt;Minimal Downtime: Uses partition switching for near-instantaneous data integration&lt;/LI&gt;
&lt;LI&gt;Comprehensive Monitoring: Provides real-time visibility into migration progress&lt;/LI&gt;
&lt;LI&gt;Error Resilience: Isolates failures and enables selective retry operations&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Business Impact&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Reduced Migration Time: From weeks to days through automation and parallelization&lt;/LI&gt;
&lt;LI&gt;Lower Risk: Comprehensive testing and rollback capabilities&lt;/LI&gt;
&lt;LI&gt;Cost Efficiency: Optimal resource utilization and reduced manual effort&lt;/LI&gt;
&lt;LI&gt;Maintainability: Standardized approach applicable across multiple migration projects&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Future Enhancements&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;The solution foundation supports several potential enhancements:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Machine Learning Integration: Predictive optimization of batch sizes and thread counts&lt;/LI&gt;
&lt;LI&gt;Advanced Monitoring: Integration with Azure Monitor and custom dashboards&lt;/LI&gt;
&lt;LI&gt;Cross-Platform Support: Extension to other database platforms beyond Db2&lt;/LI&gt;
&lt;LI&gt;Automated Testing: Built-in data validation and integrity checking&lt;/LI&gt;
&lt;LI&gt;Cloud-Native Optimization: Leverage Azure SQL Database elastic pools and serverless compute&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This solution represents a best-practice approach to enterprise data migration, combining the power of Azure Data Factory with intelligent partition management strategies. Organizations implementing this solution can expect significant reductions in migration time, risk, and cost while achieving reliable, repeatable results across their data migration initiatives.&lt;/P&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc207650899"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Feedback and suggestions&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please send an email to&amp;nbsp;&lt;A class="lia-external-url" href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;Database Platform Engineering Team&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Dec 2025 01:08:02 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/faster-data-copy-between-source-and-target-for-partitioned-table/ba-p/4456585</guid>
      <dc:creator>Ramanath_Nayak</dc:creator>
      <dc:date>2025-12-05T01:08:02Z</dc:date>
    </item>
    <item>
      <title>AI-Powered Db2 LUW to Azure Database for PostgreSQL Schema Converter</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/ai-powered-db2-luw-to-azure-database-for-postgresql-schema/ba-p/4458436</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;This blog introduces a custom-built tool designed to migrate database objects from IBM Db2 LUW (Linux, Unix, Windows) to Azure Database for PostgreSQL. While SSMA for Db2 supports migrations to SQL Server, this tool specifically assists with migration to Azure Database for PostgreSQL, helping customers transition smoothly to the Azure ecosystem.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Db2 LUW to Azure database for PostgreSQL&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;As organizations work to migrate Db2 systems, tools that assist with database migrations are increasingly relevant. The &lt;STRONG&gt;Db2 LUW to Azure PostgreSQL Database Converter&lt;/STRONG&gt; is designed to facilitate the migration of IBM Db2 LUW (Linux, Unix, Windows) databases to Azure database for PostgreSQL. Created by the Microsoft Azure SQL CSE/Ninja Team, this tool provides an interface intended to support migration workflows.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-justify"&gt;With the appropriate drivers and prerequisites, the tool currently supports the conversion of Db2 LUW to Azure database for PostgreSQL. Future versions of this may include enhancements to existing object conversion and support for other Db2 platforms like z/OS and Db2 for i (AS400).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;System Requirements&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;▪&amp;nbsp;&amp;nbsp;&amp;nbsp; Supports Db2 on LUW version 9.8, 10.1, and later versions&lt;/P&gt;
&lt;P&gt;▪&amp;nbsp;&amp;nbsp;&amp;nbsp; Windows 10, Windows 11&lt;/P&gt;
&lt;P&gt;▪&amp;nbsp;&amp;nbsp;&amp;nbsp; .NET 8.0 Desktop Runtime&lt;/P&gt;
&lt;P&gt;▪&amp;nbsp;&amp;nbsp;&amp;nbsp; Microsoft OLEDB Provider for DB2 is required to access the IBM Db2 Databases.&lt;/P&gt;
&lt;P&gt;▪&amp;nbsp;&amp;nbsp;&amp;nbsp; Azure OpenAI models (For converting Triggers, Functions and Stored Procedures).&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Database Object Conversion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The Db2 LUW to Azure PostgreSQL Database Converter streamlines enterprise-scale database migrations with a segmented workflow. Users are guided through connection setup, schema selection, and object conversion via intuitive tabs and input fields. With Azure OpenAI integration, Db2 LUW triggers, functions, and procedures are converted seamlessly. The tool also features a built-in conversion log and statistics, and a robust design suited to reliable use by engineers and architects during database migrations.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-justify"&gt;The tool effectively extracts metadata from the source Db2 database and transforms the relevant objects into PostgreSQL formats. All Db2 metadata is retained locally, enabling efficient reuse during subsequent offline conversion process. Error and warning notifications are issued as appropriate, with detailed error log files generated to record any issues arising throughout execution and conversion. Furthermore, a comprehensive telemetry report detailing object conversions is provided upon completion of the process.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;SPAN data-teams="true"&gt;Looking to migrate your Db2 LUW workloads to Azure Database for PostgreSQL? Contact the Azure SQL Engineering Team at &lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt; to get started with the tool and user guide.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;If you have feedback or suggestions for improving this data migration asset, please contact the &lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;Database Platform Engineering Team&lt;/A&gt;. Thanks for your support!&lt;EM&gt; &lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-align-justify"&gt;Note: Db2 editions on z/OS, iSeries, and Linux/UNIX/Windows (LUW) differ in their subsystems, databases, and object definitions/functions. This tool currently supports only Db2 LUW; Db2 z/OS and Db2 iSeries are not yet supported.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM class="lia-align-justify"&gt;Future Road Map: &amp;nbsp;&lt;/EM&gt;&lt;EM&gt;Multiple significant enhancements are scheduled for the current tool, including expanded support for Db2 for z/OS and Db2 for iSeries. Aimed at strengthening compatibility, increasing performance, and introducing additional features to facilitate enterprise-level Db2 database migrations to Azure PostgreSQL.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 12 Nov 2025 14:56:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/ai-powered-db2-luw-to-azure-database-for-postgresql-schema/ba-p/4458436</guid>
      <dc:creator>naruldoss</dc:creator>
      <dc:date>2025-11-12T14:56:30Z</dc:date>
    </item>
    <item>
      <title>Enforcing SQL PaaS backup retention with Azure Policy</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/enforcing-sql-paas-backup-retention-with-azure-policy/ba-p/4443657</link>
      <description>&lt;H2&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816612"&gt;&lt;/A&gt;Implementation for SQL DB PITR using the portal&lt;/H2&gt;
&lt;P&gt;Azure policy covers much more than SQL; here we are using a small portion of its capabilities. The bits we are using are&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;A policy definition, describing what the policy checks for and what to do about issues&lt;/LI&gt;
&lt;LI&gt;A policy assignment, with the scope to check the definition across, and parameter values&lt;/LI&gt;
&lt;LI&gt;A remediation task, that makes the required changes&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The requirement in this example is to ensure that all Azure SQL Databases have a short-term (PITR) backup retention of at least 9 days.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Any database created without specifying the retention period will have this added&lt;/LI&gt;
&lt;LI&gt;Any update made with a shorter period will have that modified to be 9 days&lt;/LI&gt;
&lt;LI&gt;Modifications or database creation that explicitly set the retention period to more than 9 days will have that value honoured&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;All these are built under “Policy” in the portal&lt;/P&gt;
&lt;img /&gt;
&lt;H3&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816613"&gt;&lt;/A&gt;The definition&lt;/H3&gt;
&lt;P&gt;Open Policy | Authoring | Definitions, and on that blade, use the “+ Policy definition” to create a new one&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Definition Location:&lt;/STRONG&gt; the subscription to hold this (there’s a pull-down menu of valid items)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Name:&lt;/STRONG&gt; for example, “Enforce SQL DB PITR”&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Category:&lt;/STRONG&gt; for example “Backup”&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Role Definitions:&lt;/STRONG&gt; “Contributor” for this example, but in general this should be the minimum needed for the updates that the definition will make&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Policy rule:&amp;nbsp;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="json"&gt;{
  "mode": "All",
  "policyRule": {
    "if": {
      "anyOf": [
        {
          "field": "Microsoft.Sql/servers/databases/backupShortTermRetentionPolicies/retentionDays",
          "exists": false
        },
        {
          "field": "Microsoft.Sql/servers/databases/backupShortTermRetentionPolicies/retentionDays",
          "less": "[parameters('Minimum_PITR')]"
        }
      ]
    },
    "then": {
      "effect": "modify",
      "details": {
        "roleDefinitionIds": [
          "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
        ],
        "operations": [
          {
            "operation": "addOrReplace",
            "field": "Microsoft.Sql/servers/databases/backupShortTermRetentionPolicies/retentionDays",
            "value": "[parameters('Minimum_PITR')]"
          }
        ]
      }
    }
  },
  "parameters": {
    "Minimum_PITR": {
      "type": "Integer",
      "metadata": {
        "displayName": "Min PITR",
        "description": "Min PITR retention days"
      }
    }
  }
}&lt;/LI-CODE&gt;
&lt;P&gt;In this code&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;Field&lt;/STRONG&gt; is what we want to check and/or change; get the list of field names using PowerShell&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="powershell"&gt;$aliases = Get-AzPolicyAlias -ListAvailable -NamespaceMatch 'Microsoft.Sql' 
| where ResourceType -like 'retentionpol' 
| Select-Object -ExpandProperty 'Aliases' $aliases | select Name&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;For the list of fields that can be modified/updated, look at the Modifiable attribute&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="powershell"&gt;$aliases | Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' } 
| select Name &lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Minimum_PITR&lt;/STRONG&gt; is the name of the parameter that the assignment (next step) will pass in; you choose the parameter name&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;roleDefinitionIds&lt;/STRONG&gt; are the full GUID path of the roles that the update needs. The &lt;A href="https://learn.microsoft.com/en-us/azure/governance/policy/how-to/remediate-resources?tabs=azure-portal#configure-the-policy-definition" target="_blank" rel="noopener"&gt;policy remediation docs&lt;/A&gt; talk about this, but we can get the GUID with PowerShell&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="powershell"&gt;Get-AzRoleDefinition -name 'contributor' # replace contributor with the role needed&lt;/LI-CODE&gt;
&lt;P&gt;This definition is saying that if the PITR retention isn’t set, or is less than the parameter value, then make it (via addOrReplace) the parameter value.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816614"&gt;&lt;/A&gt;The Assignment&lt;/H3&gt;
&lt;P&gt;Once you save the definition, use “Assign policy” on the screen that appears.&lt;/P&gt;
&lt;P&gt;For this, there are several tabs:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Basics:
&lt;UL&gt;
&lt;LI&gt;Scope and exclusions let you work on less than the entire subscription&lt;/LI&gt;
&lt;LI&gt;Enable “Policy enforcement”&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Parameters:
&lt;UL&gt;
&lt;LI&gt;Enter 9 for Min PITR (the Minimum_PITR parameter), so that the policy applies 9 days as the minimum&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Remediation:
&lt;UL&gt;
&lt;LI&gt;Tick “Create remediation task”&lt;/LI&gt;
&lt;LI&gt;The default is to use a system-assigned managed identity&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Then create the assignment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816615"&gt;&lt;/A&gt;&amp;nbsp;Initial Remediation&lt;/H3&gt;
&lt;P&gt;Once the assignment is created, look at the Compliance blade to see it; Azure Policy is asynchronous, so for a newly created assignment it takes a little while before it begins checking resources in its scope.&lt;/P&gt;
&lt;P&gt;Similarly, “Remediation tasks” on the Remediation blade initially shows the task as pending.&lt;/P&gt;
&lt;P&gt;Once the initial remediation scan completes, you can look at the backup retention policies (in Data Management | backups) on the logical server(s) and see that the PITR retention periods have been increased to a minimum of 9 days.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816616"&gt;&lt;/A&gt;Ongoing operation&lt;/H3&gt;
&lt;P&gt;With the initial remediation complete, the policy will now intercept non-compliant changes and modify them on the fly. For example, if we use PowerShell to set the retention to 2 days:&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;$DB_PITR = get-azsqldatabasebackupshorttermretentionpolicy -ResourceGroupName mylittlestarter-rg -ServerName mylittlesql -DatabaseName oppo
$DB_PITR | Set-AzSqlDatabaseBackupShortTermRetentionPolicy -RetentionDays 2


ResourceGroupName         : mylittlestarter-rg
ServerName                : mylittlesql
DatabaseName              : oppo
RetentionDays             : 9
DiffBackupIntervalInHours : 12&lt;/LI-CODE&gt;
&lt;P&gt;The update completes, but the summary shows that the retention stays at 9 days.&lt;/P&gt;
&lt;P&gt;The experience on the portal is the same; we can change the retention to 1 day in the GUI, and the operation succeeds, but with the retention remaining at 9 days. In the activity log of either the logical server or the database, this shows up as a modify, with the JSON detail of the modify showing the policy name and the effect.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816617"&gt;&lt;/A&gt;Tricky bits&lt;/H3&gt;
&lt;P&gt;A few challenges that can cause delays:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Retention policies are separate resources – Both short-term and long-term backup retention aren’t direct attributes of the database resource. Instead, they exist as their own resources (e.g., with retentionDays) tied to the database.&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Keep policies simple – Focusing each policy on a single resource (like SQL DB PITR) proved more effective than trying to create one large, all-encompassing policy.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Case sensitivity matters – The policy definition code is case-sensitive, which can easily trip you up if not handled carefully.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;The roleDefinitionIds entry is just the GUID of the security role that the policy update needs, not anything to do with the identity that’s created for the remediation task…but the GUID is potentially different for each subscription, hence the PowerShell to look it up&lt;/LI&gt;
&lt;LI&gt;Writing the definitions in PowerShell means that they are just plain text, without any syntax helpers; syntax issues in the definition tend to appear as strange “error converting to JSON” messages.&lt;/LI&gt;
&lt;LI&gt;Waiting patiently for the initial policy remediation cycle to finish; I haven’t found any “make it so” options&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;References&lt;/H3&gt;
&lt;P&gt;The posts mentioned in the introduction are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/automatically-enable-ltr-and-pitr-policy-upon-a-database-creation-on-azure-sql-m/3740040" target="_blank" rel="noopener"&gt;Automatically Enable LTR and PITR Policy upon a Database creation on Azure SQL Managed Instance | Microsoft Community Hub&lt;/A&gt; using audits and runbooks&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/azuredbsupport/azure-custom-policy-to-prevent-backup-retention-period-to-be-below-x-number---az/3967097" target="_blank" rel="noopener"&gt;Azure custom policy to prevent backup retention period to be below X number - Azure SQL | Microsoft Community Hub&lt;/A&gt; which uses ‘Deny’ to fail attempts that don’t meet the requirements.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816618"&gt;&lt;/A&gt;Expanding this using PowerShell&lt;/H2&gt;
&lt;P&gt;With a working example for SQL DB PITR, we now want to expand this to have policies that cover both short and long term retention for both SQL DB and SQL MI.&lt;/P&gt;
&lt;P&gt;The code below isn’t exhaustive and, being a sample, doesn’t have error checking. Note that the code uses “less” for the policy test, but operators like “equals” and “greater” (&lt;A href="https://learn.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure-policy-rule#conditions" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/governance/policy/concepts/definition-structure-policy-rule#conditions&lt;/A&gt;) are available to build more complex tests, depending on the policy requirements. The document &lt;A href="https://learn.microsoft.com/en-us/azure/governance/policy/how-to/programmatically-create" target="_blank" rel="noopener"&gt;Programmatically create policies - Azure Policy | Microsoft Learn&lt;/A&gt; covers using PowerShell with Azure Policy.&lt;/P&gt;
&lt;P&gt;Other wrinkles that this sample doesn’t explicitly cater for include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;LTR retentions are held in ISO 8601 format (e.g., ‘P8D’ for 8 days), so it’s not trivial to do less-than tests; in theory &lt;A href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions" target="_blank" rel="noopener"&gt;ARM template functions&lt;/A&gt; could be used to convert these into a number of days, but this example just does an equality check and enforces the policy, without any understanding that P4W is a longer period than P20D&lt;/LI&gt;
&lt;LI&gt;LTR isn’t available for serverless databases with autopause enabled (&lt;A href="https://learn.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview?view=azuresql&amp;amp;tabs=general-purpose#auto-pause" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview?view=azuresql&amp;amp;tabs=general-purpose#auto-pause&lt;/A&gt;); this would need some form of scope control, potentially either using resource groups, or a more complex test in the policy definition to look at the database attributes&lt;/LI&gt;
&lt;LI&gt;A few service levels, for example the Basic database SLO, have different limits for their short term retention&lt;/LI&gt;
&lt;LI&gt;PITR for databases that could be offline (stopped managed instances, auto-paused serverless databases, etc) hasn’t been explicitly tested.&lt;/LI&gt;
&lt;LI&gt;Remediation tasks just run to completion, with no rescheduling; to ensure that all existing databases are made compliant, this could be expanded to have a loop to check the count of resources needing remediation, and start a task if the relevant existing ones are complete&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="powershell"&gt;&amp;lt;# /***This Artifact belongs to the Data SQL Ninja Engineering Team***/

Name:     Enforce_SQL_PaaS_backup_retention.ps1
Author:   Databases SQL CSE/Ninja, Microsoft Corporation
Date:     August 2025
Version:  1.0

Purpose: This is a sample to create the Azure policy definitions, assignments and remediation tasks to enforce organisational policies for minimum short-term (PITR) and weekly long-term (LTR) backup retention.
  
Prerequisites:
- connect to your azure environment using Connect-AzAccount
- Register the resource provider (may already be done in your environment) using Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
- if needed to modify/update this script, this can be used to find field names:
Get-AzPolicyAlias -ListAvailable -NamespaceMatch 'Microsoft.Sql' | where ResourceType -like '*retentionpol*' | Select-Object -ExpandProperty 'Aliases' | select Name

      
Warranty: This script is provided on an "AS IS" basis and there are no warranties, express or implied, including, but not limited to, implied warranties of merchantability or fitness for a particular purpose. USE AT YOUR OWN RISK. 
Feedback: Please provide comments and feedback to the author at datasqlninja@microsoft.com
#&amp;gt;


# parameters to modify

$Location = 'EastUS'        # the region to create the managed identities used by the remediation tasks
$subscriptionID = (Get-AzContext).Subscription.id  # by default use the current Subscription as the scope; change if needed

# the policies to create; PITR can do a less than comparison, but LTR has dates, so uses string equalities
[array]$policies = @()
$policies += @{type = 'DB'; backups='PITR'; name = 'Enforce SQL DB PITR retention'; ParameterName = 'Minimum_PITR'; ParameterValue = 9; Role = 'contributor'; Category='Backup'}
$policies += @{type = 'MI'; backups='PITR'; name = 'Enforce SQL MI PITR retention'; ParameterName = 'Minimum_PITR'; ParameterValue = 9; Role = 'contributor'; Category='Backup'}
# LTR retention is in ISO8601 format, eg P2W = 2 weeks, P70D = 70 days; 'PT0S' = no retention
$policies += @{type = 'DB'; backups='LTR';name = 'Enforce SQL DB LTR retention'; Weekly = 'P4W'; Monthly = 'PT0S'; Yearly = 'PT0S'; WeekofYear = 1; Role = 'contributor'; Category='Backup'}
$policies += @{type = 'MI'; backups='LTR';name = 'Enforce SQL MI LTR retention'; Weekly = 'P4W'; Monthly = 'PT0S'; Yearly = 'PT0S'; WeekofYear = 1; Role = 'contributor'; Category='Backup'}


# templates for the Policy definition code; this has placeholders that are replaced in the loop
$Policy_definition_template_PITR = @'
{
  "mode": "All",
  "policyRule": {
    "if": {
      "anyOf": [
        {
          "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupShortTermRetentionPolicies/retentionDays",
          "exists": false
        },
        {
          "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupShortTermRetentionPolicies/retentionDays",
          "less": "[parameters('&amp;lt;ParameterName&amp;gt;')]"
        }
      ]
    },
    "then": {
      "effect": "modify",
      "details": {
        "roleDefinitionIds": [
          "/providers/Microsoft.Authorization/roleDefinitions/&amp;lt;RoleGUID&amp;gt;"
        ],
        "operations": [
          {
            "operation": "addOrReplace",
            "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupShortTermRetentionPolicies/retentionDays",
            "value": "[parameters('&amp;lt;ParameterName&amp;gt;')]"
          }
        ]
      }
    }
  },
  "parameters": {
    "&amp;lt;ParameterName&amp;gt;": {
      "type": "Integer"
    }
  }
}
'@

# LTR, look for any of the weekly/monthly/yearly retention settings not matching
$Policy_definition_template_LTR = @'
{
  "mode": "All",
  "policyRule": {
    "if": {
      "anyOf": [
        {
          "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/weeklyRetention",
          "exists": false
        },
        {
          "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/weeklyRetention",
          "notEquals": "[parameters('Weekly_retention')]"
        },
        {
          "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/monthlyRetention",
          "notEquals": "[parameters('Monthly_retention')]"
        },
        {
          "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/yearlyRetention",
          "notEquals": "[parameters('Yearly_retention')]"
        }
      ]
    },
    "then": {
      "effect": "modify",
      "details": {
        "roleDefinitionIds": [
          "/providers/Microsoft.Authorization/roleDefinitions/&amp;lt;RoleGUID&amp;gt;"
        ],
        "operations": [
          {
            "operation": "addOrReplace",
            "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/weeklyRetention",
            "value": "[parameters('Weekly_retention')]"
          },
          {
            "operation": "addOrReplace",
            "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/monthlyRetention",
            "value": "[parameters('Monthly_retention')]"
          },
          {
            "operation": "addOrReplace",
            "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/yearlyRetention",
            "value": "[parameters('Yearly_retention')]"
          },
          {
            "operation": "addOrReplace",
            "field": "Microsoft.Sql/&amp;lt;Type&amp;gt;/databases/backupLongTermRetentionPolicies/weekOfYear",
            "value": "[parameters('WeekofYear')]"
          }
        ]
      }
    }
  },
  "parameters": {
    "Weekly_retention": {
      "type": "String"
    },
    "Monthly_retention": {
      "type": "String"
    },
    "Yearly_retention": {
      "type": "String"
    },
    "WeekofYear": {
      "type": "Integer"
    }
  }
}
'@

# main loop

foreach ($policy in $policies)
{
    # translate the Role name into its GUID
    $Role = Get-AzRoleDefinition -name $($policy.Role)
    $type = $policy.type -replace 'MI','managedInstances' -replace 'DB','servers'
    $template = if ($policy.backups -eq 'PITR') {$Policy_definition_template_PITR} else {$Policy_definition_template_LTR}
    # generate the definition code for this policy
    $policy_definition = $template -replace '&amp;lt;Type&amp;gt;',$type -replace '&amp;lt;RoleGUID&amp;gt;',$($Role.Id) -replace '&amp;lt;ParameterName&amp;gt;',$policy.ParameterName 

    # create the policy definition
    $PolicyDefinition = new-AzPolicyDefinition -Name $($policy.name) -Policy $policy_definition -Metadata "{'category':'$($policy.Category)'}"
    
    # create the assignment
    if ($policy.backups -eq 'PITR')
    {
        $PolicyParameters = @{ $($policy.ParameterName) = $policy.ParameterValue }
    }
    else
    {
        $PolicyParameters = @{ 'Weekly_retention' = $policy.Weekly; 'Monthly_retention' = $policy.Monthly; 'Yearly_retention' = $policy.Yearly; 'WeekofYear' = $policy.WeekofYear }
    }
    $PolicyAssignment = New-AzPolicyAssignment -Name $($policy.name) -PolicyDefinition $PolicyDefinition -PolicyParameterObject $PolicyParameters -IdentityType 'SystemAssigned' -Location $Location

    # now follow the docs page to wait for the ID to be created, and assign the roles required to it; https://learn.microsoft.com/en-us/azure/governance/policy/how-to/remediate-resources?tabs=azure-powershell
    # include a loop to wait until the managed identity created as part of the assignment creation is available
    do
    {
      $ManagedIdentity = Get-AzADServicePrincipal -ObjectId $PolicyAssignment.IdentityPrincipalId -erroraction SilentlyContinue
      if (!($ManagedIdentity)) {start-sleep -Seconds 1} # wait for a bit...
    }
    until ($ManagedIdentity)
    $roleDefinitionIds = $PolicyDefinition.PolicyRule.then.details.roleDefinitionIds
    
    if ($roleDefinitionIds.Count -gt 0)
    {
        $roleDefinitionIds | ForEach-Object {
            $roleDefId = $_.Split("/") | Select-Object -Last 1
            $roleAssigned = New-AzRoleAssignment -ObjectId $PolicyAssignment.IdentityPrincipalId -RoleDefinitionId $roleDefId -Scope "/subscriptions/$($subscriptionID)"
        }
    }

    # lastly create the remediation task
    $RemediationTask = Start-AzPolicyRemediation -Name $($policy.name) -PolicyAssignmentId $PolicyAssignment.Id 
}

# confirm that the policies have been set up
Get-AzPolicyDefinition | where name -In $policies.name | format-table Name, PolicyType
Get-AzPolicyAssignment | where name -In $policies.name | format-table Name, Parameter&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc205816619"&gt;&lt;/A&gt;&amp;nbsp;Feedback and suggestions&lt;/H2&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;
&lt;P&gt;Note: For additional information about migrating various source databases to Azure, see the &lt;A href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 08 Sep 2025 15:56:52 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/enforcing-sql-paas-backup-retention-with-azure-policy/ba-p/4443657</guid>
      <dc:creator>David_Lyth</dc:creator>
      <dc:date>2025-09-08T15:56:52Z</dc:date>
    </item>
    <item>
      <title>Migrating Oracle Partitioned Tables to Azure PostgreSQL Without Altering Partition Keys</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/migrating-oracle-partitioned-tables-to-azure-postgresql-without/ba-p/4441962</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;When migrating partitioned tables from Oracle to Azure PostgreSQL Flexible Server, many customers prefer to preserve their existing Oracle table design exactly as defined in the original DDLs. Specifically, they want to avoid altering the partition key structure, especially by not adding the partition key to any primary or unique constraints, because doing so would change the table’s original design integrity.&lt;/P&gt;
&lt;P&gt;The challenge arises because PostgreSQL enforces a rule: any primary key or unique constraint on a partitioned table must include the partition key. This difference in constraint handling creates a migration roadblock for customers aiming for a like-for-like move from Oracle without schema changes.&lt;/P&gt;
&lt;P&gt;To bridge this gap and emulate Oracle’s partitioning behavior, the pg_partman extension offers a practical solution. It supports declarative partitioning in PostgreSQL while eliminating the need to modify primary or unique constraints to include the partition key. This enables successful migrations while preserving complete compatibility with Oracle’s partitioning model and eliminating the need for schema changes.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Background&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For example, consider the following Oracle “Orders” table partitioned by the order_date column.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;CREATE TABLE orders (
    order_id NUMBER PRIMARY KEY,
    customer_id NUMBER NOT NULL,
    order_date DATE NOT NULL,
    status TEXT,
    total_amount NUMERIC(10,2)
) PARTITION BY RANGE (order_date);
CREATE TABLE orders_2025_m1 PARTITION OF orders FOR VALUES FROM ('2024-12-01') TO ('2025-01-01');
CREATE TABLE orders_2025_m2 PARTITION OF orders FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE orders_2025_m3 PARTITION OF orders FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
CREATE TABLE orders_2025_m4 PARTITION OF orders FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');
&lt;/LI-CODE&gt;
&lt;P&gt;In Oracle, it’s valid to define a primary key only on order_id without including the partition key (order_date). Many customers want to preserve this design when migrating to Azure PostgreSQL Flexible Server. However, Azure PostgreSQL Flexible Server requires that any primary or unique constraint on a partitioned table must also include the partition key. Attempting to keep a primary key solely on order_id will result in an error.&lt;/P&gt;
&lt;P&gt;To replicate Oracle’s behavior, the pg_partman extension along with a template table can be used. It allows partition management without forcing the partition key into primary or unique constraints, enabling the migration to retain the original table structure.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang=""&gt;    CREATE TABLE orders (
        order_id      BIGINT PRIMARY KEY,
        customer_id   BIGINT NOT NULL,
        order_date    DATE NOT NULL,
        status        VARCHAR(20),
        total_amount  NUMERIC(10,2)
    )
    PARTITION BY RANGE (order_date);

unique constraint on partitioned table must include all partitioning columns
DETAIL: PRIMARY KEY constraint on table "orders" lacks column "order_date" which is part of the partition key.
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Enable the Server Level Parameters for PG_PARTMAN&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To configure the server-level parameter, go to the Azure portal, open the left-hand panel, and search for ‘Server Parameters’ under the Settings section. Then search for azure.extensions, tick PG_PARTMAN in the value field, and click &lt;STRONG&gt;Save&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Once the above is completed, search for “shared_preload_libraries” and, in the value section, tick the checkbox for PG_PARTMAN_BGW, then click &lt;STRONG&gt;Save&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;This step will prompt a restart of the server.&lt;/P&gt;
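&lt;P&gt;After the restart, a quick check from psql confirms both settings took effect (a minimal sanity check of the two parameters configured above):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Verify the background worker was preloaded and the extension is allow-listed
SHOW shared_preload_libraries;   -- should include pg_partman_bgw
SHOW azure.extensions;           -- should include pg_partman
&lt;/LI-CODE&gt;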
&lt;P&gt;&lt;STRONG&gt;Prerequisites at Database level&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once the server has restarted, log in to the database (using pgAdmin or psql) and set up the role and the following permissions.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;CREATE ROLE partman_role WITH LOGIN; 
CREATE SCHEMA partman; 
CREATE EXTENSION pg_partman SCHEMA partman;  -- create the extension if not already created
GRANT ALL ON SCHEMA partman TO partman_role; 
GRANT ALL ON ALL TABLES IN SCHEMA partman TO partman_role; 
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA partman TO partman_role; 
GRANT EXECUTE ON ALL PROCEDURES IN SCHEMA partman TO partman_role; 
GRANT ALL ON SCHEMA public TO partman_role; 
GRANT TEMPORARY ON DATABASE postgres to partman_role; 

&lt;/LI-CODE&gt;
&lt;P&gt;And if your partitioned table lives in a schema not owned by partman_role, ensure that usage on that schema is granted, for example: GRANT USAGE ON SCHEMA partman TO partman_role;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create partition table without Primary Key&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Create the parent table with the partition key, but without the primary key.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;CREATE TABLE orders (
        order_id      BIGINT,
        customer_id   BIGINT NOT NULL,
        order_date    DATE NOT NULL,
        status        VARCHAR(20),
        total_amount  NUMERIC(10,2)
    )
    PARTITION BY RANGE (order_date);
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create a Template Table&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To keep the primary key on the table without adding the partition key to it, use a template table as shown below. Notice that it has the same structure as the parent table, plus a primary key on the order_id column.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;    CREATE TABLE orders_template (
        order_id      BIGINT ,
        customer_id   BIGINT NOT NULL,
        order_date    DATE NOT NULL,
        status        VARCHAR(20),
        total_amount  NUMERIC(10,2),
    PRIMARY KEY (order_id) 
    );
&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create parent table using Pg_Partman&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once the above tables are created, the next step is to invoke the create_parent function as shown below.&lt;/P&gt;
&lt;LI-CODE lang=""&gt;SELECT partman.create_parent(
    p_parent_table := 'public.orders',
    p_control := 'order_date',
    p_type := 'native',
    p_interval := 'monthly',
    p_template_table := 'public.orders_template'
);&lt;/LI-CODE&gt;
&lt;P&gt;Notice that the script passes orders_template as the template-table parameter; this ensures that partitions are created with the primary key automatically.&lt;/P&gt;
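&lt;P&gt;pg_partman then pre-creates future partitions during its maintenance runs, either via the pg_partman_bgw background worker enabled earlier or on demand. A minimal sketch of an on-demand run for this parent table:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Run pg_partman maintenance for just this parent table,
-- creating any upcoming monthly partitions that are due
SELECT partman.run_maintenance(p_parent_table := 'public.orders');
&lt;/LI-CODE&gt;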
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Validate the partition table&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;After inserting records, validate the partitions created; sample inserts and validation queries are shown below.&lt;/P&gt;
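&lt;P&gt;A few illustrative rows (values made up for this sketch) are enough to exercise more than one partition:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Illustrative sample data only; each row targets a different month
INSERT INTO orders (order_id, customer_id, order_date, status, total_amount)
VALUES (1, 101, DATE '2024-12-15', 'SHIPPED', 250.00),
       (2, 102, DATE '2025-01-20', 'OPEN',     99.50),
       (3, 103, DATE '2025-02-05', 'OPEN',    410.75);
&lt;/LI-CODE&gt;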
&lt;LI-CODE lang=""&gt;SELECT tableoid::regclass AS partition, * FROM orders;&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;LI-CODE lang=""&gt;EXPLAIN SELECT * FROM orders WHERE order_date &amp;gt; '2025-01-01';&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;LI-CODE lang=""&gt;EXPLAIN SELECT * FROM orders WHERE order_id &amp;gt; 100;&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;The query plans above show that partition pruning on order_date serves date-range queries independently of the primary key, while queries filtering on order_id use the primary key index, which is defined separately from the partition key.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Feedback and Suggestions&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;If you have feedback or suggestions for improving this asset, please contact the Data SQL Ninja Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;).&lt;BR /&gt;Note: For additional information about migrating various source databases to Azure, see the&amp;nbsp;&lt;A href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Thank you for your support!&lt;/P&gt;
      <pubDate>Thu, 21 Aug 2025 19:17:06 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/migrating-oracle-partitioned-tables-to-azure-postgresql-without/ba-p/4441962</guid>
      <dc:creator>VenkatMR</dc:creator>
      <dc:date>2025-08-21T19:17:06Z</dc:date>
    </item>
    <item>
      <title>Key Considerations to avoid Implicit Conversion issues in Oracle to Azure SQL Modernization</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/key-considerations-to-avoid-implicit-conversion-issues-in-oracle/ba-p/4442186</link>
      <description>&lt;H4 aria-level="7"&gt;&amp;nbsp;&lt;/H4&gt;
&lt;H4 aria-level="7"&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Overview&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This blog dives into the mechanics of implicit data type conversions and their impact during the post-migration performance optimization phase of heterogeneous database migrations. Drawing from our observed Engineering field patterns across diverse application architectures, this blog explores why certain platforms like ORMs, JDBC drivers, and cross-platform data models are more prone to implicit conversions than others and how they result in performance issues, break indexes, or cause query regressions. You'll gain actionable strategies to detect, mitigate, and design around these pitfalls to ensure a successful and performant data migration to platforms like Azure SQL.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4 aria-level="7"&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Understanding Implicit Conversions in Database Migrations&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When an application communicates with a database whether through JDBC, ODBC, ADO.NET, or any other data access API it sends parameters and query values in a specific data type format. However, if the type of the value provided by the application does not exactly match the data type of the target column in the database, the database engine attempts to reconcile the mismatch automatically. This automatic adjustment is known as implicit conversion. For instance, a value passed as a string from the application may be compared against a numeric or date column in the database. This occurs because many front-end systems and APIs transmit values as strings by default, even if the underlying business logic expects numbers or dates. Unless the application explicitly parses or casts these values to match the expected types, the database engine must decide how to handle the type mismatch during query execution. In such cases, the engine applies type conversion internally, either to the parameter or the column, based on its own rules. While this feature can simplify application development by allowing flexible data handling, it often introduces engine-specific behavior that becomes more visible during cross engine database migrations, where assumptions built into one system may not hold true in another. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="7"&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Impact of Implicit Conversions&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Implicit conversions can adversely affect database performance and functionality in several ways, some of which are discussed below:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Performance Degradation&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/EM&gt; When a database performs an implicit conversion, it may bypass indexes, resulting in slower query execution. For example, comparing a VARCHAR column to an INT value in SQL Server can trigger a table scan instead of an index&amp;nbsp;seek, significantly increasing query time.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
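&lt;P&gt;A minimal sketch of that pitfall, assuming a hypothetical dbo.Accounts table whose AccountCode column is VARCHAR(20) and indexed:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- dbo.Accounts is hypothetical. INT has higher data type precedence than
-- VARCHAR, so SQL Server wraps the column in CONVERT_IMPLICIT and the
-- index seek on AccountCode is lost (a scan runs instead)
SELECT AccountCode FROM dbo.Accounts WHERE AccountCode = 12345;

-- Passing the predicate value with the matching type preserves the seek
SELECT AccountCode FROM dbo.Accounts WHERE AccountCode = '12345';
&lt;/LI-CODE&gt;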
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Migration-Specific Issues and&amp;nbsp;Data Integrity Risks&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/EM&gt; Implicit conversions can cause data loss or incorrect results during a few instances and one such example is, when a column defined as VARCHAR2 in Oracle, which can store Unicode characters by default is mapped to a VARCHAR column in SQL Server, non-ASCII characters such as Chinese, Russian, or Korean may be silently replaced with incorrect characters/symbols. One example of scenario when this can happen:&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; Oracle VARCHAR2 stores Unicode if the database character set is UTF-8 (AL32UTF8), which is common in modern Oracle installations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; SQL Server VARCHAR is ANSI/code-page based, so non-ASCII characters are stored differently, unless the column is explicitly declared as&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; NVARCHAR.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-align-justify lia-indent-padding-left-60px"&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:true,&amp;quot;134233118&amp;quot;:true,&amp;quot;201341983&amp;quot;:0,&amp;quot;335559740&amp;quot;:240}"&gt;-- In Real World this can happen on any other data types&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Maintenance Challenges&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&amp;nbsp;&lt;/STRONG&gt;&lt;/EM&gt;Queries relying on implicit conversions are harder to debug and optimize, as these conversions are not explicitly visible in the code and may only surface during performance regressions. &amp;nbsp;These queries forces the optimizer to compile an execution plan containing scans of large clustered indexes, or tables, instead of a seek resulting in degraded performance&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;Execution Overhead and Resource Consumption&lt;/STRONG&gt;&lt;/EM&gt;:&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; Implicit conversions increase execution times for both queries and API calls, as the engine must perform runtime casting operations. This can lead to higher CPU usage, increased logical reads, and memory pressure.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="7"&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Detection Methods&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Detecting implicit conversions is crucial for optimizing database performance post-migration. The following methods can be employed to detect:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class=""&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;Query Store (QDS):&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Use QDS post-migration during load testing to track expensive queries based on cost and surface performance regressions caused by type mismatches. Review execution plans captured in QDS for conversion-related patterns.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt; You can also use custom script like below to query the QDS:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;USE &amp;lt;[Replace_with_actual_DB_name]&amp;gt; -- Replace with the actual database name 
GO 

SELECT TOP (100) DB_NAME() AS [Database], qt.query_sql_text AS [Query Text], 
rs.last_execution_time AS [Last Execution Time], 
rs.avg_cpu_time AS [Avg Worker Time], rs.max_cpu_time AS [Max Worker Time], rs.avg_duration AS [Avg Elapsed Time], 
rs.max_duration AS [Max Elapsed Time], rs.avg_logical_io_reads AS [Avg Logical Reads], 
rs.max_logical_io_reads AS [Max Logical Reads], rs.count_executions AS [Execution Count], 
q.last_compile_start_time AS [Creation Time], CAST(p.query_plan AS XML) AS [Query Plan] 
FROM sys.query_store_query_text AS qt JOIN sys.query_store_query AS q 
ON qt.query_text_id = q.query_text_id JOIN sys.query_store_plan AS p 
ON q.query_id = p.query_id JOIN sys.query_store_runtime_stats AS rs 
ON p.plan_id = rs.plan_id 
WHERE CAST(p.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%' 
AND qt.query_sql_text NOT LIKE '%sys.query_store%' 
ORDER BY rs.avg_cpu_time DESC;&lt;/LI-CODE&gt;
&lt;P class=""&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Execution Plans&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/EM&gt; For the expensive queries, in SSMS, hover over operators like Index Scan to inspect the Predicate. If implicit conversion exists, the plan includes something like “CONVERT_IMPLICIT(&amp;lt;data_type&amp;gt;, ..”&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class=""&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;XML Plan:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;For a confirmation of above, reviewing the underlying XML execution plan confirms whether implicit conversion is occurring and on which side of the comparison. This technique is particularly valuable when working with parameterized queries or when graphical plan warnings are insufficient. Look for elements like below in the XML plan:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN data-contrast="auto"&gt;&amp;lt;Warnings&amp;gt; &amp;lt;PlanAffectingConvert ConvertIssue="Seek Plan" Expression="CONVERT_IMPLICIT(.. &amp;lt;/Warnings&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class=""&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Plan Cache Inspection:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Custom scripts can be written to scan the Azure SQL plan cache for any instances of CONVERT_IMPLICIT operations. Below is one such script that can be used to find.&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;SELECT TOP (100) DB_NAME(B.[dbid]) AS [Database], B.[text] AS [SQL_text], 
A.total_worker_time AS [Total Worker Time], A.total_worker_time / A.execution_count AS [Avg Worker Time], 
A.max_worker_time AS [Max Worker Time], A.total_elapsed_time / A.execution_count AS [Avg Elapsed Time], 
A.max_elapsed_time AS [Max Elapsed Time], A.total_logical_reads / A.execution_count AS [Avg Logical Reads],
 A.max_logical_reads AS [Max Logical Reads], A.execution_count AS [Execution Count], 
A.creation_time AS [Creation Time], C.query_plan AS [Query Plan] 
FROM sys.dm_exec_query_stats AS A WITH (NOLOCK) 
CROSS APPLY sys.dm_exec_sql_text(A.plan_handle) AS B CROSS APPLY sys.dm_exec_query_plan(A.plan_handle) AS C 
WHERE CAST(C.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%' 
AND B.[dbid] = DB_ID() AND B.[text] NOT LIKE '%sys.dm_exec_sql_text%' 
ORDER BY A.total_worker_time DESC&lt;/LI-CODE&gt;
&lt;P class=""&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;XE event: &lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;Extended Events (XE) is valuable in support scenarios when Query Store or telemetry data alone can't pinpoint issues like implicit conversions, especially if plans aren't cached or historical data lacks detail. XE provides real-time capture of plan-affecting convert events, offering granular insights into query behavior that QS might miss during short-lived or dynamic workloads. However, use it sparingly due to overhead, as a targeted diagnostic tool rather than a broad solution. You can use below script to turn it off. Remember to stop and drop the event immediately when you are done collecting.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'Detect_Conversion_Performance_Issues')
    DROP EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER;
GO

CREATE EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER 
ADD EVENT sqlserver.plan_affecting_convert(
    ACTION(sqlserver.database_name, sqlserver.sql_text)
    WHERE ([sqlserver].[database_name] = N'&amp;lt;Replace_with_your_DB_name&amp;gt;')  -- Replace your DB name
)
ADD TARGET package0.ring_buffer 
WITH (
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF
);
GO

ALTER EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER STATE = START;

-- View the raw Extended Events buffer
SELECT 
    s.name AS session_name,
    t.target_name,
    CAST(t.target_data AS XML) AS raw_buffer_xml
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address
WHERE s.name = 'Detect_Conversion_Performance_Issues';&lt;/LI-CODE&gt;
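&lt;P&gt;When collection is finished, stop and drop the session to remove its overhead:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Stop and remove the diagnostic session once you are done collecting
ALTER EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER STATE = STOP;
DROP EVENT SESSION [Detect_Conversion_Performance_Issues] ON SERVER;
&lt;/LI-CODE&gt;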
&lt;P class=""&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Documentation Reference:&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;Microsoft docs on&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/sql/t-sql/data-types/data-type-conversion-database-engine" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;conversion&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;and&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://learn.microsoft.com/sql/t-sql/data-types/data-type-precedence-transact-sql" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;precedence&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-contrast="auto"&gt; help explain engine behavior and mappings around implicit conversion triggers. This close look at them along with app developers during the schema/code conversion phase can help better understanding and mitigation.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4 aria-level="7"&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Implicit Conversion: Real-World&amp;nbsp;Example&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;To evaluate the impact of implicit conversions in Azure SQL during post-migration scenarios, we created a synthetic workload example using a table named dbo.Customers. It contains one million rows and includes columns such as AccountNumber, CustomerName, PhoneNumber, and JoinDate. The AccountNumber, CustomerName, and PhoneNumber columns were initially defined as VARCHAR, and N&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;onclustered&amp;nbsp;indexes&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt; were created on these fields to enable efficient lookups. From the application layer, parameters were passed using NVARCHAR, which mirrors typical real-world ORM behavior particularly in Java-based applications or when migrating from Oracle, where VARCHAR2 frequently stores Unicode characters. This deliberate mismatch allows us to study the real performance consequences of implicit conversions in Azure SQL’s query execution engine. Although enabling SET STATISTICS XML ON can expose implicit conversions during query execution, our approach tries to reflect how these issues are usually uncovered in real-world scenarios where customers are less aware of this issue. In this case, we used Query Store and execution plan XML inspection&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H6 aria-level="6"&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;Problem: Implicit Conversion Due to Type Mismatch:&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;/H6&gt;
&lt;P aria-level="6"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;A NVARCHAR parameter from the application is compared against a VARCHAR column in the database.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;This scenario highlights a silent performance regression that can go unnoticed post-migration without detailed plan inspection&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;.&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;Query Used: DECLARE ACC NVARCHAR(20) = N’ACC000500’; 

SELECT CustomerID, AccountNumber, CustomerName, PhoneNumber 
FROM dbo.Customers 
WHERE AccountNumber = ACC;&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Execution Plan Behavior:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Fig: 1&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;SQL Server applies an implicit CONVERT_IMPLICIT(nvarchar, AccountNumber) on the column side as we can see from Fig 1 when you hover to Index Scan and see the Predicate&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;This disables the use of the&amp;nbsp;nonclustered&amp;nbsp;index on&amp;nbsp;AccountNumber, leading to an Index Scan&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;The&amp;nbsp;XML&amp;nbsp;plan includes a&amp;nbsp;&amp;lt;Warning&amp;gt;&amp;nbsp;tag&amp;nbsp;under &amp;lt;PlanAffectingConvert&amp;gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Extended Events&amp;nbsp;monitoring&amp;nbsp;consistently shows "plan_affecting_convert" warnings&amp;nbsp;indicating&amp;nbsp;suboptimal query plans caused by these conversions&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What's Missing:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Type alignment between the query parameter and the column.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Awareness that even matching string lengths&amp;nbsp;won’t&amp;nbsp;help if encoding mismatches exist.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Impact:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="16" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Index Seek is lost, and full scans are triggered.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="16" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Higher execution times and overall costs are observed.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H6 aria-level="6"&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;Mitigation via Explicit CAST – Matching the Column’s Type&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;/H6&gt;
&lt;P aria-level="6"&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;In some cases, especially during post-migration tuning, application teams may not be able to change the database schema, but developers can update the query to explicitly align data types. This scenario simulates such a mitigation where an NVARCHAR parameter is explicitly cast to VARCHAR to match the column’s data type and avoid implicit conversions.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;Query Used: DECLARE ACC NVARCHAR(20) = N'ACC000500'; SELECT CustomerID, AccountNumber, CustomerName, PhoneNumber FROM dbo.Customers WHERE AccountNumber = CAST(@acc AS VARCHAR(20)); -- Explicit use of CAST&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Execution Plan Behavior:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;SPAN data-contrast="auto"&gt;Fig: 2&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;The CAST operation ensures that the parameter side matches the VARCHAR column type.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="6" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;SQL performs an Index Seek on the IX_AccountNumber index. The Seek Predicates pane, as seen in Fig 2, confirms this, showing “Scalar Operator(CONVERT(…))”&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="7" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;No &amp;lt;Warning&amp;gt; tag appears in the XML execution plan, indicating the absence of implicit conversions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:1440}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What's&amp;nbsp;Fixed:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Type mismatch is resolved on the query side without altering the database schema.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;The query is now&amp;nbsp;SARGable, enabling index usage.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Impact:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="16" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Index Seek is restored, and full scans are avoided.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="16" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Lower execution times and overall costs are observed.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What's&amp;nbsp;Still&amp;nbsp;Missing:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;This&amp;nbsp;still&amp;nbsp;creates a long-term maintainability concern, especially when many queries or columns are affected.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="6" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Developers must remember to manually CAST in every affected query, increasing code complexity and the chance of inconsistency.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="7" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Missed CASTs in other queries can still cause implicit conversions, so the issue isn’t eliminated, just patched locally.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:1440}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H6 aria-level="6"&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;Fix at DB end&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;&amp;nbsp;–&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 6"&gt;Parameter Usage aligned Schema Column type&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335559738&amp;quot;:40,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;/H6&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This fix involves altering the column type to NVARCHAR, aligning it with the NVARCHAR parameter passed from the application. It&amp;nbsp;eliminates&amp;nbsp;implicit conversions and enables index&amp;nbsp;seeks, improving performance. However,&amp;nbsp;it’s&amp;nbsp;a database-side&amp;nbsp;adjustment,&amp;nbsp; the&amp;nbsp;ideal long-term fix lies in ensuring the application sends parameters matching the original column type.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;Query Used: DECLARE ACC NVARCHAR(20) = N'ACC000500'; 

SELECT CustomerID, AccountNumber, CustomerName, PhoneNumber 
FROM dbo.Customers 
WHERE AccountNumber = CAST(@acc AS VARCHAR(20));&lt;/LI-CODE&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Execution Plan Behavior:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Fig:&amp;nbsp;3&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:758}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="8" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;The column type now matches the NVARCHAR parameter, so no conversion is required on either side.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="9" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;As seen in Fig 3, an Index Seek is performed on the updated IX_AccountNumber index. The Seek Predicates confirm this, showing “Scalar Operator(…)”&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="10" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;No &amp;lt;Warning&amp;gt; tag appears in the XML execution plan, indicating the absence of implicit conversions.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:1440}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What's&amp;nbsp;Fixed:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="8" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;The fix is schema-driven and works universally, ensuring consistent performance across tools and interfaces.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="9" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Encoding alignment between the parameter and column removes conversion logic entirely, making query plans stable and predictable.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Impact:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="16" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Indexes&amp;nbsp;remain&amp;nbsp;fully usable without manual intervention in queries.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="16" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="6" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;Application code stays clean; no casts or workarounds are needed.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="16" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="7" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;This is the most sustainable fix but may require coordination with application and DB teams.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What's&amp;nbsp;Still&amp;nbsp;Missing:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:720}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="o" data-font="Courier New" data-listid="15" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:1440,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Courier New&amp;quot;,&amp;quot;469769242&amp;quot;:[9675],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;o&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;multilevel&amp;quot;}" data-aria-posinset="10" data-aria-level="2"&gt;&lt;SPAN data-contrast="auto"&gt;The data type change has storage implications: NVARCHAR consumes 2 bytes per character, roughly doubling storage compared to VARCHAR (see the quick check after this list).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
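&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;A quick way to verify the storage difference is DATALENGTH, which returns the size of an expression in bytes; the same nine-character value doubles in size once it carries the N prefix:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Byte sizes of the same 9-character value as VARCHAR vs NVARCHAR
SELECT DATALENGTH('ACC000500')  AS varchar_bytes,   -- 9
       DATALENGTH(N'ACC000500') AS nvarchar_bytes;  -- 18&lt;/LI-CODE&gt;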
&lt;P&gt;&lt;SPAN data-ccp-props="{&amp;quot;335559685&amp;quot;:1440}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H6&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;Implicit vs Explicit vs Aligned: Execution Plan Behavior Comparison&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;/H6&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 100%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Strong"&gt;Scenario&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Strong"&gt;Predicate Expression&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Strong"&gt;&amp;nbsp;in Exec Plan&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Strong"&gt;Implicit Conversion&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Strong"&gt;Index Seek Used&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;&lt;SPAN data-ccp-charstyle="Strong"&gt;XML&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Strong"&gt;Plan Warning Shown&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335551550&amp;quot;:2,&amp;quot;335551620&amp;quot;:2}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Data Type Mismatch&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-21"&gt;CONVERT_IMPLICIT(nvarchar,&amp;nbsp;AccountNumber) = &lt;a href="javascript:void(0)" data-lia-user-mentions="" data-lia-user-uid="3093839" data-lia-user-login="ACC" class="lia-mention lia-mention-user"&gt;ACC​&lt;/a&gt;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;No (results in scan)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;PlanAffectingConvert&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Explicit Cast in Query&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-21"&gt;AccountNumber&amp;nbsp;=&amp;nbsp;CONVERT(varchar, &lt;a href="javascript:void(0)" data-lia-user-mentions="" data-lia-user-uid="3093839" data-lia-user-login="ACC" class="lia-mention lia-mention-user"&gt;ACC​&lt;/a&gt;)&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Matching Data Types (NVARCHAR)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-21"&gt;AccountNumber&amp;nbsp;= &lt;a href="javascript:void(0)" data-lia-user-mentions="" data-lia-user-uid="3093839" data-lia-user-login="ACC" class="lia-mention lia-mention-user"&gt;ACC​&lt;/a&gt;&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Yes&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center"&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;No&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P aria-level="7"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4 aria-level="7"&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Best Practices for Managing Implicit Conversions&amp;nbsp;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;STRONG&gt;&lt;EM&gt;Refactoring Application:&lt;/EM&gt;&lt;/STRONG&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;Legacy systems, especially those using dynamic SQL or lacking strict type enforcement, are prone to implicit conversion issues. Refactor your application code to leverage strongly typed variables and parameter declarations to ensure data type consistency at the source, minimizing implicit conversions during query execution.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134233117&amp;quot;:false,&amp;quot;134233118&amp;quot;:false,&amp;quot;335551550&amp;quot;:0,&amp;quot;335551620&amp;quot;:0,&amp;quot;335559738&amp;quot;:0,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Explicit Data Type Casting&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;Use CAST or CONVERT functions to explicitly define conversions, reducing reliance on implicit behavior.&amp;nbsp;In our&amp;nbsp;example&amp;nbsp;we have used CAST, but a CONVERT function would have worked equally well. Both approaches explicitly align the parameter type to the column and avoid implicit conversions, enabling index&amp;nbsp;seek.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
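&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For illustration, both forms below produce the same explicit, sargable predicate against the earlier example table:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;DECLARE @acc NVARCHAR(20) = N'ACC000500';

-- CAST form (ANSI standard), as used in the scenarios above
SELECT CustomerID FROM dbo.Customers
WHERE AccountNumber = CAST(@acc AS VARCHAR(20));

-- Equivalent CONVERT form (T-SQL specific)
SELECT CustomerID FROM dbo.Customers
WHERE AccountNumber = CONVERT(VARCHAR(20), @acc);&lt;/LI-CODE&gt;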
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Data Type Alignment&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/EM&gt; When performing heterogeneous migrations that involve different database engines, ensure data types are consistent between the source and target engines. Review the official documentation thoroughly to understand the nuances of data and application convertibility, and the implications, such as additional storage and collation changes, that can negatively affect your business.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Indexing&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;: Create indexes on columns frequently involved in WHERE filters and JOIN predicates, with matching data types, to avoid implicit conversions that would degrade index seeks into scans and to ensure optimal index utilization by the optimizer.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Early Testing&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/EM&gt; Conduct thorough post-migration testing using Query Store (QDS) to surface problem queries, then drill down into execution plans and performance metrics to identify and resolve conversion-related issues. Early collaboration between developer and DBA teams is crucial.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="auto"&gt;Tools and Scripts&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;&lt;STRONG&gt;:&lt;/STRONG&gt;&lt;/EM&gt; Utilize SQL Server Migration Assistant (SSMA) for Oracle to identify and adjust type mappings early, based on your application's needs. Additionally, you can use custom scripts or third-party tools if necessary to detect implicit conversions in the plan cache, as sketched below.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
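&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;A minimal sketch of such a plan-cache check (the TOP value is an arbitrary assumption) looks for cached plans carrying the same PlanAffectingConvert warning discussed in the scenarios above:&lt;/SPAN&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Scan the plan cache for plans that carry a PlanAffectingConvert warning.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (50)
       cp.usecounts,
       st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist('//Warnings/PlanAffectingConvert') = 1
ORDER BY cp.usecounts DESC;&lt;/LI-CODE&gt;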
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-15"&gt;References&amp;nbsp;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/H4&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/sql/t-sql/data-types/data-type-conversion-database-engine" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;https://learn.microsoft.com/sql/t-sql/data-types/data-type-conversion-database-engine&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/sql/t-sql/data-types/data-type-precedence-transact-sql" target="_blank" rel="noopener"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;https://learn.microsoft.com/sql/t-sql/data-types/data-type-preced&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;e&lt;/SPAN&gt;&lt;SPAN data-ccp-charstyle="Hyperlink"&gt;nce-transact-sql&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H4&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Final Thoughts&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;We hope that this post has helped&amp;nbsp;&lt;SPAN data-contrast="auto"&gt;you gain actionable strategies to detect, mitigate, and design around implicit conversions in order to ensure a successful and performant data migration to platforms such as SQL Server or Azure SQL&lt;/SPAN&gt;. If you have feedback or suggestions for improving this post, please contact the&amp;nbsp;&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;Azure Databases SQL Customer Success Engineering Team&lt;/A&gt;. Thanks for your support!&lt;/P&gt;</description>
      <pubDate>Tue, 09 Sep 2025 05:27:35 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/key-considerations-to-avoid-implicit-conversion-issues-in-oracle/ba-p/4442186</guid>
      <dc:creator>Nitish_reddy_kotha</dc:creator>
      <dc:date>2025-09-09T05:27:35Z</dc:date>
    </item>
    <item>
      <title>Optimized Data Transfer from Sybase ASE to Azure SQL via Chunked BCP Processing</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/optimized-data-transfer-from-sybase-ase-to-azure-sql-via-chunked/ba-p/4436624</link>
      <description>&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Introduction&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Enterprises upgrading legacy databases often face challenges in migrating complex schemas and efficiently transferring large volumes of data. Transitioning from SAP ASE (Sybase ASE) to Azure SQL Database is a common strategy to take advantage of enhanced features, improved scalability, and seamless integration with Microsoft services. With business growth, the limitations of the legacy system become apparent: performance bottlenecks, high maintenance costs, and difficulty in integrating with modern cloud solutions.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/ssma/sybase/getting-started-with-ssma-for-sybase-sybasetosql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;SQL Server Migration Assistant for SAP Adaptive Server Enterprise (&lt;/A&gt;SSMA) Automates migration from SAP ASE to SQL Server, Azure SQL Database and Azure SQL Managed Instance. &amp;nbsp;While SSMA provides a complete end-to-end migration solution, the custom BCP script &lt;SPAN class="lia-text-color-6"&gt;(&lt;STRONG&gt;ASEtoSQLdataloadusingbcp.sh&lt;/STRONG&gt;)&lt;/SPAN&gt; enhances this process by enabling parallel data transfers, making it especially effective for migrating large databases with minimal downtime.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Script Workflow&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/H2&gt;
&lt;P&gt;One of the most common challenges we hear from customers migrating from Sybase ASE to SQL Server is: “How can we speed up data transfer for large tables without overwhelming the system?” When you are dealing with hundreds of tables or millions of rows, serial data loads can quickly become a bottleneck.&lt;/P&gt;
&lt;P&gt;To tackle this, we created a script called&lt;SPAN class="lia-text-color-6"&gt; &lt;STRONG&gt;ASEtoSQLdataloadusingbcp.sh&lt;/STRONG&gt; &lt;/SPAN&gt;that automates and accelerates the data migration process using parallelism. It starts by reading configuration settings from external files and retrieves a list of tables, either from the source database or from a user-provided file. For each table, the script checks if it meets criteria for chunking based on available indexes. If it does, the table is split into multiple views, and each view is processed in parallel using BCP, significantly reducing the overall transfer time. If chunking is not possible, the script performs a standard full-table transfer.&lt;/P&gt;
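&lt;P&gt;To make the chunking idea concrete, here is a simplified sketch of what the generated range views look like; the table name, key column, and boundary values are illustrative assumptions, not the script's actual output. Each view becomes an independent bcp export source, so the four extracts can run in parallel:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Illustrative chunk views over an indexed key (names and ranges assumed)
CREATE VIEW dbo.Orders_chunk_1 AS SELECT * FROM dbo.Orders WHERE OrderID BETWEEN 1 AND 250000;
GO
CREATE VIEW dbo.Orders_chunk_2 AS SELECT * FROM dbo.Orders WHERE OrderID BETWEEN 250001 AND 500000;
GO
CREATE VIEW dbo.Orders_chunk_3 AS SELECT * FROM dbo.Orders WHERE OrderID BETWEEN 500001 AND 750000;
GO
CREATE VIEW dbo.Orders_chunk_4 AS SELECT * FROM dbo.Orders WHERE OrderID &amp;gt;= 750001;
GO&lt;/LI-CODE&gt;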
&lt;P&gt;Throughout the entire process, detailed logging ensures everything is traceable and easy to monitor. This approach gives users both speed and control, helping migrations finish faster without sacrificing reliability.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Prerequisites&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Before running the script, ensure the following prerequisites are met:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Database schema is converted and deployed using &lt;A href="https://learn.microsoft.com/en-us/sql/ssma/sybase/sql-server-migration-assistant-for-sybase-sybasetosql?view=sql-server-ver17" target="_blank" rel="noopener"&gt;SQL Server Migration Assistant&lt;/A&gt; (SSMA).&lt;/LI&gt;
&lt;LI&gt;Both the source (SAP ASE) and target (Azure SQL DB) databases are accessible from the host system running the script.&lt;/LI&gt;
&lt;LI&gt;Source ASE database should be hosted on Unix or Linux.&lt;/LI&gt;
&lt;LI&gt;The target SQL Server can be hosted on Windows, Linux, or in Azure (for example, Azure SQL Database).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Configuration Files&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;The configuration aspect of the solution is designed for clarity and reuse. All operational parameters are defined in external files; the script uses the following external config files during execution.&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN class="lia-text-color-14"&gt;&lt;STRONG&gt;&lt;U&gt;bcp_config.env&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;BR /&gt;&lt;BR /&gt;The primary configuration file, &lt;STRONG&gt;bcp_config.env&lt;/STRONG&gt;, contains connection settings and control flags. In the screenshot below you can see the format of the file.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;&amp;nbsp;&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-14"&gt;&lt;STRONG&gt;&lt;U&gt;chunking_config.txt&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;The &lt;STRONG&gt;chunking_config.txt&lt;/STRONG&gt; file defines the tables to be partitioned, identifies the primary key column for chunking, and specifies the number of chunks into which the data should be divided.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-14"&gt;&lt;STRONG&gt;&lt;U&gt;table_list.txt&lt;/U&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Use table_list.txt as the input if you want to migrate a specific list of tables.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Steps to run the script&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Script Execution Log&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;The script log records tables copied, timestamps, and process stages.&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-10"&gt;Performance Baseline&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;A test was run on a 32-core system with a 10 GB table (2,621,440 rows) on both ASE and SQL. Migration using SSMA took &lt;STRONG&gt;about 3 minutes&lt;/STRONG&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the BCP script with 10 chunks, the entire export and import finished in 1 minute 7 seconds. This demonstrates how parallelism and chunk-based processing greatly boost efficiency for large datasets. &lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;SPAN class="lia-text-color-7"&gt;&lt;STRONG&gt;&lt;EM&gt;Disclaimer&lt;/EM&gt;&lt;/STRONG&gt;&lt;STRONG&gt;&lt;EM&gt;:&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt; These results are for illustration purposes only. Actual performance will vary depending on system hardware (CPU cores, memory, disk I/O), database configurations, network latency, and table structures. We recommend validating performance in dev/test to establish a baseline.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;SPAN class="lia-text-color-15"&gt;General Recommendation&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Larger batch sizes (e.g., 10K–50K) can boost throughput if disk IOPS and memory are sufficient, as they lower commit overhead.&lt;/LI&gt;
&lt;LI&gt;More chunks increase parallelism and throughput if CPU resources are available; otherwise, they may cause contention when CPU usage is high.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-10"&gt;Monitor system’s CPU and IOPS:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;When the system has high idle CPU and low I/O wait, increasing both the number of chunks and the batch size is appropriate.&lt;/LI&gt;
&lt;LI&gt;If CPU load or I/O wait is high, reduce batch size or chunk count to avoid exhausting resources.&lt;/LI&gt;
&lt;LI&gt;This method aligns BCP operations with your system's existing capacity and performance characteristics. &lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H6&gt;&lt;U&gt;&lt;SPAN class="lia-text-color-10"&gt;&lt;STRONG&gt;Steps to Download the script&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/U&gt;&lt;BR /&gt;&lt;BR /&gt;Please send an email to the alias: &lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt; and we will send you the download link with instructions.&lt;BR /&gt;&lt;BR /&gt;&lt;/H6&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-10"&gt;What’s Next: Upcoming Enhancements to the Script&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Smart Chunking for Tables Without Unique Clustered Indexes&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Enable chunk-based export using any &lt;STRONG&gt;unique key column&lt;/STRONG&gt;, even if the table lacks a unique clustered index.&lt;/LI&gt;
&lt;LI&gt;This will extend chunking capabilities to a broader range of tables, ensuring better parallelization.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;Multi-Table Parallel BCP with Intelligent Chunking&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Introduce full &lt;STRONG&gt;parallel execution across multiple tables&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;If a table qualifies for chunking, its export/import will also run in parallel internally, delivering &lt;STRONG&gt;two-tier parallelism&lt;/STRONG&gt;: across and within tables.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI&gt;LOB Column Handling (TEXT, IMAGE, BINARY)&lt;/LI&gt;
&lt;UL&gt;
&lt;LI&gt;Add robust support for &lt;STRONG&gt;large object data types&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Include optimized handling strategies for exporting and importing tables with TEXT, IMAGE, or BINARY columns, ensuring data fidelity, and avoiding performance bottlenecks.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;SPAN class="lia-text-color-14"&gt;Feedback and Suggestions&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;If you have feedback or suggestions for improving this asset, please contact the Data SQL Ninja Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). &lt;BR /&gt;Note: For additional information about migrating various source databases to Azure, see the &lt;A href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Thank you for your support!&lt;/P&gt;</description>
      <pubDate>Tue, 07 Oct 2025 14:33:30 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/optimized-data-transfer-from-sybase-ase-to-azure-sql-via-chunked/ba-p/4436624</guid>
      <dc:creator>Manish_Kumar_Pandey</dc:creator>
      <dc:date>2025-10-07T14:33:30Z</dc:date>
    </item>
    <item>
      <title>Temporal Table Replication in SQL Server: Common Barriers and Solutions</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/temporal-table-replication-in-sql-server-common-barriers-and/ba-p/4422032</link>
      <description>&lt;H3&gt;Introduction&lt;/H3&gt;
&lt;P&gt;Transactional replication is a SQL Server feature that copies and synchronizes data and database objects across servers. It generally begins with a snapshot of the publication database objects and data. After this initial snapshot, any data changes and schema modifications made at the Publisher are delivered to the Subscriber as they occur, typically in near real time. These data changes are applied to the Subscriber in the same order and within the same transaction boundaries as at the Publisher, maintaining transactional consistency within a publication.&lt;/P&gt;
&lt;P&gt;Standard transactional replication in SQL Server does not provide support for system-versioned temporal tables. This constraint presents difficulties for organizations aiming to replicate historical data maintained in temporal columns, such as ValidFrom and ValidTo. The challenge persists even when system versioning is disabled, yet there remains a requirement to retain the original values within the target database.&lt;/P&gt;
&lt;H3&gt;Understanding Temporal Tables&lt;/H3&gt;
&lt;P&gt;System-versioned temporal tables are a specialized form of user table designed to retain a comprehensive record of all data modifications. These tables facilitate point-in-time analysis by automatically recording historical changes. Each temporal table contains two datetime2 period columns that specify the validity duration for each row. In addition to the current table, an associated history table preserves previous versions of rows whenever updates or deletions take place.&lt;/P&gt;
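&lt;P&gt;For reference, a minimal definition of such a table is sketched below; the column names are illustrative, mirroring the dbo.Department example used later in this post:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Minimal system-versioned temporal table (illustrative definition)
CREATE TABLE dbo.Department
(
    DeptID    int         NOT NULL PRIMARY KEY CLUSTERED,
    DeptName  varchar(50) NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DepartmentHistory));&lt;/LI-CODE&gt;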
&lt;H3&gt;Scenario &amp;amp; Challenge&lt;/H3&gt;
&lt;P&gt;In one of the migration scenarios, the customer faced an issue where system versioning was disabled, but there was still a requirement to replicate data from the ValidFrom and ValidTo columns to the target database without modification. Although temporal tables are commonly used for auditing and historical analysis, replicating them within a transactional replication setup can present specific technical challenges:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;System managed period columns complicate schema compliance.&lt;/LI&gt;
&lt;LI&gt;Mismatch in ValidFrom and ValidTo columns across environments can compromise audit reliability.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;As transactional replication currently does not support temporal columns, we devised the following solution to address this requirement.&lt;/P&gt;
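&lt;P&gt;Before applying the workaround below, it helps to inventory which tables in the publication database are system-versioned; sys.tables exposes this directly:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- List system-versioned temporal tables and their history tables
SELECT t.name AS temporal_table,
       h.name AS history_table
FROM sys.tables AS t
LEFT JOIN sys.tables AS h ON t.history_table_id = h.object_id
WHERE t.temporal_type = 2;  -- 2 = SYSTEM_VERSIONED_TEMPORAL_TABLE&lt;/LI-CODE&gt;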
&lt;H3&gt;Common Error Example&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;When configuring replication for an article that includes a system-versioned temporal table, the setup process may encounter failures due to SQL Server limitations related to system-generated columns.&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI&gt;In certain situations where system versioning is disabled, it may still be necessary to replicate the initial values of the ValidFrom and ValidTo period columns on the target system. However, during the configuration of transactional replication, the snapshot application process can fail on these columns, resulting in the following error:&lt;/LI&gt;
&lt;/UL&gt;
&lt;H6 class="lia-indent-padding-left-60px"&gt;&lt;STRONG&gt;Error message:&lt;/STRONG&gt;&lt;/H6&gt;
&lt;img /&gt;
&lt;P class="lia-clear-both"&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;This issue arises because SQL Server considers these columns system-generated and restricts direct inserts, including during replication. The following workaround addresses this situation.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
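&lt;P&gt;To illustrate the restriction (the values here are made up), a direct insert into the period columns is rejected while they remain GENERATED ALWAYS, which is exactly what the snapshot application attempts:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Fails while ValidFrom/ValidTo are GENERATED ALWAYS period columns:
-- "Cannot insert an explicit value into a GENERATED ALWAYS column..."
INSERT INTO dbo.Department (DeptID, DeptName, ValidFrom, ValidTo)
VALUES (10, 'Finance', '2020-01-01', '9999-12-31 23:59:59.9999999');&lt;/LI-CODE&gt;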
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Workaround&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To successfully replicate temporal tables, follow these steps:&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;/U&gt;&amp;nbsp; This approach works in scenarios where minimal downtime can be accommodated.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Predefine Table Schema on Target&lt;/STRONG&gt;: Ensure that the source table schema exists on the target and matches with the source schema.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Disable System Versioning Temporarily&lt;/STRONG&gt;: Before configuring replication, disable system versioning on the temporal table. This allows replication to treat it like a regular table.&lt;/LI&gt;
&lt;/OL&gt;
&lt;LI-CODE lang="sql"&gt;ALTER TABLE [dbo].[Department] SET (SYSTEM_VERSIONING = OFF);&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; 3. When you &lt;STRONG&gt;set SYSTEM_VERSIONING = OFF and don't drop the SYSTEM_TIME period&lt;/STRONG&gt;, the system continues to update the period columns for every insert and update operation. Use the script below to remove the period for SYSTEM_TIME.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ALTER TABLE dbo.Department DROP PERIOD FOR SYSTEM_TIME;&lt;/LI-CODE&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp;4. After this step, we can use the below script step by step to configure replication.&lt;/P&gt;
&lt;P class="lia-align-justify lia-indent-padding-left-30px"&gt;&lt;STRONG&gt;Replication Setup Steps&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Set a replication database option for the specified database. This stored procedure is executed at the Publisher or Subscriber on any database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;use master
GO

exec sp_replicationdboption 
@dbname = N'SourceDBNAme', 
@optname = N'publish', 
@value = N'true'
GO&lt;/LI-CODE&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI&gt;Create a transactional publication. This stored procedure is executed at the Publisher on the publication database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;use [SourceDBName]
GO

exec sp_addpublication 
@publication = N'PublicationName', 
@description = N'Transactional Replication publication of database',
@sync_method = N'concurrent',
@retention = 0, 
@allow_push = N'true', 
@allow_pull = N'true', 
@allow_anonymous = N'true', 
@enabled_for_internet = N'false',
@SnapShot_in_defaultfolder = N'true', 
@compress_snapshot = N'false', 
@ftp_port = 21,
@allow_subscription_copy = N'false', 
@add_to_active_directory = N'false',
@repl_freq = N'continuous', 
@status = N'active', 
@independent_agent = N'true', 
@immediate_sync = N'true', 
@allow_sync_tran = N'false', 
@allow_queued_tran = N'false',
@allow_dts = N'false', 
@replicate_ddl = 1, 
@allow_initialize_from_backup = N'false', 
@enabled_for_p2p = N'false', 
@enabled_for_het_sub = N'false'
GO&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create the Snapshot Agent for the specified publication. This stored procedure is executed at the Publisher on the publication database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;use [SourceDBName]
GO

exec sp_addpublication_snapshot
@publication = N'PublicationName',
@frequency_type = 1,
@frequency_interval = 1,
@frequency_relative_interval = 1, 
@frequency_recurrence_factor = 0, 
@frequency_subday = 8, 
@frequency_subday_interval = 1, 
@active_start_time_of_day = 0,
@active_end_time_of_day = 235959,
@active_start_date = 0, 
@active_end_date = 0,
@publisher_security_mode = 0, 
@job_login = N'',
@job_password = N'', 
@publisher_login = N'', 
@publisher_password = N''&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Create an article and add it to a publication. This stored procedure is executed at the Publisher on the publication database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;use [SourceDBName]
GO

exec sp_addarticle 
@publication = N'PublicationName', 
@article = N'ArticleName',
@source_owner = N'Source Schema Name', 
@source_object = N'SourceTableName',
@type = N'logbased',
@description = null,
@creation_script = null,
@pre_creation_cmd = N'truncate', 
@schema_option = 0x000000000803509F, 
@identityrangemanagementoption = N'manual', 
@destination_table = N'Destination Table Name',
@destination_owner = N'Destination Schema Name',
@vertical_partition = N'false',
@ins_cmd = N'CALL sp_MSins_dboEmployee',
@del_cmd = N'CALL sp_MSdel_dboEmployee', 
@upd_cmd = N'SCALL sp_MSupd_dboEmployee'
GO&lt;/LI-CODE&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;UL&gt;
&lt;LI class="lia-align-justify"&gt;Add a subscription to a publication and set the Subscriber status. This stored procedure is executed at the Publisher on the publication database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;use [SourceDBName]
GO

exec sp_addsubscription 
@publication = N'PublicationNAme', 
@subscriber = N'Azure SQL DB Server NAme', 
@destination_db = N'Target DB Name', 
@subscription_type = N'Push', 
@sync_type = N'automatic', 
@article = N'all', 
@update_mode = N'read only', 
@subscriber_type = 0
GO&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Add a new scheduled agent job used to synchronize a push subscription to a transactional publication. This stored procedure is executed at the Publisher on the publication database.&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;Use [SourceDBNAme]
GO

exec sp_addpushsubscription_agent 
@publication = N'PublicationNAme', 
@subscriber = N'Azure SQL DB Server NAme', 
@subscriber_db = N'Target DB Name', 
@job_login = N'', 
@job_password = null, 
@subscriber_security_mode = 0,
@subscriber_login = N'', 
@subscriber_password = null, 
@frequency_type = 64, 
@frequency_interval = 1, 
@frequency_relative_interval = 1, 
@frequency_recurrence_factor = 0, 
@frequency_subday = 4, 
@frequency_subday_interval = 5, 
@active_start_time_of_day = 0, 
@active_end_time_of_day = 235959, 
@active_start_date = 0, 
@active_end_date = 0, 
@dts_package_location = N'Distributor'
GO&lt;/LI-CODE&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;5. Once you have performed all the above steps and completed the data migration on target database you need to stop/delete the replication and again add period for system_time on the target table and enable system versioning.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;ALTER TABLE dbo.Department ADD PERIOD FOR SYSTEM_TIME(&amp;lt;ValidFrom&amp;gt;,&amp;lt;ValidTo&amp;gt;);

ALTER TABLE [dbo].[Department] SET (SYSTEM_VERSIONING = ON);&lt;/LI-CODE&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Note&lt;/STRONG&gt;&lt;/U&gt;:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &amp;lt;ValidFrom&amp;gt; and &amp;lt;ValidTo&amp;gt; columns are datetime2 columns defined as PERIOD FOR SYSTEM_TIME, using GENERATED ALWAYS AS ROW START and ROW END. Refer to the period column names you used when creating the temporal table, and use the same names when re-adding the period columns in the script above.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
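&lt;P&gt;If the history table already exists on the target and should be reattached rather than recreated, the SYSTEM_VERSIONING clause can name it explicitly. A minimal sketch, assuming the history table is named [dbo].[DepartmentHistory]:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Reattach an existing history table; DATA_CONSISTENCY_CHECK validates the temporal relationship
ALTER TABLE [dbo].[Department]
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[DepartmentHistory], DATA_CONSISTENCY_CHECK = ON));&lt;/LI-CODE&gt;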
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;Conclusion&lt;/H3&gt;
&lt;P&gt;Migrating temporal tables within a transactional replication environment involves managing system-versioned features appropriately. Temporarily disabling system versioning and removing the SYSTEM_TIME period allows for adherence to schema requirements and facilitates data replication. After completing replication on the target platform, re-enabling system versioning reinstates temporal table functionality while maintaining data integrity.&lt;/P&gt;
&lt;P&gt;This workaround ensures that your replication strategy remains robust while preserving the audit trail and historical insights offered by temporal tables.&lt;/P&gt;</description>
      <pubDate>Fri, 18 Jul 2025 18:06:14 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/temporal-table-replication-in-sql-server-common-barriers-and/ba-p/4422032</guid>
      <dc:creator>Sonali_Solanki</dc:creator>
      <dc:date>2025-07-18T18:06:14Z</dc:date>
    </item>
    <item>
      <title>Seamless Online Homogeneous SQL Family Migration via Azure Data Factory using SQL CDC</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/seamless-online-homogeneous-sql-family-migration-via-azure-data/ba-p/4376314</link>
      <description>&lt;P&gt;Migrating data across SQL platforms, be it SQL Server, Azure SQL Database, Managed Instance, or SQL Server on IaaS, often involves operational complexity and potential downtime. Azure Data Factory (ADF) removes those barriers by enabling seamless, logical data movement across these services in either direction. Whether using SQL Change Data Capture (CDC) for near-zero downtime or traditional batch-based strategies, ADF ensures data consistency and operational continuity throughout the process.&lt;/P&gt;
&lt;P&gt;While physical data migration strategies remain valuable in many scenarios, this blog focuses on how ADF delivers a unified, scalable approach to logical database migration, helping modernize database environments with minimal downtime.&lt;/P&gt;
&lt;H3&gt;Prerequisites&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;NOTE&lt;/STRONG&gt;:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please make sure to go through the limitations of CDC as this blog doesn't cover those.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server?view=sql-server-ver15#limitations" target="_blank" rel="noopener"&gt;SQL CDC Limitations&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/known-issues-and-errors-change-data-capture?view=sql-server-ver17" target="_blank" rel="noopener"&gt;Known Issues with CDC&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Before proceeding, please ensure you have the following prerequisites:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;An Azure subscription.&lt;/LI&gt;
&lt;LI&gt;Access to Azure Data Factory.&lt;/LI&gt;
&lt;LI&gt;Source and target databases, such as SQL Server, Azure SQL Database, Azure SQL MI etc.&lt;/LI&gt;
&lt;LI&gt;Enable &lt;STRONG&gt;Change Data Capture (CDC)&amp;nbsp;&lt;/STRONG&gt;on the source database for online migration.
&lt;UL&gt;
&lt;LI&gt;CDC captures changes like insert, update, and delete (DML) operations in the source database, allowing near real-time replication to a target database with minimal latency. To enable CDC, run:&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;LI-CODE lang="sql"&gt;-- Enable CDC on the database

EXEC sys.sp_cdc_enable_db;

-- Enable CDC on the source table 

EXEC sys.sp_cdc_enable_table

@source_schema = N'dbo',

@source_name = N'SourceTable',

 @role_name = NULL;&lt;/LI-CODE&gt;
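&lt;P&gt;Optionally, you can verify that CDC is active before building the pipelines; the following catalog queries are a quick check:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Is CDC enabled at the database level?
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = DB_NAME();

-- Which tables are tracked by CDC?
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.is_tracked_by_cdc = 1;&lt;/LI-CODE&gt;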
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Azure Data Factory Provisioning &lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;ADF should be provisioned to provide a runtime environment for executing the pipeline.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Self-hosted Integration Runtime (SHIR)&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;SHIR is required to connect to the data source or destination which is not natively reachable by Azure (e.g., on-premises, private VNET, behind firewall).&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Linked Services&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;These should be created to connect to the source and target.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Datasets&amp;nbsp;&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;Datasets identify data within different data stores, such as tables, files, folders, and documents.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Performance Optimization&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;To speed up the process, primary keys, non-clustered indexes and constraints should be dropped on the target to reduce blocking/deadlocks and minimize resource contention.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Script Components&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Adf_source.sql&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;This script should be deployed on the source SQL Server. It will populate information in the &lt;STRONG&gt;dbo.data_extraction_config_adf &lt;/STRONG&gt;table to run Change Data Capture (CDC) and the initial load pipeline.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Adf_target.sql&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;This script should be deployed on the target SQL server. It will create stored procedures to help merge CDC changes and create objects necessary for running pipelines smoothly.&lt;/LI&gt;
&lt;/UL&gt;
&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Master tables&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;dbo.cdc_watermark_adf &lt;/STRONG&gt;contains the last processed watermark for each CDC-enabled table.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;dbo.data_extraction_config_adf &lt;/STRONG&gt;contains information about the heap tables for the initial load and the CDC tables.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;dbo.sqlqueries &lt;/STRONG&gt;contains information about the clustered tables for the initial load.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
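&lt;P&gt;The actual table definitions ship with the Adf_source.sql/Adf_target.sql scripts (see the note at the end of this post). Purely as an illustration, the watermark master table might look something like the sketch below; every column name here is an assumption:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Hypothetical sketch of the CDC watermark master table (not the shipped definition)
CREATE TABLE dbo.cdc_watermark_adf (
    schema_name    sysname     NOT NULL,
    table_name     sysname     NOT NULL,
    last_lsn       binary(10)  NULL,  -- last CDC log sequence number applied to the target
    last_run_time  datetime2   NULL,
    CONSTRAINT pk_cdc_watermark_adf PRIMARY KEY (schema_name, table_name)
);&lt;/LI-CODE&gt;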
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Let's take a deep dive into the pipelines that handle the different scenarios.&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Pipeline 1: ClusteredTableMigration_Initial&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;This pipeline migrates data only from clustered tables.&lt;/LI&gt;
&lt;LI&gt;The &lt;STRONG&gt;dbo.sqlqueries&lt;/STRONG&gt; table automatically populates with clustered table info via the pipeline (Stored Procedure Activity).&lt;/LI&gt;
&lt;LI&gt;Ensure the source table schema matches the target table schema. To run the pipeline for specific tables, set the&amp;nbsp;&lt;STRONG&gt;IsActive &lt;/STRONG&gt;flag to 0 (inactive) or 1 (active) in the &lt;STRONG&gt;sqlqueries&lt;/STRONG&gt; table or add the table name in the Lookup activity.
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Pipeline 2: HeapTableMigration_Initial&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;This pipeline is designated for migrating heap tables. Prior to executing this pipeline, ensure that the heap table information has been added to the &lt;STRONG&gt;dbo.data_extraction_config_adf&lt;/STRONG&gt; table.&lt;/LI&gt;
&lt;LI&gt;The source table schema should be synchronized with the target table schema.&lt;/LI&gt;
&lt;LI&gt;To execute the pipeline for a set of tables, the &lt;STRONG&gt;IsActive&lt;/STRONG&gt; flag may be set to 0 (inactive) or 1 (active) in the &lt;STRONG&gt;dbo.data_extraction_config_adf&lt;/STRONG&gt; table.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Pipeline 2: Heap Table Migration&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Pipeline 3: CDCTableMigration&lt;/STRONG&gt;
&lt;UL&gt;
&lt;LI&gt;This pipeline facilitates the migration of clustered tables with Change Data Capture (CDC) enabled. Prior to execution, please ensure that the relevant information for these clustered tables is entered into the &lt;STRONG&gt;dbo.data_extraction_config_adf&lt;/STRONG&gt; table.&lt;/LI&gt;
&lt;LI&gt;Ensure the table schema is synchronized with the target schema, and that all tables intended for CDC synchronization possess a primary key and matching schema definition on the target system (excluding constraints and non-clustered indexes).&lt;/LI&gt;
&lt;LI&gt;To execute the pipeline for specific tables, the&amp;nbsp;&lt;STRONG&gt;IsActive&lt;/STRONG&gt; flag can be set to 0 (inactive) or 1 (active) in the &lt;STRONG&gt;dbo.data_extraction_config_adf&amp;nbsp;&lt;/STRONG&gt;table.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Pipeline 3: CDC Table Migration&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Schedule the Pipeline - For CDC load only&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create a trigger&lt;/STRONG&gt;: Create a trigger to schedule the pipeline to run at regular intervals (e.g., every 5-30 minutes based on application requirements) to capture and apply changes incrementally.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Monitor the pipeline&lt;/STRONG&gt;: Monitor the pipeline runs to verify that the data is being migrated and synchronized accurately.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Cutover and cleanup&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once the delta changes are fully synchronized between the source and target databases, initiate the cutover: set the source database to read-only, repoint the connection strings of the application (and any other impacted apps, agent jobs, etc.) to the new target database, then clean up by deleting the helper stored procedures in the target database, and by stopping CDC and removing the helper tables and stored procedures in the source database.&lt;/P&gt;
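&lt;P&gt;A minimal T-SQL sketch of the cutover steps on the source (database name is a placeholder; coordinate the read-only switch with application owners first):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Freeze the source for the final delta sync
ALTER DATABASE [SourceDBName] SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- After cutover, re-open the source briefly to clean up CDC
ALTER DATABASE [SourceDBName] SET READ_WRITE WITH ROLLBACK IMMEDIATE;
USE [SourceDBName];
EXEC sys.sp_cdc_disable_db;&lt;/LI-CODE&gt;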
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Conclusion&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Using Azure Data Factory allows for both online and offline data migration with minimal downtime, ensuring consistency between source and target databases. Change Data Capture enables near real-time data migration, suitable for environments requiring continuous data synchronization.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note &lt;/STRONG&gt;- To get ADF Pipelines and T-SQL Queries mentioned in this blog please reach out to our team alias : &lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Nov 2025 05:20:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/seamless-online-homogeneous-sql-family-migration-via-azure-data/ba-p/4376314</guid>
      <dc:creator>Vinod_Kumar_MSFT</dc:creator>
      <dc:date>2025-11-06T05:20:45Z</dc:date>
    </item>
    <item>
      <title>Hidden pitfalls of Temporary Tables in Oracle to PostgreSQL Migrations</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/hidden-pitfalls-of-temporary-tables-in-oracle-to-postgresql/ba-p/4416636</link>
      <description>&lt;P&gt;If you have been relying on Oracle Database as your primary system for analytics and the generation of MIS reports, you are probably familiar with the use of temporary tables within stored procedures. These temporary tables play an important role in managing intermediate data, performing complex calculations, and streamlining the overall data processing workflow.&amp;nbsp;Temporary tables help in handling large volumes of data, break down queries into manageable steps, and produce complex analytical reports efficiently.&lt;/P&gt;
&lt;P&gt;However, when organizations migrate these systems to Azure PostgreSQL, most automated code converters simply translate Oracle temporary tables into Azure PostgreSQL temporary tables without highlighting the key difference.&lt;/P&gt;
&lt;H1&gt;Understand the misconception&lt;/H1&gt;
&lt;P&gt;In Oracle the Global Temporary Table is a persistent schema object whose structure is permanent, but data is temporary. Internally, Oracle stores all data inserted into a GTT in the temporary tablespace, isolating it per session by using temporary segments that are dynamically allocated and cleaned up at the end of the session or transaction, depending on whether the table is defined with ON COMMIT DELETE ROWS or ON COMMIT PRESERVE ROWS. While the table metadata remains in the data dictionary, the data itself is never written to the redo logs.&lt;/P&gt;
&lt;P&gt;Oracle also introduced Private Temporary Tables in version 18c, which add an extra option, ON COMMIT DROP DEFINITION, that drops the table at transaction commit.&lt;/P&gt;
&lt;P&gt;Azure PostgreSQL also has a temporary table object that supports all three commit clauses available in Oracle (ON COMMIT DELETE ROWS, ON COMMIT PRESERVE ROWS, and ON COMMIT DROP), but the object is private to the session that created it: both its structure and its data are completely invisible to other sessions, and the table itself is dropped automatically when the session ends.&lt;/P&gt;
&lt;P&gt;Oracle Global Temporary Tables have a permanent table definition accessible by all sessions but store data privately per session or transaction, whereas Azure PostgreSQL temporary tables exist only for the duration of a session and are dropped automatically at session end.&lt;/P&gt;
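&lt;P&gt;To make the contrast concrete, here is the PostgreSQL syntax; note that the CREATE statement itself must be repeated in every session, unlike an Oracle GTT whose definition is created once:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- PostgreSQL: both the structure and the data are private to this session,
-- and the table is dropped automatically at session end.
-- (Oracle: CREATE GLOBAL TEMPORARY TABLE creates a permanent, shared definition.)
CREATE TEMPORARY TABLE tmp_report (
    id    integer,
    total numeric
) ON COMMIT PRESERVE ROWS;&lt;/LI-CODE&gt;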
&lt;P&gt;At first glance, this difference might seem trivial, after all, you can simply add a CREATE TABLE statement in your code to recreate the temporary table at the start of every session. But what appears to be a small tweak can quickly spiral into a performance nightmare, overloading your system in ways you wouldn’t expect if not managed carefully.&lt;/P&gt;
&lt;P&gt;Azure PostgreSQL is built on an MVCC architecture, which means even its internal system catalogue tables retain the deleted rows of dropped objects. If you revisit the key difference between the two implementations, you will see that every time a temp table is created and dropped per session, a few rows are added to and then deleted from many system catalogue tables. See the example below.&lt;/P&gt;
&lt;P&gt;The following is the output of pgstattuple for three of the system tables.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: baseline pgstattuple output for pg_class, pg_attribute, and pg_type]&lt;/EM&gt;&lt;/P&gt;
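&lt;P&gt;These counters come from the pgstattuple extension; a query along these lines (assuming the extension is allow-listed and created on the server) reproduces them:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Requires: CREATE EXTENSION pgstattuple;
SELECT 'pg_class' AS catalog_table, * FROM pgstattuple('pg_catalog.pg_class')
UNION ALL
SELECT 'pg_attribute', * FROM pgstattuple('pg_catalog.pg_attribute')
UNION ALL
SELECT 'pg_type', * FROM pgstattuple('pg_catalog.pg_type');&lt;/LI-CODE&gt;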
&lt;P&gt;Now I run a function a few times sequentially that joins multiple tables, writes the data into a temp table, and returns the response. You can see that there is a slight increase, but nothing to be concerned about.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: pgstattuple output after sequential runs]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;But if the same function is called by 500 sessions concurrently, you can see that the increase is dramatic.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: pgstattuple output after 500 concurrent sessions]&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;There is also a marked increase in IOPS consumption as shown below&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: IOPS consumption chart]&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;Understand the impact&lt;/H1&gt;
&lt;P&gt;As seen above, system catalogue tables like pg_class, pg_attribute, and pg_type can grow rapidly in size as each session that creates and drops temporary tables leaves behind dead tuples in these catalogues, contributing to significant bloat. This accumulation happens because Azure PostgreSQL records metadata for every temporary table in the system catalogues, and when the tables are dropped (typically at session end), their metadata is simply marked as dead rather than immediately removed.&lt;/P&gt;
&lt;P&gt;In highly transactional environments, this bloat can escalate dramatically, sometimes increasing by hundreds or even thousands of times within just a few hours. Azure PostgreSQL relies heavily on its system catalogue during parsing, planning, and execution phases of every SQL statement.&lt;/P&gt;
&lt;P&gt;Also, every temp table tries to keep its data in the temp buffers, but if the data is large and the temp buffer is small, the data naturally spills to disk. The frequent creation and deletion of the underlying files adds significant disk IO. Under normal conditions this is absorbed by routine file management; under heavy load, however, it can become a bottleneck and slow down even ordinary SELECT statements.&lt;/P&gt;
&lt;P&gt;This catalogue bloat, combined with frequent file and buffer management under heavy or repeated use of temporary tables, leads to high CPU consumption and slows down existing users, which in turn adds more CPU load; the system can quickly become inundated and possibly crash.&lt;/P&gt;
&lt;P&gt;The example below shows an almost threefold increase in planning time with a bloated system table compared to one without bloat.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Images: planning time with and without catalogue bloat]&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;Conclusion&lt;/H1&gt;
&lt;P&gt;It's important to recognize that Azure PostgreSQL and Oracle implement temporary tables differently: Oracle's global temporary tables are persistent schema objects that add no significant system load, whereas Azure PostgreSQL's temporary tables are always session-specific and are dropped at session end. Combined with the MVCC architecture, this adds significant load on the system in certain situations. If not handled properly, this fundamental difference can cause the database to crash.&lt;/P&gt;
&lt;P&gt;When migrating workloads from Oracle to Azure PostgreSQL, developers should carefully consider whether a temporary table is truly necessary, or if the requirement can be addressed more elegantly using alternatives like CTEs or views.&lt;/P&gt;
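&lt;P&gt;For instance, a pattern that creates a temp table, fills it, and reads it back can often be collapsed into a single CTE; the table and column names below are illustrative only:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Instead of: CREATE TEMP TABLE t AS SELECT ...; SELECT ... FROM t;
WITH order_totals AS (
    SELECT o.id, SUM(oi.amount) AS total
    FROM orders o
    JOIN order_items oi ON oi.order_id = o.id
    GROUP BY o.id
)
SELECT * FROM order_totals WHERE total &amp;gt; 1000;&lt;/LI-CODE&gt;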
&lt;P&gt;In some scenarios, temporary tables are indispensable: for example, they provide an efficient way to store intermediate results, simplify complex query logic, or collect data from a ref cursor, and no workaround fully matches their flexibility for these use cases.&lt;/P&gt;
&lt;P&gt;If you can’t get rid of temp tables, then it’s absolutely necessary to have robust alerting on system table bloat and a custom job that frequently vacuums these tables.&lt;/P&gt;
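&lt;P&gt;A minimal sketch of such monitoring and maintenance, assuming pgstattuple is installed and the connecting role has sufficient privileges to vacuum the catalogues (a scheduler such as pg_cron or Azure Automation would run it):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Alert when dead tuples dominate the catalogue
SELECT dead_tuple_percent FROM pgstattuple('pg_catalog.pg_class');

-- Scheduled maintenance: vacuum the catalogues that temp tables churn the most
VACUUM (VERBOSE) pg_catalog.pg_class;
VACUUM (VERBOSE) pg_catalog.pg_attribute;
VACUUM (VERBOSE) pg_catalog.pg_type;&lt;/LI-CODE&gt;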
&lt;H4&gt;Feedback and suggestions&lt;/H4&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;
&lt;P&gt;Note: For additional information about migrating various source databases to Azure, see the&amp;nbsp;&lt;A href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Jun 2025 00:59:58 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/hidden-pitfalls-of-temporary-tables-in-oracle-to-postgresql/ba-p/4416636</guid>
      <dc:creator>KapilSamant</dc:creator>
      <dc:date>2025-06-26T00:59:58Z</dc:date>
    </item>
    <item>
      <title>Ingesting Mainframe File System Data (EBCDIC) into SQL DB on Fabric Using OSS Cobrix</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/ingesting-mainframe-file-system-data-ebcdic-into-sql-db-on/ba-p/4402105</link>
      <description>&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189641243"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189646718"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Introduction&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;Mainframe/Midrange data is often stored in fixed-length format, where each record has a predetermined length, or variable-length format, where each record’s length may vary. The data is stored in binary format, using &lt;A class="lia-external-url" href="https://en.wikipedia.org/wiki/EBCDIC" target="_blank" rel="noopener"&gt;Extended Binary Coded Decimal Interchange Code (EBCDIC)&lt;/A&gt; encoding, and the metadata for the EBCDIC files is stored in a copybook file. These EBCDIC-encoded files store data uniquely based on its data type, which is vital for optimal storage and performance on the Mainframe file system.&lt;/P&gt;
&lt;P&gt;However, this presents a challenge when migrating data from Mainframe or Midrange systems to distributed systems: the data, originally stored in a format specific to those systems, is not directly readable upon transfer, because distributed systems only understand code pages like &lt;A class="lia-external-url" href="https://en.wikipedia.org/wiki/ASCII" target="_blank" rel="noopener"&gt;American Standard Code for Information Interchange (ASCII)&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;To make this data readable on a distributed system, we need to do an EBCDIC to ASCII code page conversion. This conversion can be achieved in many ways; a few of them are:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Microsoft Host Integration Server, &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/host-integration-server/core/data-source-wizard-host-files-2#ma" target="_blank" rel="noopener"&gt;Host File client&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Logic app &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/connectors/integrate-host-files-ibm-mainframe" target="_blank" rel="noopener"&gt;IBM host File connector&lt;/A&gt;.&amp;nbsp; &amp;nbsp;Our detailed blog about it is &lt;A class="lia-internal-link lia-internal-url lia-internal-url-content-type-blog" href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/mainframe-ebcdic-data-file-to-ascii-conversion-using-azure-logic-app/3763750" target="_blank" rel="noopener" data-lia-auto-title="here" data-lia-auto-title-active="0"&gt;here&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Open Source (OSS) Libraries.&lt;/LI&gt;
&lt;LI&gt;Third-party ISV solutions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-10"&gt;Microsoft Host Intergration server&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;Microsoft Host Integration Server (HIS) has a component named Host File Client (HFC). This component helps convert Mainframe EBCDIC files to ASCII using a custom-developed C# solution. More details on this solution are provided on the &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/host-integration-server/core/data-providers-for-host-files1" target="_blank" rel="noopener"&gt;HIS documentation page&lt;/A&gt;.&lt;/P&gt;
&lt;H1&gt;&lt;SPAN class="lia-text-color-10"&gt;Logic App Converter.&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;If you prefer to choose a cloud native solution, then you can try to use the Host File Connector in Azure Logic Apps. The detailed process has been documented in&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/mainframe-ebcdic-data-file-to-ascii-conversion-using-azure-logic-app/3763750" target="_blank" rel="noopener" data-lia-auto-title="this blog post" data-lia-auto-title-active="0"&gt;this blog post&lt;/A&gt;.&lt;/P&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc448428265"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Fabric (with Open-Source Libraries)&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/fundamentals/microsoft-fabric-overview" target="_blank" rel="noopener"&gt;Microsoft Fabric&lt;/A&gt; is an enterprise-ready, end-to-end analytics platform. It unifies data movement, data processing, ingestion, transformation, real-time event routing, and report building. It supports these capabilities with integrated services like Data Engineering, Data Factory, Data Science, Real-Time Intelligence, Data Warehouse, and Databases.&lt;BR /&gt;&lt;BR /&gt;There are many open-source solutions which can help in achieving conversion of mainframe data to ASCII. This will help in converting files using Fabric, Databricks, Synapse on Azure.&lt;/P&gt;
&lt;H5&gt;This blog will focus on the OSS option.&lt;/H5&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189646719"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Data Ingestion Architecture&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;&lt;EM&gt;[Image: data ingestion architecture diagram]&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc815687239"&gt;&lt;/A&gt;&lt;/H1&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189641244"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189646720"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Using OSS on Fabric&lt;BR /&gt;&lt;/SPAN&gt;&lt;/H1&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;There are multiple Open-source libraries that can be utilized for this data conversion. In this article we will dive deeper into one of these solutions -&amp;nbsp;&lt;A href="https://github.com/AbsaOSS/cobrix" target="_blank" rel="noopener"&gt;Cobrix &lt;/A&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://github.com/AbsaOSS/cobrix" target="_blank" rel="noopener"&gt;COBRIX&lt;/A&gt; is an open-source library built using scala and leverages the multithreaded process powered framework of spark. This helps in converting the file faster than compared to other single threaded processes. As this is multithreaded, it will need a pool of compute resources to achieve the conversion. Cobrix can run on spark environments like Azure Synapse, Databricks and Microsoft Fabric. We will dive deeper into how we can set up Cobrix on Microsoft Fabric.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189641245"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Download required Cobrix packages&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;We first have to download the required Cobrix packages from the right sources. As Fabric has particular runtime dependencies, please make sure you download the right Scala build for the Fabric environment you set up. You will have to download two JARs, named cobol-parser_xx.xx.jar and spark-cobol_xxx.xx.jar.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-10"&gt;Setup the Fabric Environment.&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;Login to &lt;A class="lia-external-url" href="https://fabric.microsoft.com/" target="_blank" rel="noopener"&gt;fabric.microsoft.com&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Create a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/get-started/create-workspaces" target="_blank" rel="noopener"&gt;Fabric workspace&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Create an &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/data-engineering/create-and-use-environment" target="_blank" rel="noopener"&gt;Environment in the workspace&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;Open the Environment and click on Custom Libraries.
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Upload the two JARs which were downloaded earlier. Once uploaded, your custom library setup is complete.
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;Create a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/data-engineering/tutorial-build-lakehouse" target="_blank" rel="noopener"&gt;new Lakehouse&lt;/A&gt;. Upload the cobol copybook file as well as the Mainframe Datafile in Binary to a particular location in the lakehouse. At the end of this step your lakehouse setup should look something of this kind.&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;For both these files, copy the &lt;STRONG&gt;Azure Blob File System Secure (ABFSS)&lt;/STRONG&gt; path by right-clicking on the files. This path can be used to point to the file from the Spark notebook.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class="lia-text-color-10"&gt;Create a new Fabric pipeline.&lt;/SPAN&gt;
&lt;UL&gt;
&lt;LI&gt;This pipeline will have two components: the first is a notebook, which calls the Cobrix framework to convert the file from EBCDIC to ASCII; the second is a Copy activity that copies the contents of the output file created in the notebook to a SQL DB on Fabric.&lt;/LI&gt;
&lt;LI&gt;Create a &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/data-engineering/how-to-use-notebook" target="_blank" rel="noopener"&gt;new Notebook&lt;/A&gt; . Attach the environment which you had created earlier to this notebook.&lt;img /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;In the notebook cell, you can use this piece of code.&lt;/P&gt;
&lt;LI-CODE lang="scala"&gt;//Blob access var CopyBookName = "abfss://file1.cpy" var DataFileName = "abfss://file1.dat" var outputFileName = "abfss://output.txt" //Cobrix Converter Execution val cobolDataframe = spark .read .format("za.co.absa.cobrix.spark.cobol.source") .option("copybook", CopyBookName) .load(DataFileName) //Display DataFrame to view conversion results cobolDataframe.printSchema() cobolDataframe.show()&lt;/LI-CODE&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Once you have set the configuration properly, you are all set to run the notebook. This will convert the file from EBCDIC to ASCII and store it in the Lakehouse.&lt;/LI&gt;
&lt;LI&gt;Add a Copy activity to the pipeline with &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/data-factory/connector-lakehouse-copy-activity#source" target="_blank" rel="noopener"&gt;File as Source&lt;/A&gt; and &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/fabric/data-factory/connector-sql-server-copy-activity#destination" target="_blank" rel="noopener"&gt;SQL server as destination&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;At this point, your pipeline is ready to run.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;Once you run this pipeline, the Mainframe EBCDIC file will be converted to ASCII and then loaded into Fabric Native SQL DB table.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc1256804094"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Third-party ISV solutions.&lt;/SPAN&gt;&lt;/H1&gt;
&lt;UL&gt;
&lt;LI style="list-style-type: none;"&gt;
&lt;UL&gt;
&lt;LI&gt;There are many third-party ISV solutions available for EBCDIC to ASCII conversions. Please get in touch with us so we can help you find the right solution for your requirements.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189641247"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189646723"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Summary&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;EBCDIC to ASCII conversion is a critical piece of work in the data migration/modernization journey, and being able to do it with ease and accuracy drives the success of the migration. With this capability enabled in Fabric, a new set of predominantly data-warehouse-driven use cases opens up, such as Mainframe report generation.&lt;/P&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189641248"&gt;&lt;/A&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc189646724"&gt;&lt;/A&gt;&lt;SPAN class="lia-text-color-10"&gt;Feedback and suggestions&lt;/SPAN&gt;&lt;/H1&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please send an email to&amp;nbsp;&lt;A class="lia-external-url" href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;Database Platform Engineering Team&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Jun 2025 15:18:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/ingesting-mainframe-file-system-data-ebcdic-into-sql-db-on/ba-p/4402105</guid>
      <dc:creator>Ramanath_Nayak</dc:creator>
      <dc:date>2025-06-06T15:18:39Z</dc:date>
    </item>
    <item>
      <title>Optimizing Data Archival with Partitioning in Azure PostgreSQL for Oracle Migrations</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/optimizing-data-archival-with-partitioning-in-azure-postgresql/ba-p/4399268</link>
      <description>&lt;H3&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc194069203"&gt;&lt;/A&gt;Introduction&lt;/H3&gt;
&lt;P&gt;As enterprises migrate mission-critical workloads from heterogeneous databases like Oracle to Azure Database for PostgreSQL Flexible Server, managing large datasets while ensuring compliance with strict data retention policies becomes a key priority. Industries such as retail, telecommunications, and transportation and logistics, among others, enforce stringent data retention requirements to safeguard customer information, preserve operational efficiency, and maintain service reliability. Failure to meet these standards can lead to increased risks, including data loss, inefficiencies, and potential non-compliance issues. Implementing a robust data retention and partitioning strategy in PostgreSQL helps organizations efficiently manage and archive historical data while optimizing performance.&lt;/P&gt;
&lt;P&gt;Azure Database for PostgreSQL Flexible Server provides powerful partitioning capabilities that allow organizations to manage large volumes of data effectively. By partitioning tables based on time intervals or other logical segments, we can improve query performance, automate archival processes, and ensure efficient data purging—all while maintaining referential integrity across complex schemas.&lt;/P&gt;
&lt;H4&gt;Migration Challenges&lt;/H4&gt;
&lt;P&gt;While both Azure Database for PostgreSQL Flexible Server and Oracle support partitioning, their approaches differ significantly. In Azure Database for PostgreSQL Flexible Server, partitions are not created automatically; each partition must be explicitly defined using the CREATE TABLE statement. This means that when setting up a partitioned table, each partition must be created separately, requiring careful planning and implementation.&lt;/P&gt;
&lt;P&gt;This blog explores best practices for implementing range partitioning in Azure Database for PostgreSQL Flexible Server, maintaining referential integrity across multiple levels, and leveraging Azure Blob Storage through the azure storage extension to efficiently archive partitioned data.&lt;/P&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc194069205"&gt;&lt;/A&gt;Implementing Partitioning with Example&lt;/H4&gt;
&lt;P&gt;In this example, we demonstrate a partitioning strategy within the test_part schema, where a parent table logs and a child table child_logs are structured using range partitioning with a monthly interval. Depending on specific business requirements, the partitioning strategy can also be adjusted to quarterly or yearly intervals to optimize storage and query performance.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Create and set schema for the session 
CREATE SCHEMA test_part;
SET SEARCH_PATH=test_part;

--Create a partitioned table
 CREATE TABLE logs ( 
    id integer not null, 
    log_date date not null, 
    message text  
) PARTITION BY RANGE (log_date); 

--Add primary key constraints in parent partition table
ALTER TABLE ONLY logs ADD primary key (id,log_date); 

--Define partition for each month
CREATE TABLE logs_2024_01 PARTITION OF logs FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');  
CREATE TABLE logs_2024_02 PARTITION OF logs FOR VALUES FROM ('2024-02-01') TO ('2024-03-01'); 
 
--Create a Child partition table
CREATE TABLE logs_child ( 
    id integer, 
    log_date date, 
    message text,
logs_parent_id integer
) PARTITION BY RANGE (log_date); 
 
--Add constraints 
ALTER TABLE ONLY logs_child ADD primary key (id,log_date);  
ALTER TABLE logs_child add constraint logs_child_fk foreign key(logs_parent_id,log_date) references logs(id,log_date) ON DELETE CASCADE;

 --Define a partition for each month
CREATE TABLE logs_child_2024_01 PARTITION OF logs_child FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');  
CREATE TABLE logs_child_2024_02 PARTITION OF logs_child FOR VALUES FROM ('2024-02-01') TO ('2024-03-01'); 
 
--Insert data into the parent partition table :
INSERT INTO logs (id,log_date, message) VALUES (1,'2024-01-15', 'Log message 1'); 
INSERT INTO logs (id,log_date, message) VALUES (11,'2024-01-15', 'Log message 1'); 
 
INSERT INTO logs (id,log_date, message) VALUES (2,'2024-02-15', 'Log message 2'); 
INSERT INTO logs (id,log_date, message) VALUES (22,'2024-02-15', 'Log message 2'); 

--Insert data into child partition table:
INSERT INTO logs_child values (1,'2024-01-15', 'Log message 1',1);  
INSERT INTO logs_child values (2,'2024-01-15', 'Log message 1',1); 
INSERT INTO logs_child values (5,'2024-02-15', 'Log message 2',22); 
INSERT INTO logs_child values (6,'2024-02-15', 'Log message 2',2); 

--Review data using Select 
SELECT * FROM logs;
SELECT * FROM logs_2024_01;
SELECT * FROM logs_2024_02;
SELECT * FROM logs_child_2024_01;
SELECT * FROM logs_child_2024_02;
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;EM&gt;[Images: query results for the parent and child partitions]&lt;/EM&gt;&lt;/P&gt;
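&lt;P&gt;Since partitions are not created automatically, upcoming partitions are typically pre-created on a schedule. A minimal sketch for the logs table above (the YYYY_MM naming format is an assumption):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Pre-create next month's partition
DO $$
DECLARE
    start_date date := (date_trunc('month', now()) + interval '1 month')::date;
    end_date   date := (date_trunc('month', now()) + interval '2 month')::date;
BEGIN
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS test_part.logs_%s PARTITION OF test_part.logs FOR VALUES FROM (%L) TO (%L)',
        to_char(start_date, 'YYYY_MM'), start_date, end_date);
END $$;&lt;/LI-CODE&gt;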
&lt;H4&gt;Detach the partition:&lt;/H4&gt;
&lt;P&gt;When detaching partitions, follow this order: first detach the child table partition, then remove the foreign key, and finally detach the parent table partition.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Remove partitioning tables
ALTER TABLE logs_child DETACH PARTITION logs_child_2024_02;
ALTER TABLE logs_child_2024_02 DROP CONSTRAINT logs_child_fk;
ALTER TABLE logs DETACH PARTITION logs_2024_02; &lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;Archive the partition table to Azure Blob Storage:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;The following steps demonstrate how to archive detached partition data to Azure Blob Storage using Microsoft Entra ID for authorization.&lt;/P&gt;
&lt;P&gt;Create the azure_storage extension by following the steps provided in the link &lt;A href="https://learn.microsoft.com/en-us/azure/postgresql/extensions/how-to-allow-extensions?tabs=allow-extensions-portal" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Create an extension
CREATE EXTENSION azure_storage;
SET search_path=Azure_storage;

--Add the storage account (Microsoft Entra ID or access key steps are in the reference document linked above)
SELECT * FROM azure_storage.account_options_managed_identity('shayrgstorage','blob');
SELECT * FROM azure_storage.account_add('{
  "auth_type": "managed-identity",
  "account_name": "shayrgstorage",
  "account_type": "blob"
}');&lt;/LI-CODE&gt;&lt;LI-CODE lang="sql"&gt;SET SEARCH_PATH=test_part;
COPY test_part.logs_child_2024_02
TO 'https://shayrgstorage.blob.core.windows.net/pgtable/logs_child_2024_02.csv'
WITH (FORMAT 'csv', header);&lt;/LI-CODE&gt;
&lt;H4&gt;View or load the archived partitioned table&lt;/H4&gt;
&lt;LI-CODE lang="sql"&gt;--After truncating data from the partition, view data from Azure storage .csv file. When archival data needed for ready only purpose
TRUNCATE TABLE test_part.logs_child_2024_02;
SELECT * FROM test_part.logs_child_2024_02;
SELECT * FROM azure_storage.blob_get
        ('shayrgstorage'
        ,'pgtable'
        ,'logs_child_2024_02.csv'
        ,NULL::test_part.logs_child_2024_02
        ,options := azure_storage.options_csv_get(delimiter := ',' , header := 'true')
        );&lt;/LI-CODE&gt;&lt;LI-CODE lang="sql"&gt;--Load data from the .csv file back into the Azure Database for PostgreSQL flexible server table (when the data needs to be restored for updates)
TRUNCATE TABLE test_part.logs_child_2024_02;
INSERT INTO test_part.logs_child_2024_02 (id,log_date,message,logs_parent_id)
SELECT * FROM azure_storage.blob_get
        ('shayrgstorage'
        ,'pgtable'
        ,'logs_child_2024_02.csv'
        ,NULL::test_part.logs_child_2024_02
        ,options := azure_storage.options_csv_get(delimiter := ',' , header := 'true')
        ); 
&lt;/LI-CODE&gt;
&lt;H4&gt;Attach the partition table&lt;/H4&gt;
&lt;LI-CODE lang="sql"&gt;--Attach the partition table to view data in partition table for operation purpose
ALTER TABLE test_part.logs attach PARTITION test_part.logs_2024_02 for values from ('2024-02-01') TO ('2024-03-01'); 
ALTER TABLE test_part.logs_child attach PARTITION test_part.logs_child_2024_02 for values from ('2024-02-01') TO ('2024-03-01'); 
&lt;/LI-CODE&gt;
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc194069206"&gt;&lt;/A&gt;Alternative Data Archival Strategies Based on Business Requirements:&lt;/H4&gt;
&lt;UL&gt;
&lt;LI&gt;Deploy a lower-SKU Azure Database for PostgreSQL server, such as the Burstable or General Purpose service tier, and use the postgres_fdw extension to move data between tables residing in different PostgreSQL databases or instances (see the sketch after this list). Burstable servers are available with up to 64 TB of space. Automate database start/stop processes to minimize expenses when loading or extracting data.&lt;/LI&gt;
&lt;LI&gt;If the database size is relatively small, consider removing a partition from a partitioned table using the ALTER TABLE DETACH PARTITION command, converting it into a standalone table for easier archival.&lt;/LI&gt;
&lt;LI&gt;Use LTR options to retain database backups for up to 10 years, depending on the business requirement, and restore them when needed. For more information, review the documentation &lt;A href="https://learn.microsoft.com/en-us/azure/backup/backup-azure-database-postgresql#configure-backup-on-azure-postgresql-databases" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;Utilize Azure Data Factory (ADF) Pipeline to move data into Azure storage and restore it as needed using automation scripts.&lt;/LI&gt;
&lt;/UL&gt;
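&lt;P&gt;As a sketch of the postgres_fdw option above (server address, credentials, and schema names are all assumptions):&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--On the primary server: link to a lower-cost archive server via postgres_fdw
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER archive_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'archive-server.postgres.database.azure.com', port '5432', dbname 'archivedb');

CREATE USER MAPPING FOR CURRENT_USER SERVER archive_srv
    OPTIONS (user 'archive_user', password 'YourPasswordHere');

--Expose the remote table locally, then copy a detached partition's rows across
IMPORT FOREIGN SCHEMA archive LIMIT TO (logs_child_2024_02)
    FROM SERVER archive_srv INTO public;
INSERT INTO public.logs_child_2024_02 SELECT * FROM test_part.logs_child_2024_02;&lt;/LI-CODE&gt;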
&lt;H4&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc194069207"&gt;&lt;/A&gt;Feedback and suggestions&lt;/H4&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Customer Success Engineering (Ninja) Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;
&lt;P&gt;Note: For additional information about migrating various source databases to Azure, see the &lt;A href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 07 Apr 2025 20:42:46 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/optimizing-data-archival-with-partitioning-in-azure-postgresql/ba-p/4399268</guid>
      <dc:creator>shaypatel</dc:creator>
      <dc:date>2025-04-07T20:42:46Z</dc:date>
    </item>
    <item>
      <title>Script Entra Logins and Users for Azure SQL DB Utility</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/script-entra-logins-and-users-for-azure-sql-db-utility/ba-p/4395933</link>
      <description>&lt;P&gt;Our team has a &lt;A href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/seamless-cross-tenant-migration-of-azure-sql-databases-without-modifying-connect/4356629" target="_blank" rel="noopener"&gt;blog&lt;/A&gt; on this site which describes the process of moving an Azure SQL DB from one tenant to another. When doing this, the Entra (was AAD) logins and users defined in the database will no longer be valid. You therefore need to recreate the logins and users in the SQL DB after the move to the new tenant. In some cases the logins and users will be completely different in the new tenant, but if the logins and users in the new tenant are the same, we have created a &lt;A href="https://www.microsoft.com/en-us/download/details.aspx?id=106382" target="_blank" rel="noopener"&gt;downloadable utility&lt;/A&gt; to enable customers to capture this information.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The application makes no changes to the SQL DB but instead produces a TSQL script with Drop and Create statements for Entra logins, users, user defined roles, role memberships and most object permissions from the source Azure SQL DB. The Drop statements will help you clean up objects in the database, while the Create statements should be carefully reviewed and can be selectively applied as required once the SQL DB is in the new tenant.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Using the utility in production should be approached cautiously, with sufficient testing to ensure all of the logins, users, roles and permissions are recreated.&lt;/P&gt;
&lt;H2&gt;Application Configuration&lt;/H2&gt;
&lt;P&gt;The application requires the .NET 8 runtime which, if necessary, can be installed here:&amp;nbsp;&lt;A href="https://dotnet.microsoft.com/en-us/download/dotnet/8.0" target="_blank" rel="noopener"&gt;Download .NET 8.0 (Linux, macOS, and Windows)&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The utility requires no installation; just unzip all the files into a folder. The only configuration required is to change the connection string in the appsettings.json file. Once the connection string is set, run the utility executable.&lt;/P&gt;
&lt;H2&gt;Sample Execution&lt;/H2&gt;
&lt;P&gt;Below is a snapshot of a sample execution of the utility.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;[Image: sample execution output]&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;Feedback and suggestions&lt;/H2&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Data SQL Engineering Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;</description>
      <pubDate>Wed, 02 Apr 2025 20:41:36 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/script-entra-logins-and-users-for-azure-sql-db-utility/ba-p/4395933</guid>
      <dc:creator>Mitch_van_Huuksloot</dc:creator>
      <dc:date>2025-04-02T20:41:36Z</dc:date>
    </item>
    <item>
      <title>Seamlessly Moving SQL Server Enabled by Azure Arc to a New Resource Group or Subscription</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/seamlessly-moving-sql-server-enabled-by-azure-arc-to-a-new/ba-p/4389656</link>
      <description>&lt;P data-start="90" data-end="417"&gt;In a dynamic enterprise environment, organizations often need to restructure their Azure resources for better cost allocation, governance, or compliance. For IT teams managing multiple SQL Server instances enabled by Azure Arc, a reorganization may require moving some instances to a different resource group or subscription.&lt;/P&gt;
&lt;P data-start="419" data-end="647"&gt;Moving Azure Arc-enabled SQL Server instances is generally straightforward, similar to relocating other Azure resources. However, it becomes more complex when dependent features like Best Practice Assessment (BPA) are enabled.&lt;/P&gt;
&lt;H3 data-start="649" data-end="692"&gt;Understanding the Migration Scenarios&lt;/H3&gt;
&lt;UL data-start="694" data-end="1181"&gt;
&lt;LI data-start="694" data-end="1028"&gt;&lt;STRONG data-start="696" data-end="730"&gt;Instances without BPA enabled:&lt;/STRONG&gt; These can be moved seamlessly using the Azure portal by following the official Microsoft documentation: &lt;EM data-start="835" data-end="1025"&gt;&lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/move-resources?view=sql-server-ver17" target="_blank" rel="noopener" data-start="836" data-end="1024"&gt;Move SQL Server enabled by Azure Arc resources to a new resource group or subscription&lt;/A&gt;&lt;/EM&gt;.&lt;/LI&gt;
&lt;LI data-start="1029" data-end="1181"&gt;&lt;STRONG data-start="1031" data-end="1062"&gt;Instances with BPA enabled:&lt;/STRONG&gt; Since BPA settings do not persist automatically after migration, additional steps are required to ensure continuity.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 data-start="1183" data-end="1207"&gt;Migration Approach&lt;/H3&gt;
&lt;P data-start="1209" data-end="1351"&gt;To ensure a smooth transition while preserving BPA configurations and updating Log Analytics Workspace (LAW) settings, the process involves:&lt;/P&gt;
&lt;OL data-start="1353" data-end="1688"&gt;
&lt;LI data-start="1353" data-end="1411"&gt;Identifying servers where the BPA feature is enabled.&lt;/LI&gt;
&lt;LI data-start="1412" data-end="1458"&gt;Disabling BPA before moving the resource.&lt;/LI&gt;
&lt;LI data-start="1459" data-end="1540"&gt;Migrating the SQL Server instance to the new resource group or subscription.&lt;/LI&gt;
&lt;LI data-start="1541" data-end="1587"&gt;Re-enabling BPA for the affected servers.&lt;/LI&gt;
&lt;LI data-start="1588" data-end="1688"&gt;Updating the Log Analytics Workspace configuration to align with the target subscription’s LAW.&lt;/LI&gt;
&lt;/OL&gt;
&lt;H3 data-start="1690" data-end="1718"&gt;Automating the Process&lt;/H3&gt;
&lt;P data-start="1720" data-end="1883"&gt;This blog provides a step-by-step PowerShell script to automate these tasks for at-scale migrations, minimizing manual effort and ensuring a seamless transition.&lt;/P&gt;
&lt;P data-start="1885" data-end="2050"&gt;&lt;STRONG data-start="1885" data-end="1910"&gt;Alternative Approach:&lt;/STRONG&gt; If automation isn't required, you can also use &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/sql/sql-server/azure-arc/assess?view=sql-server-ver16&amp;amp;tabs=portal#enable-best-practices-assessment-at-scale-by-using-azure-policy" target="_blank" rel="noopener"&gt;&lt;STRONG data-start="1958" data-end="1974"&gt;Azure Policy&lt;/STRONG&gt; to enable or disable BPA&lt;/A&gt; and move Arc resources through the Azure portal.&lt;/P&gt;
&lt;P data-start="2052" data-end="2225"&gt;By leveraging either automation or Azure-native tools, organizations can efficiently manage Azure Arc-enabled SQL Server migrations while maintaining their configurations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Steps to Migrate SQL Server Enabled by Azure Arc to a New Resource Group or Subscription&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1: Open PowerShell as Administrator&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Click on &lt;STRONG&gt;Start&lt;/STRONG&gt;, search for &lt;STRONG&gt;PowerShell ISE&lt;/STRONG&gt; or &lt;STRONG&gt;PowerShell&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Right-click and select &lt;STRONG&gt;Run as Administrator&lt;/STRONG&gt; to ensure the necessary permissions.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2: Provide Input Parameters&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Define the &lt;STRONG&gt;source and destination subscription IDs&lt;/STRONG&gt;, &lt;STRONG&gt;resource group names&lt;/STRONG&gt;, and &lt;STRONG&gt;Log Analytics Workspace details&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;Double-check that all values are correctly set before executing the script – “&lt;STRONG&gt;MoveArcResourcesAcrossSubscriptionOrRG.ps1&lt;/STRONG&gt;”.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3: Connect to Azure and Set Subscription Context&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Log in to your Azure account when prompted to authenticate the device or application.&lt;/LI&gt;
&lt;LI&gt;The script will set the context to access and manage the SQL Server instances based on the input parameters provided.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4: Validate the Migration&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Once the script execution is complete, validate the output to confirm that the resource move was successful.&lt;/LI&gt;
&lt;LI&gt;Check the Azure Portal to ensure that the SQL Server instances have been moved to the new resource group or subscription.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table class="lia-background-color-5 lia-border-color-20 lia-border-style-groove" border="1" style="width: 100%; border-width: 1px;"&gt;&lt;colgroup&gt;&lt;col style="width: 99.8529%" /&gt;&lt;/colgroup&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="lia-border-color-20"&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-style: italic;"&gt;&lt;EM&gt;The Child resources&lt;STRONG&gt; (SQL Servers instances and database)&lt;/STRONG&gt; associated with the Azure Arc-enabled machine may take additional time to fully update in the Azure Portal.&lt;/EM&gt;&lt;/LI&gt;
&lt;LI style="font-style: italic;"&gt;&lt;EM&gt;Allow &lt;STRONG&gt;at least one hour&lt;/STRONG&gt; for the move to reflect across all dependent services before performing a final validation.&lt;/EM&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By following these structured steps, organizations can efficiently migrate SQL Server enabled by Azure Arc while maintaining BPA configurations and updating necessary settings to ensure a&amp;nbsp;&lt;STRONG&gt;seamless transition&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;MoveArcResourcesAcrossSubscriptionOrRG.ps1&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="powershell"&gt;&amp;lt;# 
Name:     MoveArcResourcesAcrossSubscriptionOrRG.ps1
Purpose:  This script manages the prerequisites for disabling the BPA on each Arc server resource before initiating a resource move. 
          After the resource is successfully relocated to the new resource group (RG), the script then re-enables the BPA settings to their original state.       

Warranty: This script is provided on as "AS IS" basis and there are no warranties, express or implied, including, but not limited to implied warranties of merchantability or fitness for a particular purpose. USE AT YOUR OWN RISK. 
#&amp;gt;
#____________________________________________
# Input parameters
#____________________________________________
$SourceSubscriptionId='2xxxxxxxx-a798-4265-ab7d-d9xxxxx377'  # Set the source subscription ID
$DestinationSubscriptionId ='0xxxxxxxa-399c-4564-9f74-ffxxxxxx46' # Set the Destination subscription ID.
$SourceRgName='arcsqlprod_rg' # Set the Source resource group name
$TargetRgName='arcsqldev_rg' # Set the Destination resource group name
$logAnalyticsWorkspaceName = 'devloganalyticsworkspace' # Set the Log Analytics Workspace in the destination subscription.

#__________________
#local Variables
#__________________
$global:ExcludedServerlist = @();$arcServers = @() ;$allResources = @();$global:ArcEnabledServerlist = @();

cls
#_________________________________________________
# Check if the Az module is installed &amp;amp; Imported
#_________________________________________________
Function LoadRequiredModules {

if (-not (Get-Module -Name Az) -and -not (Get-Module -ListAvailable -Name Az) -and -not (Get-Module -ListAvailable -Name Az.Accounts))  {
    # Install the Az module if not already installed
    Write-Host "[$(Get-Date)]: Installing the required Az module, please wait."
    Install-Module -Name Az -AllowClobber -Force -Scope CurrentUser -WarningAction SilentlyContinue

}
# Import the Az module
Write-Host "[$(Get-Date)]: Importing the required Az module, please wait."
Import-Module Az.Accounts

Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope CurrentUser -Force 
Connect-AzAccount -Subscription $SourceSubscriptionId -WarningAction SilentlyContinue | Out-Null

}


#____________________________________________________________________
# Module to verify the existence of the destination resource group 
#____________________________________________________________________
function CheckDestinationResourceGroup {
Set-AzContext -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue| Out-Null
$destinationRG = Get-AzResourceGroup  -Name $TargetRgName -ErrorAction SilentlyContinue
if (-not $destinationRG) {    
    Write-Host "[$(Get-Date)]: The destination resource group [$TargetRgName] does not exist." -BackgroundColor Yellow -ForegroundColor Red
    return
}
else { Write-Host "[$(Get-Date)]: The destination resource group [$TargetRgName] exists."
}
}

#____________________________________________________________________
# Module to verify the existence of Log Analytics Workspace name.
#____________________________________________________________________
function CheckLogAnalyticsWorkspace {
Set-AzContext -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue | Out-Null
# Keep the workspace object in a global variable so Set-BPAConfiguration can reference it later;
# a function-local variable would be out of scope by the time BPA is re-enabled.
$global:LAW = Get-AzOperationalInsightsWorkspace | Where-Object { $_.Name -eq $logAnalyticsWorkspaceName }
if (-not $global:LAW) {
    Write-Host "[$(Get-Date)]: Log Analytics Workspace [$logAnalyticsWorkspaceName] does not exist in [Subscription:$($DestinationSubscription.Name), ResourceGroup:$TargetRgName]." -BackgroundColor Yellow -ForegroundColor Black
    $userInput = Read-Host "Would you like to create a new Log Analytics Workspace? Press any key to create and continue or [N or 0] to stop the execution"

    if ($userInput -ieq 'N' -or $userInput -ieq 0) {
        Write-Host "[$(Get-Date)]: Execution stopped." -ForegroundColor Red
        EXIT;
    } else {
        Write-Host "[$(Get-Date)]: Proceeding to create a new Log Analytics Workspace. Please wait.."
        try{
            $global:LAW = New-AzOperationalInsightsWorkspace -ResourceGroupName $TargetRgName -Name $logAnalyticsWorkspaceName -Location (Get-AzResourceGroup -Name $TargetRgName).Location
            if ($global:LAW) {
            Write-Host "[$(Get-Date)]: Successfully created a new Log Analytics Workspace:`n"("_" * 160)
            Write-Host " Resource ID: $($global:LAW.ResourceId)"
            Write-Host " Location   : $($global:LAW.Location)`n"("_" * 160)
        }
        }
        catch{
            Write-Host "[$(Get-Date)]: An error occurred while creating the Log Analytics Workspace." -ForegroundColor Red
            Write-Host "Error: $($_.Exception.Message)" -ForegroundColor Red
        }
    }
} else {
    Write-Host "[$(Get-Date)]: Log Analytics Workspace [$logAnalyticsWorkspaceName] found."
}

Set-AzContext -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue | Out-Null
}



#____________________________________________
# Function to check the status of BPA
#____________________________________________
function Get-BPAStatus {
    param ( [string]$machineID,[string]$mode)

$subscriptionId = ($machineID -split '/')[2]
$resourceGroup = ($machineID -split '/')[4]
$machineName = ($machineID -split '/')[8]

$MachineState=(Get-AzConnectedMachine  -ResourceGroupName $resourceGroup  -Name $machineName).Status
if ($MachineState -eq 'Disconnected') {
    Write-Host "[$(Get-Date)]: The Azure Arc machine [$($machineName)] is currently offline or disconnected, which will block the movement of resources or the enabling/disabling of features." -BackgroundColor Yellow -ForegroundColor Red
    return 'DISCONNECTED';
    }
    else
    {
        $extn=$null;
        $extn= Get-AzConnectedMachineExtension -ResourceGroupName $resourceGroup -MachineName $machineName | where Name -Like 'WindowsAgent.SqlServer' | select ProvisioningState

        if ($extn -eq $null) {
            Write-Host "[$(Get-Date)]: SQL Server Extension is not installed on the Machine : [$($machineName)]." -BackgroundColor Green -ForegroundColor black
            return 'DISCONNECTED-MISSING-SQLExtension';
            }
        elseif (($extn.ProvisioningState -eq 'Succeeded') -or ($extn.ProvisioningState -eq 'Updating'))
        {

        $uri = "https://edge.management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.HybridCompute/machines/$($machineName)/extensions/WindowsAgent.SqlServer`?api-version=2022-03-10"
        try{
        $token = (Get-AzAccessToken -ResourceUrl https://management.azure.com/ -AsSecureString -WarningAction SilentlyContinue).Token}
        catch {
               Write-Error "Failed to retrieve the Azure Access Token. Error: $_"
               }
        $headers = @{Authorization = "Bearer "+[System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($token))}

        $retryCount = 0
        while ($retryCount -lt 4) 
        {
            try{
            $response = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers}
            catch {
               Write-Error "Error occurs during the REST API request. Error: $_"
               }
            $bpaconfig=$response.properties.settings.AssessmentSettings.Enable 
            if ( -not [string]::IsNullOrEmpty($response) -and -not [string]::IsNullOrEmpty($bpaconfig) ) {
            break}
            else{
            if ($retryCount -eq 0){ Write-Host "[$(Get-Date)]: Waiting to get the BPA status after the recent update "  -NoNewline } else {Write-Host "....reattempt in 15 seconds."}
            Start-Sleep -Seconds 15
            $retryCount++
            }
        }

        $global:licenceType=$response.properties.settings.LicenseType

        if ($mode -eq "Validate"){
        return $bpaconfig}

        # Use the function-scoped machine name and global license type here; $arcMachine and $LicenseType are not in scope inside this function.
        if ( [string]::IsNullOrEmpty($global:licenceType) -or $global:licenceType -eq "LicenseOnly") {

                        switch ($global:licenceType) {
                            $null         {  Write-Host "[$(Get-Date)]: License Type is NOT configured for machine [$($machineName)]."  }
                            "LicenseOnly" {  Write-Host "[$(Get-Date)]: Best Practices Assessment is not supported on license type '$($global:licenceType)' for machine [$($machineName)]." }
                            default       {  Write-Host "[$(Get-Date)]: Unknown License Type for machine [$($machineName)]." }
                        }
                     $global:skippedmachine += $machineName}

            switch ($bpaconfig) {
                $false { Write-Host "[$(Get-Date)]: SQL Best Practice Assessment is [Disabled] on Machine: [$($machineName)]"}
                $true  { Write-Host "[$(Get-Date)]: SQL Best Practice Assessment is [Enabled] on Machine: [$($machineName)]"  }
                default{ Write-Host "[$(Get-Date)]: SQL Best Practice Assessment is [Not Configured] on Machine: [$($machineName)]" }
        
            }

    

        return $bpaconfig;
   
}
else
{
 Write-Host "[$(Get-Date)]: SQL Server Extension is in [$($extn.ProvisioningState)] state on the Machine : [$($machineName)]. Cannot update the BPA configuration." -BackgroundColor Yellow -ForegroundColor black
 return 'DISCONNECTED-Unknown-SQLExtension';
}

}
}

#__________________________________________________________
# Function to Enable/Disable BPA for each machine
#__________________________________________________________
function Set-BPAConfiguration {
    param (
        [string]$machineID,
        [string]$valuetoset
    )

$subscriptionId = ($machineID -split '/')[2]
$resourceGroup = ($machineID -split '/')[4]
$machineName = ($machineID -split '/')[8] 

    Write-Host "[$(Get-Date)]: $($(($valuetoset).Substring(0, $valuetoset.Length - 1)) + 'ing') BPA for machine [$($machineName)]...."
    $setvalue = if ($valuetoset -eq "Enable") { $true } else { $false }

$uri = "https://edge.management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.HybridCompute/machines/$($machineName)/extensions/WindowsAgent.SqlServer?api-version=2022-03-10"
$token = (Get-AzAccessToken -ResourceUrl https://management.azure.com/ -AsSecureString -WarningAction SilentlyContinue).Token
$headers = @{Authorization = "Bearer " + [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($token))}
# Get the current response to inspect the existing values
$response = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

if ($setvalue -eq $true){

if ([string]::IsNullOrEmpty($response.properties.settings.AssessmentSettings))
{
$response.properties.settings | Add-Member -MemberType NoteProperty -Name "AssessmentSettings" -Value @{}
}

$response.properties.settings.AssessmentSettings.Enable =$true
# Use the globally stored workspace object captured in CheckLogAnalyticsWorkspace.
$response.properties.settings.AssessmentSettings.WorkspaceResourceId=$global:LAW.ResourceId
$response.properties.settings.AssessmentSettings.WorkspaceLocation=$global:LAW.Location
$response.properties.settings.AssessmentSettings.ResourceNamePrefix=$null
$response.properties.settings.AssessmentSettings.RunImmediately=$true
$response.properties.settings.AssessmentSettings.schedule = @{
                                dayOfWeek = "Sunday"
                                Enable = $true
                                monthlyOccurrence = $null
                                StartDate = $null
                                startTime = "00:00"
                                WeeklyInterval = 1
                            }
}
else
{
$response.properties.settings.AssessmentSettings.Enable =$false
$response.properties.settings.AssessmentSettings.WorkspaceResourceId=$null
$response.properties.settings.AssessmentSettings.WorkspaceLocation=$null
}

$jsonPayload = $response| ConvertTo-Json -Depth 10
#$jsonPayload  #for debug

# Prepare the PATCH request headers
$headers = @{
    Authorization = "Bearer " + [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($token))
    "Content-Type" = "application/json"  # Specify content type as JSON
}
# Make the PATCH request
try {
$response = Invoke-RestMethod -Uri $uri -Method Patch -Headers $headers -Body $jsonPayload
Write-Host "[$(Get-Date)]: Successfully submitted the request to [$($valuetoset)] Best Practices Assessment for machine  [$($machineName)]."

} catch {
    # Handle the error
    Write-Host "[$(Get-Date)]: An error occurred while $($BPAtargetstate +'ing') BPA for [$($arcMachine.Name)]: $_"
}

Start-Sleep -Seconds 10
# Validate after the change
$CurrentState=Get-BPAStatus -machineID $machineID  -mode "Validate"

switch ($CurrentState) {
    $true  { $state = "Enabled" }
    $false { $state = "Disabled" }
    default { $state = $CurrentState }  # Default case
}

if ($setvalue -eq $CurrentState){
Write-Host "[$(Get-Date)]: Successfully [$state] Best Practices Assessment for machine [$($machineName)]."  
return $setvalue
}
else
{
Write-Host "[$(Get-Date)]: Updating the BPA configuration for machine [$($machineName)] has failed. The CurrentState is [$CurrentState]"  -BackgroundColor Yellow -ForegroundColor Red
return "Error-$CurrentState"}

}

#__________________________________________________________
# Module to make sure that BPA is disabled for each machine
#__________________________________________________________
Function Ensure-BPA-IsDisabled {

    $arcMachines = Get-AzResource -ResourceGroupName $SourceRgName -ResourceType "Microsoft.HybridCompute/machines"
    Write-Host "[$(Get-Date)]: A total of $($arcMachines.Count) Azure Arc machine(s) found." -BackgroundColor Green -ForegroundColor Black
    
    foreach ($arcMachine in $arcMachines) {
    
    Write-Host "[$(Get-Date)]: Validating the configuration for Azure Arc machine :[$($arcMachine.Name)]" 
    $MachineState=(Get-AzConnectedMachine  -ResourceGroupName $SourceRgName  -Name $arcMachine.Name).Status
    if ($MachineState -eq 'Disconnected') {
    Write-Host "[$(Get-Date)]: The Azure Arc machine [$($arcMachine.Name)] is currently OFFLINE/DISCONNECTED, Cannot update the BPA configuration. This will also prevent the resource movement of this/child resource(s)." -BackgroundColor Yellow -ForegroundColor Red
    }
    else
    {
        $extn=$null;
        $extn= Get-AzConnectedMachineExtension -ResourceGroupName $SourceRgName -MachineName $arcMachine.Name | where Name -Like 'WindowsAgent.SqlServer' | select ProvisioningState

        if ($extn -eq $null) {
            Write-Host "[$(Get-Date)]: SQL Server Extension is not installed on the Machine : [$($arcMachine.Name)]." -BackgroundColor Green -ForegroundColor black}
        elseif ($extn.ProvisioningState -eq 'Succeeded')
        {

        $status = Get-BPAStatus -machineID $($arcMachine.ResourceId) -mode "Validate"
            if ($status -eq $true) {
                Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is set to : [$($status.ToString().ToUpper())] for Machine:[$($arcMachine.Name)]"
                Write-Host "[$(Get-Date)]: Attempting to DISABLE SQL Best Practice AssessmentSettings for Machine:[$($arcMachine.Name)]" -BackgroundColor White -ForegroundColor Black

                $status= Set-BPAConfiguration -machineID $($arcMachine.ResourceId) -valuetoset 'Disable'
                #$status= Get-BPAStatus -machineID $($arcMachine.ResourceId)  -mode "Validate"
                

                if ($status -eq $false){
                    Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is now set to : [$($status.ToString().ToUpper())] for Machine:[$(($($arcMachine.ResourceId) -split '/')[($($arcMachine.ResourceId) -split '/').IndexOf('machines') + 1])] and added to the re-enablement list."
                    $global:ArcEnabledServerlist =$global:ArcEnabledServerlist+$($arcMachine.ResourceId)
                    }

                else{
                    Write-Host "[$(Get-Date)]: Failed to update SQL Best Practice AssessmentSetting for Machine:[$(($($arcMachine.ResourceId) -split '/')[($($arcMachine.ResourceId) -split '/').IndexOf('machines') + 1])] and added to the exclusion list."
                    $global:ExcludedServerlist+=$($arcMachine.Name) 
                }
              }
                else {
                      switch ($status) {
                            $null {
                                Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is NOT configured on Machine: [$($arcMachine.Name)]"
                            }
                            $False {
                                Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is already set to [$($status.ToString().ToUpper())] for Machine: [$($arcMachine.Name)]"
                            }
                            "Not-Configured" {
                                Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is [$($status.ToString().ToUpper())]  for Machine: [$($arcMachine.Name)]"
                            }
                            default {
                                Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is [Unknown] for Machine: [$($arcMachine.Name)]"
                                $global:ExcludedServerlist+=$($arcMachine.Name) 
                            }
                        }
                }           
        }
        else
            {
             Write-Host "[$(Get-Date)]: SQL Server Extension is in [$($extn.ProvisioningState)] state on the Machine : [$($arcMachine.Name)]. Cannot update the BPA configuration." -BackgroundColor Yellow -ForegroundColor Red
            
            }

        }
    }
}

#____________________________________________________________________
# Start the move resource operation
#____________________________________________________________________
Function move-Arc-machines{

    $arcServers= Get-AzResource -ResourceGroupName $SourceRgName -ResourceType "microsoft.hybridcompute/machines"  | Where-Object { $_.Name -notin $global:ExcludedServerlist }

if ($arcServers.Count -gt 0)
{
    Write-Host "[$(Get-Date)]: Starting the move of Arc server resources. This process may take some time, so please wait until it is completed."
    if ($global:ExcludedServerlist) {
        Write-Host "[$(Get-Date)]: List of servers which are skipped for move due to failure in disabling BPA feature:" -ForegroundColor Yellow
        Write-Host $global:ExcludedServerlist -ForegroundColor Red -BackgroundColor Yellow
    } else {
            Write-Host "[$(Get-Date)]: Total resources considered for move : $($arcServers.Count)`n"
            $arcServers.ResourceID
 
            if($arcServers.Count -gt 0){
            Write-Host "`n[$(Get-Date)]: Starting the MOVE of Arc server resources. This process may take a few minutes, please do not close the window."

            Move-AzResource -DestinationSubscriptionId $DestinationSubscriptionId -DestinationResourceGroupName $TargetRgName -ResourceId $arcServers.ResourceId -Force
            Write-Host "[$(Get-Date)]: Initialization of the resource move has been successfully completed. Moving the child (SQL Server) resource(s) may take some time. Please check the Azure portal later."}

    }
}
else
{
   Write-Host "[$(Get-Date)]: No Arc Machines available for the move operation."

}
}


#____________________________________________________________________
# Check for remaining resources in the old resource group
#____________________________________________________________________
Function validate-after-MachineMove {

    $allResources = @();
    Set-AzContext -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue | Out-Null
    $arcServers = Get-AzResource -ResourceGroupName $SourceRgName -ResourceType "microsoft.hybridcompute/machines" 
    $allResources += $arcServers

    if ($allResources) {
        Write-Host "[$(Get-Date)]: There are still [$($allResources.count)] resources in the old resource group '$SourceRgName':`n"
        $allResources.ResourceID
   
    } else {
            Write-Host "[$(Get-Date)]: No resources remaining in the old resource group '$SourceRgName'."
        
            if ($global:ArcEnabledServerlist.Count -gt 0) {
                Write-Host "[$(Get-Date)]: Enabling the BPA for [$($global:ArcEnabledServerlist.Count)] resource(s) on the target resource group."
                Set-AzContext -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue | Out-Null

                $arcMachines=$global:ArcEnabledServerlist
                
                foreach ($arcMachine in $arcMachines) {
            
                  Write-Host "[$(Get-Date)]: Validating the BPA status for Machine:[$($arcMachine.Split('/')[-1])]"
                  $arcMachine = $arcMachine.Replace($SourceSubscriptionId, $DestinationSubscriptionId).Replace($SourceRgName, $TargetRgName)

                  $status = Get-BPAStatus  -machineID $($arcMachine)  -mode "Validate"
                  switch ($status) {
                                    $true            {Write-Host "[$(Get-Date)]: `nSQL Best Practice AssessmentSettings is already set to : [$($status.ToString().ToUpper())] for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]"}
                                    "Not-Configured" {Write-Host "[$(Get-Date)]: Failed to update SQL Best Practice AssessmentSettings for Machine: [$(($($arcMachine.ResourceId) -split '/')[($($arcMachine.ResourceId) -split '/').IndexOf('machines') + 1])] as it is not Configured" -BackgroundColor Yellow -ForegroundColor Red }
                                    $false           {
                                    Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is set to : [$($status.ToString().ToUpper())] for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]"
                                    Write-Host "[$(Get-Date)]: Attempting to ENABLE SQL Best Practice AssessmentSettings for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]" -BackgroundColor White -ForegroundColor Black
        
                                        # Perform status update and check
                                        $status = Set-BPAConfiguration -machineID $($arcMachine) -valuetoset 'Enable'
                                        #$status = Get-BPAStatus  -machineID $($arcMachine)  -mode "Validate"

                                        switch ($status) {
                                            $true {
                                                # $arcMachine is a resource ID string in this loop, so derive the machine name by splitting it.
                                                $machineName = ($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1]
                                                Write-Host "[$(Get-Date)]: SQL Best Practice AssessmentSettings is now set to : [$($status.ToString().ToUpper())] for Machine:[$machineName]"
                                            }

                                            $false {
                                                $machineName = ($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1]
                                                Write-Host "[$(Get-Date)]: Failed to update SQL Best Practice AssessmentSettings for Machine:[$machineName]"
                                                $global:ExcludedServerlist += $machineName
                                            }
                                        }
                                    }
                                    "DISCONNECTED"  {Write-Host "[$(Get-Date)]: Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])] is in DISCONNECTED state, Skipping the BPA enablement" -BackgroundColor Red -ForegroundColor White}
                                    "DISCONNECTED-MISSING-SQLExtention" {Write-Host "[$(Get-Date)]: SQL Extension is missing for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])] , Skipping the BPA enablement" -BackgroundColor Red -ForegroundColor White}

                                default {Write-Host "[$(Get-Date)]: Unknown status value [$status] for Machine:[$(($arcMachine -split '/')[($arcMachine -split '/').IndexOf('machines') + 1])]" -BackgroundColor Red -ForegroundColor White}
                                }
                    }
                }
                else {Write-Host "[$(Get-Date)]: No machines found for BPA enablement."}
    }     
}

# Start capturing the output to the file
$outputFile = ([System.IO.Path]::Combine([System.IO.Path]::GetTempPath(), "MoveArcResourcesOutput_" + (Get-Date -Format "yyyy-MM-dd_HH.mm.ss") + '.txt'))
Start-Transcript -Path $outputFile &amp;gt; $null

#1. Load required modules
LoadRequiredModules

# Get subscription details for Source and Destination
Set-AzContext -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue | Out-Null
$SourceSubscription = Get-AzSubscription -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue

Set-AzContext -SubscriptionId $SourceSubscriptionId -WarningAction SilentlyContinue| Out-Null
$DestinationSubscription = Get-AzSubscription -SubscriptionId $DestinationSubscriptionId -WarningAction SilentlyContinue
Cls
# Display the details of input parameters
Write-Host "[$(Get-Date)]: __________________Start of script_______________________________`n[$(Get-Date)]: Input Parameters considered for this execution:`n"
Write-Host "Source Subscription ID      : $SourceSubscriptionId ($($SourceSubscription.Name)`) `nDestination Subscription ID : $DestinationSubscriptionId ($($DestinationSubscription.Name)`) `nSource Resource Group Name  : $SourceRgName `nTarget Resource Group Name  : $TargetRgName `nLogAnalyticsWorkspaceName   : $logAnalyticsWorkspaceName `n"

#2. Check if both subscriptions are in the same tenant
if ($sourceSubscription.TenantId -ne $destinationSubscription.TenantId) {
    Write-Host "[$(Get-Date)]: Cannot move resource as the subscriptions are in different tenants."
} else {
    Write-Host "[$(Get-Date)]: Both subscriptions are in the same tenant. You can proceed with the move."

#3. Checks whether a specified destination resource group exists
CheckDestinationResourceGroup

#4. Verifies the existence and configuration of a Log Analytics Workspace on the Target subscription
CheckLogAnalyticsWorkspace

#5. Retrieves the current status of the Best Practice Analyzer (BPA) for an Arc machine and disables it to prepare for the resource move
Ensure-BPA-IsDisabled

#6. Initialize the resource move
move-Arc-machines

#7. Validate the resource move
validate-after-MachineMove

}

Write-Host "[$(Get-Date)]: __________________END of script_______________________________`n`n"

# Stop capturing output
Stop-Transcript &amp;gt; $null
Start-Process "notepad.exe" -ArgumentList $outputFile&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Sample output&lt;/STRONG&gt;&lt;/H3&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 24 Sep 2025 22:32:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/seamlessly-moving-sql-server-enabled-by-azure-arc-to-a-new/ba-p/4389656</guid>
      <dc:creator>Raghavendra_Srinivasan</dc:creator>
      <dc:date>2025-09-24T22:32:15Z</dc:date>
    </item>
    <item>
      <title>Best practices for modernizing large-scale Text-Search from legacy data systems to Azure SQL</title>
      <link>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/best-practices-for-modernizing-large-scale-text-search-from/ba-p/4377782</link>
      <description>&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc130894407"&gt;&lt;/A&gt;Introduction&lt;/H1&gt;
&lt;P&gt;Text-search is an essential aspect of contemporary data management, enabling users to effectively extract information from natural-language text. This technology allows for the retrieval of specific details from a text corpus, providing insights that would be unattainable through conventional search methods.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Most modern databases include text-search capabilities by default, while legacy databases and data systems typically do not. In legacy systems, users may need to configure these features separately or use external services to implement text-search functions. SQL Server natively supports Full-Text search (&lt;EM&gt;and has been available from SQL Server 2005 onwards&lt;/EM&gt;), which removes the requirement for additional component installations and thereby negates the need for any external data movement.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The effectiveness of any feature is contingent upon proper design and implementation. For handling text-search on large volume tables and VLDBs (i.e. TBs or PBs of data), adhering to established techniques and best practices ensures optimal performance and manageability. This article will specifically focus on the performance considerations and optimization techniques required for implementing Full-Text Search functionality on Azure SQL Database. We will discuss a use case that demonstrates how to optimize full-text search innovatively to deliver quick and efficient results.&lt;/P&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc190279201"&gt;&lt;/A&gt;Modernizing legacy workloads&lt;/H1&gt;
&lt;P&gt;This article delves into the text search feature in Azure SQL and its detailed implementation for a specific use-case. This is a common scenario that customers may encounter during migration or modernization from legacy mainframe, midrange or on-premises x86 systems. For instance, certain legacy systems employ an OLTP database for standard searches and a separate server specifically catering to the text-search use case. This design necessitates regular data transfers between these two systems, potentially leading to instances where customers cannot access up-to-date information and must accept staleness in their data processing.&lt;/P&gt;
&lt;P&gt;There have been cases where text search requests were submitted to backend systems as separate queries, queued to run after business hours in batch jobs, with results being returned to the end-user on the following business day. Such implementations were common in legacy systems and continue to impact businesses today. When modernizing such workloads, customers can adopt and utilize native features like Full-Text Search which enables functional benefits to end-users like providing them with results in real-time and allowing them to focus more on their core business activities.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The table below shows how long these searches take when implemented through different technologies. The table volume considered here is ~5 billion rows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1 solid rgb(35, 111, 161)px" style="border-width: 1 solid rgb(35, 111, 161)pxpx;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Technology&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Preferred Execution Type&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Feature utilized for Text-Search&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;End-to-End Timeframe&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Legacy Systems&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Nightly Batch&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;LIKE or &lt;BR /&gt;External Text Search Server&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1 business day&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Cloud Database&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Micro-batch&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;LIKE&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~ 20 mins&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Azure SQL DB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Real-time&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full Text Search&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;lt;1 sec&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt;&lt;/EM&gt; &lt;EM&gt;The performance increase shown here is not a direct result of utilizing Full Text Search. Though Full Text Search increases the speed of querying such data, the increase in overall performance is a consequence of the system design and usage of multiple complementary SQL features that fit this use-case.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc190279202"&gt;&lt;/A&gt;Scenario&lt;/H1&gt;
&lt;P&gt;Let's consider a use case where the SQL Database contains a substantially large table named &lt;EM&gt;CustomerInfo&lt;/EM&gt; with billions of rows and terabytes of data. This table includes columns such as Customer Name, Address, State, CustInfo etc. Assume that the &lt;EM&gt;Custinfo&lt;/EM&gt; field contains free text / notes entered by the customer service team for logging. Our use case involves retrieving rows from this &lt;EM&gt;Custinfo &lt;/EM&gt;column based on partial search criteria provided by the end-user through an application (i.e. web page, mobile app, etc.). For example, the query could be to extract all the rows where the&amp;nbsp;&lt;EM&gt;Custinfo&lt;/EM&gt; field has notes stating that ‘&lt;EM&gt;mobile&lt;/EM&gt;’ or ‘&lt;EM&gt;address&lt;/EM&gt;’ is updated. The search functionality should retrieve all corresponding records, whether the match is exact or partial, and sort them according to their proximity to the provided keywords.&lt;/P&gt;
&lt;P&gt;Implementing a standard search using the LIKE operator on such a large dataset would be inefficient, and constructing the logic for specific word searches on a free-text character column is complex to build and maintain.&lt;/P&gt;
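&lt;P&gt;For illustration, a minimal sketch of the naive LIKE-based version of this search, assuming the &lt;EM&gt;CustomerInfo&lt;/EM&gt; table described above; the leading wildcards defeat ordinary index seeks and force a scan across billions of rows:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Naive LIKE-based search (illustrative sketch only).
-- Leading wildcards cannot use a regular index seek, so this scans the table.
SELECT CUSTID, CUSTNAME, CUSTINFO
FROM dbo.CUSTOMERINFO
WHERE CUSTINFO LIKE '%mobile%'
   OR CUSTINFO LIKE '%address%';&lt;/LI-CODE&gt;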
&lt;P&gt;Implementing Full-Text Search for this use-case involves applying sound design principles, as well as leveraging SQL Server native features such as partitioning and indexed views, which enhance operational efficiency and manageability.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt; &lt;/STRONG&gt;&lt;EM&gt;In the context of this article, we are using Azure SQL DB Hyperscale, but the implementation remains largely the same for other SQL offerings as well.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H2&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc190279203"&gt;&lt;/A&gt;Technology Overview&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Table Partitioning&lt;/STRONG&gt; involves dividing a large table into smaller, more manageable segments called "partitions". Instead of storing all data in one extensive table, the data is separated into several smaller tables, each containing a portion of the data. Detailed information on Azure SQL Hyperscale table partitioning, along with Best Practices &amp;amp; Recommendations, can be found in a two-part technical blog: &amp;nbsp;&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/part-1---azure-sql-db-hyperscale-table-partitioning---best-practices--recommenda/3720569" target="_blank" rel="noopener"&gt;Part 1&lt;/A&gt; and &lt;A href="https://techcommunity.microsoft.com/blog/modernizationbestpracticesblog/part-2---azure-sql-db-hyperscale-table-partitioning---best-practices--recommenda/3721453" target="_blank" rel="noopener"&gt;Part 2&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Indexed Views&lt;/STRONG&gt;, also known as Materialized Views, are a SQL Server feature that can improve the performance of queries on large tables. Indexed views store the result set of a query physically on disk, which allows for faster retrieval. Using indexed views for Full-Text indexing can enhance the performance and maintainability of large datasets. Details on how to create an indexed view are provided &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/views/create-indexed-views?view=sql-server-ver16" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Full Text Search&lt;/STRONG&gt; is a robust feature that facilitates complex search on text data stored in SQL tables. This feature supports efficient querying in extensive text fields such as documents, articles, and product descriptions through a specialized index known as a Full Text Index. Full-Text Search uses advanced indexing and query functions to perform complex searches, such as phrase matching and linguistic analysis, with high accuracy and speed. It ranks results based on relevance, ensuring the most pertinent results appear at the top. Search results can be refined with logical operators (AND/OR/NOT) and word weights.&lt;/P&gt;
&lt;P&gt;The system supports rich query syntax for natural language queries and is compatible with multiple languages and character sets, making it suitable for global applications. A comprehensive overview and setup instructions for Full-Text Search can be found in the document &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/full-text-search?view=sql-server-ver16" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Key concepts of FULL-TEXT Search are &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/create-and-manage-full-text-catalogs?view=sql-server-ver16" target="_blank" rel="noopener"&gt;Full Text Catalog&lt;/A&gt; and &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/create-and-manage-full-text-indexes?view=sql-server-ver16" target="_blank" rel="noopener"&gt;Full-Text Index&lt;/A&gt;.&lt;/P&gt;
&lt;/DIV&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc190279204"&gt;&lt;/A&gt;Optimizing Full-Text Search in SQL Server&lt;/H1&gt;
&lt;P&gt;This blog post explains how to implement Text Search by using a combination of different but complementary built-in features like Full-Text Search, Partitioning, and Indexed views.&lt;/P&gt;
&lt;P&gt;In this scenario, we are implementing the following techniques:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The base table is large and therefore partitioned into smaller chunks of data&lt;/LI&gt;
&lt;LI&gt;Each Indexed View will be built aligning to a single partition on the base table.&lt;/LI&gt;
&lt;LI&gt;Each Full Text Index will be built aligning to a single Indexed View.&lt;/LI&gt;
&lt;LI&gt;Full Text Catalogs will hold the relevant build information for the Full Text Indexes.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;U&gt;Key Benefits in this architecture for VLDB:&lt;/U&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Provides a scalable model for growing data sizes.&lt;/LI&gt;
&lt;LI&gt;Partitioning the base table provides better manageability and optimized performance.&lt;/LI&gt;
&lt;LI&gt;Indexed views aligned with Partitioning Key allows quicker population of Indexed view.&lt;/LI&gt;
&lt;LI&gt;Aligning each Full-Text catalog with an indexed view allows rebuilding / reorganizing the catalog for a specific subset of data.&lt;/LI&gt;
&lt;LI&gt;Full Text Index aligned with Indexed view provides manageable sized indexes and quicker crawling.&lt;/LI&gt;
&lt;LI&gt;Ability to rebuild / reorganize specific full-text indexes and improved query performance.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note: &lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt;Full-Text index population is an asynchronous activity. For a given indexed view / table, only one Full-Text index is allowed. Within a full-text index, multiple columns can be included.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;The diagram below provides a representation of how Full-Text search can be implemented on very large tables.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&lt;U&gt;Steps highlighted:&lt;/U&gt;&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;STRONG&gt;1&lt;/STRONG&gt; Partitioning base table.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;STRONG&gt;2&lt;/STRONG&gt; Indexed views aligned with partition key.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;STRONG&gt;3&lt;/STRONG&gt; Full-Text Catalogs.&lt;/P&gt;
&lt;P class="lia-indent-padding-left-60px"&gt;&lt;STRONG&gt;4&lt;/STRONG&gt; Full-Text Indexes.&lt;/P&gt;
&lt;H2&gt;Partitioning&lt;/H2&gt;
&lt;P&gt;Choosing a partition key that evenly distributes data yields optimal results and is the most important decision when partitioning. Partition switching/splitting divides a large partition into smaller ones as it grows over time (a split sketch follows the table DDL below). Here, the partitioning column is STATE, so the base table distributes data based on STATE.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;Sample Table definition along with partitioning:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Partition Function Creation CREATE PARTITION FUNCTION [PF_STATE](varchar(2)) 
AS RANGE FOR VALUES 
('AL', 'AK', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'FL', 'GA', 
'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 
'NE', 'NV', 'NH', 'NJ', 'NM', 'NY', 'NC', 'ND', 'OH', 'OK', 'OR', 'PA', 'RI', 'SC', 'SD', 
'TN', 'TX', 'UT', 'VT', 'VA', 'WA', 'WV', 'WI', 'WY') &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Partition Scheme Creation 
CREATE PARTITION SCHEME [PS_STATE] AS PARTITION [PF_STATE] ALL TO ([PRIMARY]) &lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Table DDL with partition Key. 

CREATE TABLE [dbo].[CUSTOMERINFO]
( [CUSTID] [int] NOT NULL, [CUSTNAME] [varchar](40), 
[ADDRESS] [char](60), [CITY] [char](20), STATE varchar(2), 
[ZIP] [char](10) , [COUNTRY] [char](15), CUSTINFO nvarchar(max) 
PRIMARY KEY CLUSTERED 
( [CUSTID] ASC, STATE ASC )
WITH (STATISTICS_NORECOMPUTE = OFF, 
IGNORE_DUP_KEY = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ) 
ON [PS_STATE](STATE);&lt;/LI-CODE&gt;
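&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If a new boundary value is ever required, the existing partition function can be split. A minimal sketch, assuming a hypothetical new boundary value 'PR' and the PRIMARY filegroup used above:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Designate the filegroup that will receive the new partition.
ALTER PARTITION SCHEME [PS_STATE] NEXT USED [PRIMARY];

--Split the range to add the hypothetical boundary value 'PR'.
ALTER PARTITION FUNCTION [PF_STATE]() SPLIT RANGE ('PR');&lt;/LI-CODE&gt;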
&lt;H2&gt;Indexed view&lt;/H2&gt;
&lt;P&gt;To improve efficiency, multiple indexed views are created and aligned with the partition key column. Indexed views are schema-bound and store the results of the view’s query, so the results are precomputed and stored on disk. For large tables, creating an indexed view that aligns with the partitioned column helps to quickly populate the indexed view and aids in updating the views when the base table changes. Multiple indexed views enable the creation of multiple full-text indexes, optimizing the search process.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;EM&gt;An indexed view must have a unique clustered index, which is subsequently used as the key index during full-text index creation.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Sample Indexed views along with clustered index:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;-- Create Indexed view for STATE AL and clustered index which is subsequently used for FullText Index. CREATE VIEW [dbo].[v_CUSTOMERINFO_AL] WITH SCHEMABINDING AS select CUSTID ,CUSTNAME,ADDRESS,CITY,STATE,ZIP,COUNTRY,CUSTINFO From dbo.CUSTOMERINFO where STATE = 'AL' ; GO CREATE UNIQUE CLUSTERED INDEX [CX_CUSTOMERINFO_AL] ON [dbo].[v_CUSTOMERINFO_AL] ( CUSTID ASC )WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY] GO -- Create Indexed view for STATE AK and clustered index which is subsequently used for FullText Index. CREATE VIEW [dbo].[v_CUSTOMERINFO_AK] WITH SCHEMABINDING AS select CUSTID ,CUSTNAME,ADDRESS,CITY,STATE,ZIP,COUNTRY,CUSTINFO From dbo.CUSTOMERINFO where STATE = 'AK' ; GO CREATE UNIQUE CLUSTERED INDEX [CX_CUSTOMERINFO_AK] ON [dbo].[v_CUSTOMERINFO_AK] ( CUSTID ASC )WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY] GO&lt;/LI-CODE&gt;
&lt;H2&gt;Full-Text Search&amp;nbsp;&lt;/H2&gt;
&lt;H3&gt;Full-Text catalog&lt;/H3&gt;
&lt;P&gt;A Full-Text catalog is a logical container for full-text indexes, independent of any table. One catalog can be used for indexes on different tables. For databases with large or growing text data, use multiple catalogs.&lt;/P&gt;
&lt;P&gt;Rebuild the Full-Text catalog after significant data changes (bulk inserts, large updates) to improve performance and reduce index fragmentation. Include this rebuild in regular database maintenance activities.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Sample Full Text Catalog:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--FullText Catalog for STATE AL and AK. CREATE FULLTEXT CATALOG CT_CUSTOMERINFO_AL AS DEFAULT; CREATE FULLTEXT CATALOG CT_CUSTOMERINFO_AK AS DEFAULT;&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Options for changing Full-Text Catalog properties, such as REORGANIZE and REBUILD, are provided&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-fulltext-catalog-transact-sql?view=sql-server-ver16" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
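&lt;P&gt;A minimal sketch of both maintenance options, applied to the catalogs created above:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Incrementally merge full-text index fragments in the AL catalog.
ALTER FULLTEXT CATALOG CT_CUSTOMERINFO_AL REORGANIZE;

--Full rebuild of the AK catalog, e.g. after bulk data changes (repopulates all of its indexes).
ALTER FULLTEXT CATALOG CT_CUSTOMERINFO_AK REBUILD;&lt;/LI-CODE&gt;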
&lt;H3&gt;Full-Text Index&lt;/H3&gt;
&lt;P&gt;A table or indexed view can only have one full-text index, which can cover multiple columns. These indexes are linked to a unique index of the base table and attached to a full-text catalog. They can be created on char-based data types and XML. The process of creating and maintaining full-text indexes is called population or crawl; it is asynchronous, tracks changes to the underlying table or view, and can be managed with the "&lt;EM&gt;Change_Tracking&lt;/EM&gt;" property.&lt;/P&gt;
&lt;P&gt;For large tables, aligning full-text indexes with indexed views allows for multiple manageable indexes, improving performance and speeding up data updates. Ensure high I/O file groups are allocated for full-text indexes in SQL Server installations.&lt;/P&gt;
&lt;P&gt;A full-text index also comes with options for adding &amp;amp; removing columns, as well as enabling or disabling it as needed.&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt;&lt;/STRONG&gt;&lt;EM&gt; During bulk data loads (or other unique scenarios), FULL-TEXT index population (i.e. Change_Tracking) can be set to Manual / OFF; as a post-load activity, &lt;/EM&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/populate-full-text-indexes?view=sql-server-ver16#types" target="_blank" rel="noopener"&gt;populate the Full-Text&lt;/A&gt;&lt;EM&gt; index and reset Change_Tracking to AUTO (a sketch of this pattern follows the sample index below).&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;Sample Full Text Index:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Full Text Index aligned to Clustered Index on Indexed view and Full-Text Catalog. CREATE FULLTEXT INDEX ON [dbo].[v_CUSTOMERINFO_AL](CUSTINFO) KEY INDEX [CX_CUSTOMERINFO_AL] ON CT_CUSTOMERINFO_AL WITH CHANGE_TRACKING = AUTO; CREATE FULLTEXT INDEX ON [dbo].[v_CUSTOMERINFO_AK](CUSTINFO) KEY INDEX [CX_CUSTOMERINFO_AK] ON CT_CUSTOMERINFO_AK WITH CHANGE_TRACKING = AUTO&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Full-Text Index properties that can be changed are listed&amp;nbsp;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-fulltext-index-transact-sql?view=sql-server-ver16" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
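&lt;P&gt;Tying back to the note above, a minimal sketch of the bulk-load pattern for one of the sample indexes: suspend change tracking, load the data, repopulate, then resume automatic tracking.&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--1. Suspend automatic population ahead of the bulk load.
ALTER FULLTEXT INDEX ON [dbo].[v_CUSTOMERINFO_AL] SET CHANGE_TRACKING MANUAL;

--2. ...perform the bulk data load here...

--3. Repopulate the full-text index after the load completes.
ALTER FULLTEXT INDEX ON [dbo].[v_CUSTOMERINFO_AL] START FULL POPULATION;

--4. Resume automatic change tracking.
ALTER FULLTEXT INDEX ON [dbo].[v_CUSTOMERINFO_AL] SET CHANGE_TRACKING AUTO;&lt;/LI-CODE&gt;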
&lt;H2&gt;Sample queries using Full-Text search&lt;/H2&gt;
&lt;P&gt;Full-Text samples with CONTAINS and more examples are available &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/query-with-full-text-search?view=sql-server-ver16" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;Sample 1: Contains with OR&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--To Identify the CUSTINFO column which has words ‘Mobile’ Or ‘Address’. select * from dbo.[v_CUSTOMERINFO_AL] where contains (CUSTINFO,'Mobile or Address')&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;Sample 2: Contains with AND&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--To Identify the CUSTINFO column which has words ‘Mobile’ AND ‘Address’. select CUSTID,CUSTNAME,CUSTINFO from dbo.[v_CUSTOMERINFO_AL] where contains (CUSTINFO,'Mobile and Address')&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;P&gt;Details about ranking are in &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/limit-search-results-with-rank?view=sql-server-ver16" target="_blank" rel="noopener"&gt;Full-Text search with Ranking&lt;/A&gt;.&lt;/P&gt;
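&lt;P&gt;For instance, a minimal sketch using CONTAINSTABLE to order matches by relevance, satisfying the "sort by proximity to the keywords" requirement from the scenario; the ISABOUT weights shown are purely illustrative:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Rank AL rows by relevance to the search terms (weights are illustrative).
SELECT c.CUSTID, c.CUSTNAME, c.CUSTINFO, k.RANK
FROM dbo.[v_CUSTOMERINFO_AL] AS c
INNER JOIN CONTAINSTABLE(dbo.[v_CUSTOMERINFO_AL], CUSTINFO,
     'ISABOUT (Mobile WEIGHT(0.8), Address WEIGHT(0.4))') AS k
  ON c.CUSTID = k.[KEY]
ORDER BY k.RANK DESC;&lt;/LI-CODE&gt;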
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt;&lt;/STRONG&gt; &lt;EM&gt;While the SQL optimizer selects the best execution plan for a query, for very specific and unique scenarios &lt;/EM&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql?view=sql-server-ver16" target="_blank" rel="noopener"&gt;&lt;EM&gt;Hints&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt; can also be used to optimize further.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;In this specific use-case, Indexed View Matching functionality can be implemented using the &lt;/EM&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table?view=sql-server-ver16#noexpand" target="_blank" rel="noopener"&gt;&lt;EM&gt;NOEXPAND&lt;/EM&gt;&lt;/A&gt;&lt;EM&gt; Table Hint, which drastically improves performance (see the sketch after this note). By adding this hint, the query optimizer uses the index on the view if a query references columns that are present both in the indexed view and the base tables, and it determines that using the indexed view provides the best method for executing the query.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Please perform due diligence prior to using Hints.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
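&lt;P&gt;A minimal sketch of the NOEXPAND hint applied to one of the indexed views from this use-case:&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--Direct the optimizer to use the indexed view itself rather than expanding it to the base table.
SELECT CUSTID, CUSTNAME, CUSTINFO
FROM dbo.[v_CUSTOMERINFO_AL] WITH (NOEXPAND)
WHERE CONTAINS(CUSTINFO, 'Mobile OR Address');&lt;/LI-CODE&gt;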
&lt;H2&gt;Full-Text Index Maintenance and validation&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/populate-full-text-indexes?view=sql-server-ver16" target="_blank" rel="noopener"&gt;Types of Full-Text Index population&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/improve-the-performance-of-full-text-indexes?view=sql-server-ver16" target="_blank" rel="noopener"&gt;How to Improve the Performance of Full-Text indexes&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Sample queries that help with Full-Text maintenance and validation:&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="sql"&gt;--How to check Full Text index population status SELECT object_name(object_id) as tablename ,change_tracking_state_desc, has_crawl_completed,crawl_type_desc,crawl_start_date,crawl_end_date FROM sys.fulltext_indexes where object_name(object_id) like 'v_CUSTOMERINFO%'&lt;/LI-CODE&gt;&lt;img /&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc130894420"&gt;&lt;/A&gt;Additional Information&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;If more advanced AI based search capabilities are needed, Azure provides solutions like &lt;A href="https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search" target="_blank" rel="noopener"&gt;Azure AI Search&lt;/A&gt;. These are different offerings from Azure SQL and must be deployed separately and integrated appropriately.&lt;/LI&gt;
&lt;LI&gt;If the use-case involves Vector search, Azure SQL DB can work with vectors and details are &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/vectors/vectors-sql-server?view=azuresqldb-current#vector-search" target="_blank" rel="noopener"&gt;here&lt;/A&gt;.&lt;/LI&gt;
&lt;LI&gt;As the SQL Server Full Text functionality keeps advancing, there are specific features that get deprecated. These can be found &lt;A href="https://learn.microsoft.com/en-us/sql/relational-databases/search/deprecated-full-text-search-features-in-sql-server-2016?view=sql-server-ver16#features-not-supported-in-a-future-version-of-sql-server" target="_blank" rel="noopener"&gt;here&lt;/A&gt;. When building new applications using Full Text, we recommend considering this list to future-proof your design.&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;&lt;A class="lia-anchor" target="_blank" name="_Toc190279213"&gt;&lt;/A&gt;Feedback and suggestions&amp;nbsp;&lt;/H1&gt;
&lt;P&gt;If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Ninja Engineering Team (&lt;A href="mailto:datasqlninja@microsoft.com" target="_blank" rel="noopener"&gt;datasqlninja@microsoft.com&lt;/A&gt;). Thanks for your support!&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Note:&lt;/EM&gt;&lt;/STRONG&gt; For additional information about migrating various source databases to Azure, see the &lt;A href="https://datamigration.microsoft.com/" target="_blank" rel="noopener"&gt;Azure Database Migration Guide&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 18 Apr 2025 18:24:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/modernization-best-practices-and/best-practices-for-modernizing-large-scale-text-search-from/ba-p/4377782</guid>
      <dc:creator>anilkota</dc:creator>
      <dc:date>2025-04-18T18:24:15Z</dc:date>
    </item>
  </channel>
</rss>

