Database Copy may cause Azure SQL Database to run out of transaction log space

Published 03-13-2019 07:04 PM
First published on MSDN on Sep 27, 2018

Summary:


This article describes how a Copy Database operation may cause the source database to run out of transaction log space. Client applications may then encounter error 9002: "The transaction log for database 'xxxxx' is full due to 'AVAILABILITY_REPLICA'". This description is derived from an actual support case.



Scenario:

Each day between 04:00h and 09:00h UTC, a large data load refreshes the content of a Premium P6 database. This database is the primary of a geo-replication configuration. The geo-replication secondary runs in the same data center and is used for load balancing. Before each data load, usually starting at 00:10h UTC, a database copy is performed off the geo-replication secondary. The copy usually takes around 3 hours to complete and finishes before the load operation starts. The copied database runs as a Premium P1; it makes it possible to compare the data before and after the data load, and serves as a backup in case something goes wrong with the data load itself.
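In Azure SQL Database, such a copy is typically started with the `CREATE DATABASE ... AS COPY OF` statement, executed in the master database of the target logical server. A minimal sketch of what the nightly copy step might look like (server and database names are placeholders, not from the actual case):

```sql
-- Run in the master database of the target logical server.
-- Creates a transactionally consistent copy of the source database.
-- The statement returns quickly; the copy itself continues asynchronously.
CREATE DATABASE DailyReportCopy
AS COPY OF sourceserver.ProductionDb
( SERVICE_OBJECTIVE = 'P1' );  -- the copy can run at a lower tier than the source
```

As in the scenario above, the copy can be created at a lower performance level (P1) than its source, which saves cost but, as the incident shows, also slows down the copy process.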



Details about the problem:

  • One day the Copy Database operation got interrupted at around 02:00h UTC because of a local database reconfiguration/failover.

  • After the interruption, the copy had to restart from the beginning. Thus it had already lost almost 2 hours of its operation window.

  • Due to the delay, the copy process was still running when the load operation started. This made the situation worse: the extra resource consumption was limiting the throughput on the database, and there was more data to be transferred as part of the copy. In addition, the target database was running at a much lower performance level than the source database. This caused the copy process to take much longer than usual.

  • The copy process is done via a backup of the source database and a restore to the destination running over the datacenter network.

  • While the operation is running, it holds up transaction log truncation on its source database. The log records must be retained so that data changes occurring during the copy process can be applied at its end.

  • If the log truncation is held at a geo-replication secondary, it will also hold the log truncation on the geo-replication primary.

  • This caused the transaction log on the geo-replication primary to keep growing until it ran out of space. The client applications then encountered error 9002: "The transaction log for database 'xxxxx' is full due to 'AVAILABILITY_REPLICA'".

  • The transaction log was finally truncated after the copy operation had finished at about 13:00 UTC.
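The log pressure described above can be observed while it is building up. As a sketch, the following queries against standard DMVs available in Azure SQL Database show why log truncation is delayed and how full the log is (run in the affected primary database):

```sql
-- Why is log truncation being delayed?
-- AVAILABILITY_REPLICA indicates a replica (here: the geo-replication
-- link) is holding up truncation.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();

-- How full is the transaction log of the current database?
SELECT total_log_size_in_bytes / 1048576.0 AS total_log_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

Monitoring `used_log_space_in_percent` during the load window would have given an early warning well before error 9002 was raised.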



Possible steps to avoid the issue:




  • Use a point-in-time restore (PITR) instead of a database copy for this daily workflow. The PITR process runs completely offline (the backup is restored from storage instead of from the live database) and therefore cannot impact the primary. With the restore process you also have more control over selecting the exact point in time of the copy.

  • Start the copy process earlier, allowing enough time for retries in case it gets interrupted. However, this may not always help if the copy operation takes longer than expected, e.g. due to fluctuations in network speed.
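If the copy is started earlier, its progress can also be tracked so that a stalled or restarted copy is noticed before the load window opens. A sketch using the `sys.dm_database_copies` DMV (run in the master database of the server hosting the copy; the exact set of rows visible depends on where it is queried):

```sql
-- Shows ongoing database copy operations and their progress.
SELECT d.name,
       c.start_date,
       c.modify_date,
       c.percent_complete,
       c.error_desc
FROM sys.dm_database_copies AS c
JOIN sys.databases AS d
    ON c.database_id = d.database_id;
```

An alert on a copy whose `percent_complete` is still low shortly before 04:00h UTC would have flagged the problematic run in this case.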


For the future, the plan is to remove the requirement to hold the transaction log during a database copy. It is not yet certain, though, when this change will become available.
