
In most cases, latency between master and replicas is caused by performance issues. Therefore, when investigating a synchronization lag, start by checking if the workload on the master has increased. In this article, you'll learn how to investigate and isolate the possible causes of replication latency. 

 

Before starting, it's critical to understand how replication in MySQL works. While MySQL supports different types of data synchronization, Azure Database for MySQL only supports asynchronous replication (applies to both data-in replication and read replicas). With asynchronous replication, one server acts as a master and one or more other servers act as replicas (a max of 5 replicas is supported).

 

When replication starts, a master server writes replicating events into the binary log (which only records the committed transactions). This is why enabling the binary log is required when configuring replication in the first place.

 

On replica servers, there are two threads per replica, one called the IO thread and the other called the SQL thread.

  • The IO thread connects to the master server and requests updated binary logs. After this thread receives the binary log updates, they are saved on the replica server in a local log called the relay log.
  • The SQL thread reads the relay log and applies the data changes on the replica server.

This process is shown in the following diagram:

 

[Diagram: the IO thread copies events from the master's binary log into the replica's relay log; the SQL thread applies changes from the relay log.]
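As a toy illustration of that flow (pure illustration in Python, not MySQL internals; the event strings are made up):

```python
from collections import deque

# Committed transactions the master records in its binary log.
master_binlog = ["INSERT t1", "UPDATE t1", "DELETE t1"]

relay_log = deque()   # local log on the replica
applied = []          # changes the replica has executed

# IO thread: connect to the master and copy new binlog events
# into the replica's relay log.
for event in master_binlog:
    relay_log.append(event)

# SQL thread: read the relay log and apply each change locally.
while relay_log:
    applied.append(relay_log.popleft())

# The replica is caught up once everything in the binlog is applied.
assert applied == master_binlog
```

Latency appears whenever either stage falls behind: the IO thread fetching (network), or the SQL thread applying (replica resources).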

 

You can monitor the replication status from the Azure portal metrics blade. When latency occurs, the output of [SHOW SLAVE STATUS;] gives you enough information to understand what may be causing the latency.

 

A sample output is shown below:

*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: testserver.mysql.database.azure.com
Master_User: replicationuser@testserver
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000191
Read_Master_Log_Pos: 103978138
Relay_Log_File: relay_bin.000568
Relay_Log_Pos: 13469095
Relay_Master_Log_File: mysql-bin.000191
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: 
Replicate_Ignore_DB: 
Replicate_Do_Table: 
Replicate_Ignore_Table: mysql.plugin
Replicate_Wild_Do_Table: 
Replicate_Wild_Ignore_Table: mysql.\\_\\_%
Last_Errno: 0
Last_Error: 
Skip_Counter: 0
Exec_Master_Log_Pos: 13468882
Relay_Log_Space: 103978599
Until_Condition: None
Until_Log_File: 
Until_Log_Pos: 0
Master_SSL_Allowed: Yes
Master_SSL_CA_File: c:\\work\\azure_mysqlservice.pem
Master_SSL_CA_Path: 
Master_SSL_Cert: c:\\work\\azure_mysqlclient_cert.pem
Master_SSL_Cipher: 
Master_SSL_Key: c:\\work\\azure_mysqlclient_key.pem
Seconds_Behind_Master: 648
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error: 
Last_SQL_Errno: 0
Last_SQL_Error: 
Replicate_Ignore_Server_Ids: 
Master_Server_Id: 943553508
Master_UUID: eba67879-7232-11e9-b113-6c4d3f52df68
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Waiting for dependent transaction to commit
Master_Retry_Count: 86400
Master_Bind: 
Last_IO_Error_Timestamp: 
Last_SQL_Error_Timestamp: 
Master_SSL_Crl: 
Master_SSL_Crlpath: 
Retrieved_Gtid_Set: 
Executed_Gtid_Set: 
Auto_Position: 0
Replicate_Rewrite_DB: 
Channel_Name: 
Master_TLS_Version: 
1 row in set (0.00 sec)

 

The output contains a lot of information, but normally it's only important to focus on the following columns:

  • Slave_IO_State: This tells you the current status of the IO thread. Normally, the status is "Waiting for master to send event" if it is synchronizing. However, if you see a status such as "Connecting to master", then the replica has lost the connection to the master server. Please check if the master is running or if a firewall is blocking the connection. 
  • Master_Log_File: This tells you the binary log file to which the master is writing.
  • Read_Master_Log_Pos: This represents the position in the above binary log file to which the master has written. 
  • Relay_Master_Log_File: This represents the binary log file that the replica server is reading from the master.
  • Slave_IO_Running: This indicates whether the IO thread is running. It should be "Yes"; if "No", replication is most likely broken.
  • Slave_SQL_Running: This indicates whether the SQL thread is running. It should be "Yes"; if "No", replication is most likely broken.
  • Exec_Master_Log_Pos: This tells you which position of the above Relay_Master_Log_File the replica is applying. If there is latency, this position sequence should be smaller than Read_Master_Log_Pos.
  • Relay_Log_Space: The total combined size of all existing relay log files. You can check the configured upper limit by querying [show global variables like "relay_log_space_limit";].
  • Seconds_Behind_Master: This shows the replication latency, in seconds.
  • Last_IO_Errno: This shows the IO thread error code, if any. For more information about these codes, see the MySQL docs: https://dev.mysql.com/doc/refman/5.7/en/server-error-reference.html.
  • Last_IO_Error: This shows the IO thread error message, if any.
  • Last_SQL_Errno: This shows the SQL thread error code, if any. For more information about these codes, see the MySQL docs: https://dev.mysql.com/doc/refman/5.7/en/server-error-reference.html.
  • Last_SQL_Error:  This shows the SQL thread error message, if any.
  • Slave_SQL_Running_State: This indicates the current SQL thread status. Please note that "System lock" shown in this state is a normal behavior. As the example above shows, "Waiting for dependent transaction to commit", which means that it is waiting for the master to update committed transactions, is also normal.
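For scripted monitoring, the vertical output above can be parsed into a dictionary. A minimal sketch (the helper name and the abbreviated sample below are my own):

```python
def parse_slave_status(text):
    """Parse the vertical (\\G) output of SHOW SLAVE STATUS into a dict."""
    status = {}
    for line in text.splitlines():
        # Skip the "*** 1. row ***" banner and the "1 row in set" footer.
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        status[key.strip()] = value.strip()
    return status

# Abbreviated sample, matching the fields shown above.
sample = """\
Slave_IO_State: Waiting for master to send event
Master_Log_File: mysql-bin.000191
Read_Master_Log_Pos: 103978138
Relay_Master_Log_File: mysql-bin.000191
Exec_Master_Log_Pos: 13468882
Seconds_Behind_Master: 648
"""
status = parse_slave_status(sample)
lag = int(status["Seconds_Behind_Master"])
```

With the fields in a dict, the checks described in the rest of this post can be automated.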

Now let's see how to use the above information to isolate the latency cause.

 

If the scenario below is met, the cause is most likely network latency. As shown, the IO thread is running and waiting on the master, and the master has already written binary log file #20 while the replica has only retrieved file #10. Since the IO thread connects to the master via TCP/IP, the limiting factor here is network speed.

Slave_IO_State: Waiting for master to send event
Master_Log_File: the binary log file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
Relay_Master_Log_File: the file sequence is smaller than above, e.g. mysql-bin.00010

 

If the scenario below is met, the workload on the master is most likely too heavy and is burdening the replica. In this case, although the replica can still retrieve the binary log from the master, the IO thread reports that the relay log space is already full. Network speed isn't causing the delay, because the replica has already pulled as much as it can; instead, the volume of incoming binary log updates exceeds the upper limit of the relay log space.

Slave_IO_State: Waiting for the slave SQL thread to free enough relay log space
Master_Log_File: the binary log file sequence is larger than Relay_Master_Log_File, e.g. mysql-bin.00020
Relay_Master_Log_File: the file sequence is smaller than above, e.g. mysql-bin.00010

 

If the scenario below is met, the replica itself is most likely running slowly. In this case, both the IO and SQL threads are running well, and the replica is reading the same binary log file that the master is writing, yet some latency still occurs. Normally this is caused by the replica server itself; check whether CPU consumption or IOPS on the replica server is high.

Slave_IO_State: Waiting for master to send event
Master_Log_File: The binary log file sequence equals Relay_Master_Log_File, e.g. mysql-bin.000191
Read_Master_Log_Pos: The position to which the master has written in the above file, larger than Exec_Master_Log_Pos, e.g. 103978138
Relay_Master_Log_File: mysql-bin.000191
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: The position the replica has applied from the master binary log file, smaller than Read_Master_Log_Pos, e.g. 13468882
Seconds_Behind_Master: Greater than 0, indicating latency
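The three checks above can be condensed into a rough triage function. This is a heuristic sketch of my own, not an official diagnostic; the field names match SHOW SLAVE STATUS:

```python
def binlog_seq(filename):
    # "mysql-bin.000191" -> 191
    return int(filename.rsplit(".", 1)[1])

def classify_lag(status):
    """Rough triage of replication lag from a SHOW SLAVE STATUS dict."""
    if "free enough relay log space" in status["Slave_IO_State"]:
        # Relay log space is exhausted: the master workload is too heavy.
        return "heavy master workload"
    if binlog_seq(status["Master_Log_File"]) > binlog_seq(status["Relay_Master_Log_File"]):
        # The IO thread cannot fetch binlogs fast enough: network-bound.
        return "network latency"
    # Note: Seconds_Behind_Master can be NULL when replication is stopped;
    # this sketch assumes a numeric (or empty) value.
    if int(status.get("Seconds_Behind_Master") or 0) > 0:
        # Same file, but the SQL thread applies changes slower than the
        # master writes them: check CPU and IOPS on the replica.
        return "slow replica"
    return "no lag detected"
```

The relay-log-space check must come first, since in that scenario the binary log file sequences also diverge.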

Note that there is one exception to the scenario above. In Azure Database for MySQL, replication is optimized for high-concurrency environments by configuring the parameter binlog_group_commit_sync_delay. This parameter controls how many microseconds the binary log commit waits before synchronizing the binary log file. The benefit is that instead of flushing every committed transaction immediately, the master sends the binary log updates in bulk, which reduces IO on the replica and helps improve performance. However, in some edge cases that are not highly concurrent, e.g. when a connection runs only one transaction and then closes, setting binlog_group_commit_sync_delay can cause serious latency, because the commit waits for bulk binary log updates while only one transaction has been committed. The wait is in vain. 
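As a back-of-the-envelope model (my own simplification, not from the Azure documentation): the delay is paid once per commit group, so its cost amortizes across however many transactions commit together, but a group of one pays the full delay.

```python
def per_txn_wait_us(sync_delay_us, txns_per_group):
    """Average extra commit wait per transaction under group commit.

    The configured delay is incurred once per commit group, so it is
    shared by every transaction that commits within that window.
    """
    return sync_delay_us / txns_per_group

# High concurrency: a 1000 us delay shared by 100 transactions costs 10 us each.
# A single-transaction connection pays the full 1000 us on every commit, in vain.
```

This is why the same setting that helps a busy OLTP workload can hurt a workload of short-lived, one-transaction connections.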

 

In most cases, the latency root cause falls into one of the scenarios covered in this blog. I hope the above information helps. Please feel free to post comments or reach out to me for a discussion.

 

Thank you!

 

Shawn Xiao

Technical Support Engineer