Final Update: Sunday, 12 January 2020 20:08 UTC

We've confirmed that all systems are back to normal, with no remaining customer impact, as of 01/12, 16:00 UTC. Our logs show the incident started on 01/12, 11:30 UTC, and that during the 4 hours and 30 minutes it took to resolve the issue, customers in the WUS2 region may have experienced data gaps and partial query results in Log Analytics, as well as delayed or missed alerts in Log Search Alerts.
  • Root Cause: The failure was due to one of our backend services.
  • Incident Timeline: 4 hours & 30 minutes - 01/12, 11:30 UTC through 01/12, 16:00 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Jayadev