Experiencing Data Access Issue in Azure portal for Log Analytics - 12/12 - Resolved

Final Update: Thursday, 12 December 2019 13:10 UTC

We've confirmed that all systems are back to normal with no customer impact as of 12/12, 12:52 UTC. Our logs show the incident started on 12/12, 09:10 UTC, and that during the 3 hours and 42 minutes it took to resolve the issue, some customers may have experienced higher than expected latency or failures for metric alerts in the East US region.
  • Root Cause: The failure was due to a bad deployment in one of our backend services.
  • Incident Timeline: 3 hours & 42 minutes - 12/12, 09:10 UTC through 12/12, 12:52 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Anusha

Update: Thursday, 12 December 2019 10:32 UTC

We continue to investigate issues within Log Analytics. Root cause is not fully understood at this time. Some customers may have experienced higher than expected latency or failures for metric alerts in the East US region. Initial findings indicate that the problem began at 12/12, 09:10 UTC. We currently have no estimate for resolution.
  • Workaround: None
  • Next Update: Before 12/12 16:00 UTC
-Anusha