Final Update: Monday, 25 January 2021 11:22 UTC

We've confirmed that all systems are back to normal with no customer impact as of 01/25, 11:11 UTC. Our logs show the incident started on 01/25, 09:00 UTC, and that during the 2 hours and 11 minutes it took to resolve the issue, some customers may have experienced intermittent data latency and incorrect alert activation in the East US 2 region.
  • Root Cause: The failure was due to an unhealthy backend service.
  • Incident Timeline: 2 hours & 11 minutes - 01/25, 09:00 UTC through 01/25, 11:11 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Madhav