Final Update: Tuesday, 24 November 2020 01:21 UTC

We've confirmed that all systems are back to normal with no customer impact as of 11/24, 01:23 UTC. Our logs show the incident started on 11/23, 19:28 UTC, and that during the 5 hours and 55 minutes it took to resolve the issue, some customers in the East US region using Azure Log Analytics may have experienced log data latency, data gaps, and incorrect alert activation.
  • Root Cause: The failure was due to issues in one of the backend services.
  • Incident Timeline: 5 hours & 55 minutes - 11/23, 19:28 UTC through 11/24, 01:23 UTC
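For customers who want to gauge whether their workspace was affected, the query below is a minimal sketch in Kusto Query Language (KQL) of one way to estimate end-to-end ingestion latency over the impact window. It assumes the workspace collects the Heartbeat table; any table with a TimeGenerated column can be substituted.

    // Estimate end-to-end ingestion latency during the impact window.
    // Heartbeat is used here only because most agent-connected workspaces
    // collect it; substitute any table your workspace ingests.
    Heartbeat
    | where TimeGenerated between (datetime(2020-11-23 19:28) .. datetime(2020-11-24 01:23))
    // ingestion_time() records when Log Analytics indexed the record;
    // the gap from TimeGenerated approximates ingestion latency.
    | extend LatencyMinutes = (ingestion_time() - TimeGenerated) / 1m
    | summarize percentile(LatencyMinutes, 95), max(LatencyMinutes)

Unusually high latency values, or gaps in record counts, during this window would suggest the workspace was among those impacted.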
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Jayadev

Initial Update: Monday, 23 November 2020 22:36 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers in East US may experience log data latency, data gaps, and incorrect alert activation. The issue has been determined to have started on 11/23 at 19:28 UTC.
  • Next Update: Before 11/24 03:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jayadev