Final Update: Thursday, 03 September 2020 00:03 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 09/02, 23:30 UTC. Our logs show the incident started on 09/02, 14:40 UTC. During the 8 hours and 50 minutes it took to resolve the issue, some customers in the East US region might have experienced increased data ingestion latency or incorrect alert activation for the Heartbeat, Perf, or Security query types.
  • Root Cause: The failure was due to a back-end dependency service.
  • Incident Timeline: 8 hours & 50 minutes - 09/02, 14:40 UTC through 09/02, 23:30 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
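For customers who want to confirm that ingestion in their own workspace has recovered, a minimal sketch along the following lines can compare record timestamps against ingestion time for the Heartbeat table. This is an illustrative check only, not a prescribed remediation: the workspace ID is a placeholder, and the use of the azure-monitor-query SDK and the KQL ingestion_time() function here reflects general Log Analytics usage rather than anything stated in this advisory.

```python
# Hedged sketch: estimate recent ingestion delay for the Heartbeat table.
# Assumes the azure-monitor-query and azure-identity packages are installed
# and that the signed-in identity can read the target workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder assumption

# ingestion_time() - TimeGenerated approximates end-to-end ingestion delay.
QUERY = """
Heartbeat
| where TimeGenerated > ago(1h)
| extend IngestionDelay = ingestion_time() - TimeGenerated
| summarize AvgDelayMinutes = avg(IngestionDelay) / 1m,
            MaxDelayMinutes = max(IngestionDelay) / 1m
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

# Print the summarized average and maximum delay (in minutes) for the last hour.
for table in response.tables:
    for row in table.rows:
        print(row)
```

A similar query against the Perf or SecurityEvent tables can be used to spot-check the other affected data types.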

-Saika

Update: Wednesday, 02 September 2020 23:11 UTC

We continue to investigate issues within Log Analytics. The root cause is not fully understood at this time. Some customers continue to experience intermittent Data Latency and incorrect alert activation in the East US region. We are still working to establish the start time of the issue; initial findings indicate that the problem began at 09/02 14:40 UTC. The engineering team is working on a fix to stabilize latencies in the region.
  • Work Around: None
  • Next Update: Before 09/03 02:30 UTC
-Saika

Update: Wednesday, 02 September 2020 19:12 UTC

We continue to investigate issues within Log Analytics. The root cause is not fully understood at this time. Some customers continue to experience intermittent Data Latency and incorrect alert activation in the East US region. We are still working to establish the start time of the issue; initial findings indicate that the problem began at 09/02 14:40 UTC. We currently have no estimate for resolution.
  • Next Update: Before 09/02 23:30 UTC
-Saika

Initial Update: Wednesday, 02 September 2020 16:36 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers may experience intermittent Data Latency and incorrect alert activation for the Heartbeat, Perf, or Security query types in the East US region.
  • Work Around: None
  • Next Update: Before 09/02 20:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Saika