Final Update: Tuesday, 13 October 2020 05:42 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/13, 05:25 UTC. Our logs show the incident started on 10/12, 23:55 UTC, and that during the 5 hours & 30 minutes it took to resolve the issue, some customers may have experienced delays in platform metrics delivered to customer storage, customer Event Hub, or Log Analytics via Diagnostic Settings.
  • Root Cause: The failure was due to configuration issues with one of our dependent services.
  • Incident Timeline: 5 hours & 30 minutes - 10/12, 23:55 UTC through 10/13, 05:25 UTC
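Customers who route platform metrics to Log Analytics via Diagnostic Settings can gauge how far behind delivery ran by comparing each record's event time with its ingestion time. The sketch below is illustrative only, not an official mitigation step: it assumes the azure-monitor-query and azure-identity Python packages, and the workspace ID is a placeholder you would replace with your own.

    # Illustrative sketch: measure ingestion latency of platform metrics
    # routed to Log Analytics during the impact window.
    # Assumes azure-monitor-query and azure-identity are installed;
    # WORKSPACE_ID is a placeholder, not a real value.
    from datetime import datetime, timezone
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder

    # KQL: compare event time (TimeGenerated) with ingestion time to see
    # how delayed metric delivery was, in 15-minute buckets.
    QUERY = """
    AzureMetrics
    | extend IngestionLatency = ingestion_time() - TimeGenerated
    | summarize avg(IngestionLatency), max(IngestionLatency) by bin(TimeGenerated, 15m)
    | order by TimeGenerated asc
    """

    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(
        WORKSPACE_ID,
        QUERY,
        # The incident window reported above, as a (start, end) timespan.
        timespan=(datetime(2020, 10, 12, 23, 55, tzinfo=timezone.utc),
                  datetime(2020, 10, 13, 5, 25, tzinfo=timezone.utc)),
    )

    for table in response.tables:
        for row in table.rows:
            print(row)

Latency well above the steady-state baseline inside the window, returning to normal after 10/13, 05:25 UTC, would be consistent with the impact described above.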
We understand that some customers rely on Azure Monitor Essentials as a critical service, and we apologize for any impact this incident caused.

-Deepika

Initial Update: Tuesday, 13 October 2020 05:17 UTC

We are aware of issues within Azure Monitor Essentials and are actively investigating. Some customers using Azure Monitor may experience delays in platform metrics delivered to customer storage, customer Event Hub, or Log Analytics via Diagnostic Settings across all regions.
  • Next Update: Before 10/13 09:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Rama