Final Update: Wednesday, 22 April 2020 10:09 UTC

We've confirmed that all systems are back to normal with no customer impact as of 4/22, 08:40 UTC. Our logs show the incident started on 4/22, 06:50 UTC, and that during the 1 hour and 50 minutes it took to resolve the issue, some customers in the North Central US region may have experienced data latency, delayed or missed log search alerts, and issues performing CRUD operations against their Log Analytics workspaces.
  • Root Cause: The failure was due to an issue in one of our dependent services.
  • Incident Timeline: 1 hour and 50 minutes - 4/22, 06:50 UTC through 4/22, 08:40 UTC
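If you want to verify whether your own workspace saw elevated ingestion latency during this window, a check along the following lines may help. This is a minimal sketch, not an official diagnostic: it assumes the azure-monitor-query and azure-identity Python packages, uses the Heartbeat table purely as an example, and leaves the workspace ID as a placeholder for your own value. The KQL ingestion_time() function returns when each record was ingested, so comparing it to TimeGenerated approximates end-to-end latency.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: substitute your own Log Analytics workspace GUID.
WORKSPACE_ID = "<your-workspace-id>"

# Heartbeat is just an example table; any table with steady ingestion works.
# ingestion_time() - TimeGenerated approximates end-to-end ingestion latency.
QUERY = """
Heartbeat
| where TimeGenerated between (datetime(2020-04-22T06:50:00Z) .. datetime(2020-04-22T08:40:00Z))
| extend Latency = ingestion_time() - TimeGenerated
| summarize AvgLatency = avg(Latency), MaxLatency = max(Latency)
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query=QUERY,
    timespan=timedelta(days=1),  # outer bound; the query itself pins the incident window
)

# Print the aggregated latency figures for the incident window.
for table in response.tables:
    for row in table.rows:
        print(row)

Latencies well above your normal baseline during the incident window would indicate your workspace was affected.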
We understand that customers rely on Log Search Alerts and Log Analytics as critical services and apologize for any impact this incident caused.

-Madhuri