Final Update: Tuesday, 01 October 2019 18:35 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/1, 18:12 UTC. Our logs show the incident started on 10/1, 16:30 UTC. During the 1 hour and 42 minutes it took to resolve the issue, 5.5% of customers experienced query failures for Log Analytics data, as well as possible missed or unexpectedly triggered log search alerts.
  • Root Cause: The failure was a result of unexpected errors on several backend processor nodes, reducing the overall availability of the service to process requests.
  • Incident Timeline: 1 hour & 42 minutes
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Jeff

Initial Update: Tuesday, 01 October 2019 17:55 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers may be experiencing query failures and delayed ingestion in the East US region.
  • Next Update: Before 10/01 20:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff