Final Update: Friday, 17 January 2020 23:01 UTC

We've confirmed that all systems are back to normal in West Central US, with no remaining customer impact, as of 1/17, 21:32 UTC. Our logs show the incident started on 1/17, 21:11 UTC, and that during the 21 minutes it took to resolve the issue, 28% of customers in West Central US may have experienced latent data ingestion, query failures and/or timeouts, and alerts firing unexpectedly or failing to fire. Customers who want to verify their own workspace can use the query sketch after the summary below.
  • Root Cause: The failure was caused by a back-end maintenance process that ran longer than expected.
  • Incident Timeline: 21 minutes - 1/17, 21:11 UTC through 1/17, 21:32 UTC
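To check whether your workspace was among those affected, a query along the following lines can estimate ingestion latency during the incident window. This is a minimal sketch, assuming the Heartbeat table receives data in your workspace; any high-volume table works the same way with ingestion_time().

    // Hedged sketch: estimate ingestion latency during the incident window.
    // Assumes the Heartbeat table is populated in your workspace.
    Heartbeat
    | where TimeGenerated between (datetime(2020-01-17T21:11:00Z) .. datetime(2020-01-17T21:32:00Z))
    | extend IngestionLatency = ingestion_time() - TimeGenerated
    | summarize avg(IngestionLatency), max(IngestionLatency) by bin(TimeGenerated, 5m)

Latency well above the norm for your workspace in those bins suggests your data was among the latent ingestion described above.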
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Jeff

Initial Update: Friday, 17 January 2020 21:46 UTC

We are aware of issues within Log Analytics in the West Central US region and are actively investigating. Some customers may experience failed data queries and delayed, misfired, or unfired alerts. A query sketch for checking your workspace's ingestion afterward follows the list below.
  • Workaround: None
  • Next Update: Before 01/17 23:00 UTC
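Once queries are succeeding again, a query along these lines can show whether your workspace's ingested volume dipped around the incident. This is a rough sketch, assuming the standard Usage table, which is aggregated at roughly hourly grain, so compare the incident hour against its neighbors rather than minute-level bins.

    // Hedged sketch: compare ingested volume per hour around the incident.
    // Quantity in the Usage table is reported in megabytes.
    Usage
    | where TimeGenerated between (datetime(2020-01-17T18:00:00Z) .. datetime(2020-01-18T02:00:00Z))
    | summarize IngestedMB = sum(Quantity) by bin(TimeGenerated, 1h), DataType
    | order by TimeGenerated asc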
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff