Final Update: Sunday, 06 December 2020 19:44 UTC

We've confirmed that all systems are back to normal, with no remaining customer impact, as of 12/6, 19:30 UTC. Our logs show the incident started on 12/6, 18:08 UTC, and that during the 1 hour 22 minutes it took to resolve, customers in Australia Central may have experienced data access issues, which could have resulted in missed or false alerts.
  • Root Cause: A failed upgrade to a back-end service. The upgrade was rolled back to mitigate the issue.
  • Incident Timeline: 1 hour & 22 minutes - 12/6, 18:08 UTC through 12/6, 19:30 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Ian

Update: Sunday, 06 December 2020 19:17 UTC

Root cause has been isolated to an update to a back-end service in Australia Central that is impacting data access. To address this issue, we are rolling back the update. Some customers may experience data access issues, along with failed or misfired alerts.
  • Workaround: None
  • Next Update: Before 12/06 22:30 UTC
-Ian