Final Update: Saturday, 12 December 2020 16:46 UTC

We've confirmed that all systems are back to normal with no customer impact as of 12/12, 16:15 UTC. Our logs show the incident started on 12/11, ~12:30 UTC, and that during the 27 hours and 45 minutes it took to resolve the issue, a subset of customers experienced failures while querying APIs in Azure Activity Logs.
  • Root Cause: The failure was due to one of the backend services reaching its operational threshold and becoming unhealthy.
  • Incident Timeline: 27 hours & 45 minutes - 12/11, 12:30 UTC through 12/12, 16:15 UTC
We understand that customers rely on Azure Activity Logs as a critical service and apologize for any impact this incident caused.

-Anupama

Update: Saturday, 12 December 2020 15:17 UTC

We continue to investigate issues within Activity Logs. Some customers may experience failures while querying APIs in Azure Activity Logs. Our initial findings indicate that the problem began on 12/11 at ~12:30 UTC. We currently have no estimate for resolution.
  • Workaround: None
  • Next Update: Before 12/12 19:30 UTC
-Sandeep