Final Update: Tuesday, 29 September 2020 07:52 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 09/29, 07:00 UTC. Our logs show the incident started on 09/28, 23:00 UTC. During the 8 hours it took to resolve the issue, some customers may have experienced delayed Activity Log alerts; alerts would still have fired during this time, but may have been delayed by up to 6 hours.
  • Root Cause: We determined that one of the back-end services using Log Alerts caused an unexpectedly large increase in Log Alert activity, which created a backlog and delayed alert processing.
  • Incident Timeline: 8 hours - 09/28, 23:00 UTC through 09/29, 07:00 UTC
We understand that customers rely on Activity Log Alerts as a critical service and apologize for any impact this incident caused.

-Vamshi

Initial Update: Tuesday, 29 September 2020 06:47 UTC

We are aware of issues within Activity Log Alerts and are actively investigating. Some customers may experience delayed Activity Log alerts; alerts will still fire, but may be delayed.
  • Workaround: None
  • Next Update: Before 09/29, 09:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Vamshi