Final Update: Tuesday, 21 May 2019 13:44 UTC

We've confirmed that all systems are back to normal with no customer impact as of 5/21, 12:41 UTC. Our logs show the incident started on 5/21, 06:09 UTC, and that during the 6 hours and 32 minutes it took to resolve the issue, customers in EUS, WCUS, and France Central experienced alerting failures.
  • Root Cause: The failure was due to an issue in one of our dependent services.
  • Incident Timeline: 6 hours & 32 minutes - 5/21, 06:09 UTC through 5/21, 12:41 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.

-Madhuri

Update: Tuesday, 21 May 2019 12:48 UTC

The root cause has been isolated to a failure in one of our dependent services, which was impacting the Alerting service. To address this issue, we are working on restoring the dependent service. Some customers in WCUS, EUS, and France Central may experience alerting failures.
  • Workaround: None
  • Next Update: Before 05/21 17:00 UTC
-Madhuri