Final Update: Friday, 25 October 2019 01:31 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/25, 01:30 UTC. Our logs show the incident started on 10/24, 22:49 UTC, and that during the 2 hours and 41 minutes it took to resolve the issue, customers in the East US region may have experienced delays in their Log Search Alerts.
  • Root Cause: The failure was due to an unusually high volume of alerts, which caused a processing queue to build up.
  • Incident Timeline: 2 hours & 41 minutes - 10/24, 22:49 UTC through 10/25, 01:30 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.

-Jack Cantwell

Initial Update: Friday, 25 October 2019 00:01 UTC

We are aware of issues within Log Search Alerts in the East US region and are actively investigating. Some customers may experience alert latency.
  • Workaround: None at this time
  • Next Update: Before 10/25 01:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jack Cantwell