Experiencing latency for newly created Log Search Alerts - 04/19 - Resolved
Final Update: Friday, 19 April 2019 23:57 UTC

We've confirmed that all systems are back to normal with no customer impact as of 4/19, 23:50 UTC. Our logs show the incident started on 4/19, 19:00 UTC.
  • Root Cause: The failure was caused by a service entering an unhealthy state following an unanticipated spike in traffic to the service.
  • Incident Timeline: 4 hours & 50 minutes, 4/19 19:00 UTC to 4/19 23:50 UTC.
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.

-Matthew Cosner

Initial Update: Friday, 19 April 2019 22:39 UTC

We are aware of issues within Log Search Alerts and are actively investigating. Customers who created Log Search Alerts in the East US region after 2019-04-19 19:00 UTC, or in the West Central US region starting from 2019-04-19 20:00 UTC, may experience a significantly longer delay than normal before these alerts begin to run.

  • Workaround: Log Search Alerts created in other regions are not impacted.
  • Next Update: Before 04/20 01:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Matthew Cosner