Experiencing Data Latency Issue for Log Search Alerts in East US - 10/27 - Resolved
Final Update: Sunday, 27 October 2019 22:11 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 10/27, 22:06 UTC. Our logs show the incident started on 10/27, 19:45 UTC, and that during the 2 hours and 21 minutes it took to resolve the issue, some customers experienced latency in their log search alerts.
  • Root Cause: The failure was due to a problem with a back-end component.
  • Incident Timeline: 2 hours & 21 minutes - 10/27, 19:45 UTC through 10/27, 22:06 UTC (see the sketch below)
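For reference, the impact window above is simply the difference between the start and mitigation timestamps. A minimal Python sketch of that arithmetic, using only the two timestamps reported in this update:

    from datetime import datetime, timezone

    # Incident start and mitigation times reported above (UTC).
    start = datetime(2019, 10, 27, 19, 45, tzinfo=timezone.utc)
    resolved = datetime(2019, 10, 27, 22, 6, tzinfo=timezone.utc)

    # 22:06 - 19:45 = 2:21:00, i.e. 2 hours and 21 minutes of impact.
    impact = resolved - start
    hours, rem = divmod(int(impact.total_seconds()), 3600)
    print(f"Impact window: {hours} hours, {rem // 60} minutes")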
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.

-Jack Cantwell

Update: Sunday, 27 October 2019 20:59 UTC

The root cause has been isolated to a back-end component that was impacting Log Search Alerts in East US. To address this issue, we are redeploying the component. Some customers may continue to experience latency in log search alerts; we estimate 30 minutes before all latency is addressed.
  • Workaround: none at this time
  • Next Update: Before 10/27 22:00 UTC
-Jack Cantwell

Initial Update: Sunday, 27 October 2019 19:54 UTC

We are aware of an issue affecting Log Search Alerts in East US and are actively working to mitigate it. In the meantime, some customers may experience latency in their Log Search Alerts.
  • Workaround: none at this time
  • Next Update: Before 10/27 21:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jack Cantwell