Final Update: Tuesday, 22 October 2019 07:49 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/22, 04:30 UTC. Our logs show the incident started on 10/21, 23:20 UTC, and that during the 5 hours and 10 minutes it took to resolve the issue, customers would have experienced data latency, data access, and alerting failures in the West Europe region.
  • Root Cause: The failure was due to connections timing out to Azure Storage in the West Europe region.
  • Incident Timeline: 5 hours & 10 minutes - 10/21, 23:20 UTC through 10/22, 04:30 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.
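If you would like to verify that data latency in your own workspace has fully recovered, one approach is to compare each record's ingestion time against its generation time. Below is a minimal sketch, assuming the azure-monitor-query Python package is available; the workspace ID is a placeholder and the Heartbeat table is only an example source, not something specific to this incident.

    # Minimal sketch (not an official workaround): measure residual ingestion
    # latency in a Log Analytics workspace. "<your-workspace-id>" is a
    # placeholder; Heartbeat is just an example table with regular records.
    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient, LogsQueryStatus

    # KQL: compare when each record was generated vs. when it was ingested.
    QUERY = """
    Heartbeat
    | extend Latency = ingestion_time() - TimeGenerated
    | summarize AvgLatency = avg(Latency), MaxLatency = max(Latency)
    """

    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(
        workspace_id="<your-workspace-id>",
        query=QUERY,
        timespan=timedelta(hours=1),  # inspect only the most recent hour
    )

    if response.status == LogsQueryStatus.SUCCESS:
        for table in response.tables:
            for row in table.rows:
                print(f"avg latency={row[0]}, max latency={row[1]}")

Latency figures back in the range of a few minutes would indicate ingestion has returned to normal for that workspace.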

-Rama

Update: Tuesday, 22 October 2019 05:01 UTC

Root cause has been isolated to one Azure Storage scale unit in West Europe, which has impacted many services, including Log Search Alerts. Mitigation is in progress; we estimate at least 2 hours before all latency is addressed.
  • Work Around: None
  • Next Update: Before 10/22 08:30 UTC
-Rama

Update: Tuesday, 22 October 2019 02:57 UTC

Root cause has been isolated to one Azure Storage scale unit in West Europe, which has impacted many services, including Log Search Alerts. Mitigation is in progress; we estimate at least 2 hours before all latency is addressed.
  • Work Around: none as of now
  • Next Update: Before 10/22 05:00 UTC
-Jack Cantwell

Initial Update: Tuesday, 22 October 2019 01:20 UTC

We are aware of issues within Log Search Alerts and are actively investigating. Some customers may experience data latency in the West Europe region.
  • Work Around: none at this time
  • Next Update: Before 10/22 03:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.

-Jack Cantwell