Experiencing Latency and Data Loss for Application Insights components in EUS - 10/31

Final Update: Thursday, 31 October 2019 01:08 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/31, 01:00 UTC. A limited number of customers in East US may have experienced data loss during the impact window.
  • Root Cause: The failure was due to a backend service entering an unhealthy state.
  • Incident Timeline: 2 Hours & 0 minutes - 10/30, 23:00 UTC through 10/31, 01:00 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Matt

Update: Thursday, 31 October 2019 00:41 UTC

Starting at 23:00 UTC on October 30, customers using Application Insights in East US may have begun to experience intermittent ingestion latency or may see gaps in data in the portal. Engineers understand the root cause and are redeploying a backend service to attempt to mitigate.
  • Workaround: None.
  • Next Update: Before 02:48 UTC
-Matt