Experiencing Latency and Data Loss issue in East US - 03/17 - Resolved

Final Update: Wednesday, 18 March 2020 06:02 UTC

We've confirmed that all systems are back to normal with no customer impact as of 03/17, 23:10 UTC. Our logs show the incident started on 03/17, 17:46 UTC and that during the 5 hours and 24 minutes it took to resolve the issue, some customers in the East US region may have experienced intermittent data latency, data gaps, and incorrect alert activation.
  • Root Cause: The failure was due to an issue in one of our dependent services.
  • Incident Timeline: 5 hours & 24 minutes - 3/17, 17:46 UTC through 3/17, 23:10 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Madhuri

Update: Wednesday, 18 March 2020 01:43 UTC

We continue to investigate issues within Application Insights. Engineers for the backend dependency are continuing their investigation to find the root cause.
  • Next Update: Before 03/18 14:00 UTC
-Eric Singleton

Update: Tuesday, 17 March 2020 21:43 UTC

We continue to investigate issues within Application Insights. The root cause is not fully understood at this time. The issue has been confirmed to be related to a backend dependency.
  • Next Update: Before 03/18 02:00 UTC
-Eric Singleton

Initial Update: Tuesday, 17 March 2020 19:36 UTC

We are aware of issues within Application Insights and are actively investigating. Customers ingesting telemetry in the East US geographical region between 17:00 UTC and 18:10 UTC may have experienced intermittent data latency, data gaps, and incorrect alert activation.

  • Workaround: None
  • Next Update: Before 03/17 22:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Eric Singleton