Final Update: Sunday, 05 April 2020 09:07 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 04/05, 07:20 UTC. Our logs show the incident started on 04/05, 03:55 UTC, and that during the 3 hours and 25 minutes it took to resolve the issue, some customers may have experienced intermittent data latency, data gaps, and incorrect alert activation in the East US region.
  • Root Cause: The failure was caused by an issue in one of our dependent services.
  • Incident Timeline: 3 hours & 25 minutes - 04/05, 03:55 UTC through 04/05, 07:20 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
-Syed

Initial Update: Sunday, 05 April 2020 06:52 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers using Application Insights in the East US region may experience intermittent data latency, data gaps, and incorrect alert activation.
  • Workaround: None
  • Next Update: Before 04/05 11:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Mohini