Experiencing Latency and Data Gaps issue in Azure Portal for Many Data Types - 07/17 - Resolved
Final Update: Friday, 17 July 2020 21:11 UTC

We've confirmed that all systems are back to normal as of 07/17, 20:30 UTC. Our logs show the incident started on 07/17, 19:20 UTC, and that during the 1 hour and 10 minutes it took to resolve the issue, some customers with Application Insights resources in the South Central US region may have experienced intermittent metrics data gaps and incorrect alert activation.
  • Root Cause: The failure was due to an issue in one of our dependent services.
  • Incident Timeline: 1 hour and 10 minutes - 07/17, 19:20 UTC through 07/17, 20:30 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Sindhu

Initial Update: Friday, 17 July 2020 20:14 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers with Application Insights resources in the South Central US region may experience intermittent metrics data gaps and incorrect alert activation.
  • Workaround: None
  • Next Update: Before 07/17 22:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.

-Sindhu