Experiencing Latency and Data Loss issue in Azure Portal for Many Data Types - 01/28 - Resolved
Final Update: Tuesday, 28 January 2020 09:39 UTC

We've confirmed that all systems are back to normal with no customer impact as of 01/28, 09:30 UTC. Our logs show the incident started on 01/28, 05:59 UTC, and that during the 3 hours 31 minutes it took to resolve the issue, customers ingesting telemetry in the Southeast Asia and East Asia geographical regions may have experienced intermittent data latency, data gaps, and incorrect alert activation.
  • Root Cause: The failure was due to issues with one of our dependent services.
  • Incident Timeline: 3 hours & 31 minutes - 01/28, 05:59 UTC through 01/28, 09:30 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Mohini