Experiencing Data Latency issue in Azure Portal in East US region - 05/28 - Resolved
Final Update: Thursday, 28 May 2020 16:32 UTC

We've confirmed that all systems are back to normal with no customer impact as of 05/28, 16:05 UTC. Our logs show the incident started on 05/28, 15:10 UTC, and that during the 55 minutes it took to resolve the issue, 34% of customers in the East US region experienced data latency, gaps in data, and incorrect alert activation.

  • Root Cause: The failure was due to a delay in processing data in a back-end system.
  • Incident Timeline: 55 minutes - 05/28, 15:10 UTC through 05/28, 16:05 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Jack