Final Update: Thursday, 28 May 2020 15:33 UTC

We've confirmed that all systems are back to normal with no customer impact as of 05/28, 14:34 UTC. Our logs show the incident started on 05/28, 14:12 UTC, and that during the 22 minutes it took to resolve the issue, some customers may have experienced data latency, data gaps, and incorrect alert activation in the East US region.
  • Root Cause: The failure was due to an issue with one of our back-end services.
  • Incident Timeline: 22 minutes - 05/28, 14:12 UTC through 05/28, 14:34 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Satya