Final Update: Thursday, 26 March 2020 22:47 UTC

The issue started happening again about an hour after the earlier mitigation. We can now confirm that all systems are back to normal with no customer impact as of 03/26, 20:00 UTC. Our logs show that the incident started on 03/26, 12:30 UTC, and that during the 7 hours and 30 minutes it took to resolve the issue, some customers using Application Insights in the East US region may have experienced intermittent data latency, data gaps, and incorrect alert activation.
  • Root Cause: The failure was due to an issue in one of our dependent services.
  • Incident Timeline: 7 hours 30 minutes - 03/26, 12:30 UTC through 03/26, 20:00 UTC
We apologize for any inconvenience.

-Jayadev

Update: Thursday, 26 March 2020 14:38 UTC

We've confirmed that all systems are back to normal with no customer impact as of 03/26, 14:20 UTC. Our logs show that the incident started on 03/26, 12:30 UTC, and that during the 1 hour and 50 minutes it took to resolve the issue, some customers in the East US region may have experienced intermittent data latency, data gaps, and incorrect alert activation.
  • Root Cause: The failure was due to an issue in one of our dependent services.
  • Incident Timeline: 1 hour 50 minutes - 03/26, 12:30 UTC through 03/26, 14:20 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Anmol