Experiencing Latency and Data Loss issue in Azure Portal for Many Data Types - 03/08 - Resolved
Final Update: Friday, 08 March 2019 10:22 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 03/08, 09:00 UTC. Our logs show the incident started on 03/01, 21:00 UTC. During the 6 days and 12 hours it took to resolve the issue, some customers in the West US region may have intermittently experienced ingestion latency and missing results from Web tests that ran in the Azure Portal.

  • Root Cause: The failure was due to an issue with one of our backend services.
  • Incident Timeline: 6 Days & 12 Hours - 03/01, 21:00 UTC through 03/08, 09:00 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Anmol