Final Update: Wednesday, 27 May 2020 07:36 UTC

We've confirmed that all systems are back to normal with no customer impact as of 05/27, 07:05 UTC. Our logs show the incident started on 05/26, 21:50 UTC, and that during the 9 hours 15 minutes it took to resolve the issue, some customers may have experienced Data Latency, Data Gaps, and incorrect Alert Activation in the East US region.

  • Root Cause: The failure was due to an issue in one of our back-end services.
  • Incident Timeline: 9 Hours & 15 minutes - 05/26, 21:50 UTC through 05/27, 07:05 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Satya

Update: Wednesday, 27 May 2020 04:25 UTC

We continue to investigate issues within Application Insights. Root cause is not fully understood at this time. Some customers may continue to experience Data Latency, Data Gaps, and incorrect Alert Activation in the East US region. We are working to establish the start time for the issue; initial findings indicate that the problem began at 05/26 21:50 UTC. We currently have no estimate for resolution.
  • Work Around: None
  • Next Update: Before 05/27 08:30 UTC
-Satya

Initial Update: Wednesday, 27 May 2020 01:28 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers using Application Insights resources in the East US region may experience Data Latency, Data Gaps, and incorrect Alert Activation.
  • Work Around: None
  • Next Update: Before 05/27 04:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Sindhu