Experiencing Data Latency issue in Azure Portal for EastUS2 region - 03/15 - Resolved

Published 04-06-2021 11:05 AM
Final Update: Monday, 15 March 2021 02:43 UTC

We've confirmed that all systems were back to normal, with no remaining customer impact, as of 3/15, 02:15 UTC. Our logs show the incident started on 3/15, 00:00 UTC and that, during the 2 hours and 15 minutes it took to resolve the issue, approximately 10,000 customers experienced delayed telemetry ingestion.
  • Root Cause: The issue was caused by a failure in a back-end service that Application Insights relies on.
  • Incident Timeline: 2 hours & 15 minutes - 3/15, 00:00 UTC through 3/15, 02:15 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Jack

Initial Update: Monday, 15 March 2021 01:15 UTC

We are aware of issues within Application Insights in the East US region and are actively investigating. Some customers may experience data ingestion latency.
  • Next Update: Before 03/15 02:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jack
