Final Update: Saturday, 05 October 2019 00:48 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/5, 00:40 UTC. Our logs show the incident started on 10/4, 23:10 UTC, and that during the 1.5 hours it took to resolve the issue, less than 2% of customers experienced data loss or ingestion latency.
  • Root Cause: The failure was due to degradation of a dependent backend service.
  • Incident Timeline: 1 hour & 30 minutes
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Jeff

Update: Saturday, 05 October 2019 00:41 UTC

The root cause has been isolated to degradation in a dependent service that was impacting ingestion in the Korea Central region. To address this issue, we temporarily redirected traffic to a functioning region. Once the dependent service was fixed, traffic was redirected back. Application Insights is now working as expected in Korea Central.
  • Next Update: Before 10/05 02:00 UTC
-Jeff

Initial Update: Friday, 04 October 2019 23:14 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may experience latency and data loss in the Korea Central region.
Workaround: None
Next Update: Before 10/05 01:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff