Experiencing Latency, Data Loss, and Alerting Issues in West Central US - 03/15 - Resolved

Final Update: Sunday, 15 March 2020 22:43 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 3/15, 22:25 UTC. Our logs show the incident started on 3/15, 20:14 UTC. During the 2 hours and 11 minutes it took to resolve the issue, customers experienced latent data ingestion and misfiring log search alerts.
  • Root Cause: The failure was due to a utility problem at the West Central US data center.
  • Incident Timeline: 2 hours & 11 minutes - 3/15, 20:14 UTC through 3/15, 22:25 UTC
We understand that customers rely on Application Insights and Azure Log Analytics as critical services and apologize for any impact this incident caused.

-Jeff

Update: Sunday, 15 March 2020 22:24 UTC

Root cause has been isolated to a utility issue in the West Central US data center that was impacting communications. Azure teams have resolved the issue. Some customers may still see data ingested with a small amount of latency, and possibly misfiring alerts, until ingestion catches up. A query sketch for checking residual ingestion latency follows this update.
  • Next Update: Before 03/16 00:30 UTC
-Jeff
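
For customers who want to confirm that ingestion has caught up in their own workspace, a rough check is to compare each record's ingestion_time() against its TimeGenerated. The sketch below is only an illustration, not part of this incident report; it assumes the azure-identity and azure-monitor-query Python packages, and WORKSPACE_ID is a placeholder you would replace with your own Log Analytics workspace ID.

```python
# Minimal sketch, assuming azure-identity and azure-monitor-query are installed
# and WORKSPACE_ID is replaced with your own Log Analytics workspace ID.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-workspace-id>"  # placeholder, not from the incident report

# KQL: per-10-minute ingestion delay for Heartbeat records over the query timespan.
# ingestion_time() is when Log Analytics received the record; TimeGenerated is the
# record's own timestamp, so the difference approximates ingestion latency.
QUERY = """
Heartbeat
| extend DelayMinutes = datetime_diff('minute', ingestion_time(), TimeGenerated)
| summarize AvgDelay = avg(DelayMinutes), MaxDelay = max(DelayMinutes) by bin(TimeGenerated, 10m)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=3))

# Print one row per 10-minute bucket; delays trending back toward zero indicate
# that ingestion has caught up.
for table in response.tables:
    for row in table.rows:
        print(row)
```

Any table can be substituted for Heartbeat; it is used here only because it is commonly present and emits records at a steady rate, which makes latency trends easy to read.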

Initial Update: Sunday, 15 March 2020 21:40 UTC

We are aware of issues within Application Insights and Azure Log Analytics and are actively investigating. Some customers may experience latency and data loss, configuration failures for alerts, misfired alerts, and other unexpected behaviors.
  • Next Update: Before 03/16 00:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff