Experiencing issues while creating metric alerts with dynamic thresholds - 06/06 - Resolved
Final Update: Thursday, 06 June 2019 10:22 UTC

We've confirmed that all systems are back to normal with no customer impact as of 06/06, 10:05 UTC. Our logs show the incident started on 06/05, 19:15 UTC, and that during the 14 hours and 50 minutes it took to resolve the issue, some customers may have experienced failures loading the dynamic threshold chart while creating metric alerts with dynamic thresholds.
  • Root Cause: The failure was due to configuration changes in one of our dependent services.
  • Incident Timeline: 14 hours & 50 minutes - 06/05, 19:15 UTC through 06/06, 10:05 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Mohini Nikam

Initial Update: Thursday, 06 June 2019 08:39 UTC

We are aware of issues within the Azure Monitoring service and are actively investigating. Some customers may experience failures when trying to view charts with dynamic thresholds in the metric alerts blade.
  • Workaround: None
  • Next Update: Before 06/06 11:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Varun