We've confirmed that all systems are back to normal, with no customer impact, as of 06/06, 10:05 UTC. Our logs show the incident started on 06/05, 19:15 UTC, and that during the 14 hours and 50 minutes it took to resolve the issue, some customers may have experienced failures in the dynamic threshold chart while creating dynamic alerts.
Root Cause: The failure was due to configuration changes in one of our dependent services.
Incident Timeline: 14 hours and 50 minutes - 06/05, 19:15 UTC through 06/06, 10:05 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
Initial Update: Thursday, 06 June 2019 08:39 UTC
We are aware of issues within the Azure Monitoring Service and are actively investigating. Some customers may experience failures while trying to view charts with dynamic thresholds in the metric alerts blade.
Workaround: None
Next Update: Before 06/06 11:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Varun