We've confirmed that all systems are back to normal with no customer impact as of 05/19, 10:10 UTC. Our logs show the incident started on 05/18, 16:30 UTC, and that during the 17 hours 40 minutes it took to resolve the issue, some customers may have experienced failures in the dynamic threshold chart while creating dynamic alerts.
Root Cause: The failure was due to a configuration change in one of our dependent services.
Incident Timeline: 17 hours 40 minutes - 05/18, 16:30 UTC through 05/19, 10:10 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
Initial Update: Sunday, 19 May 2019 07:24 UTC
We are aware of issues within Azure Monitoring Service and are actively investigating. Some customers may experience failures in the dynamic threshold chart while creating dynamic alerts.
Work Around: Ignore the error shown in the dynamic threshold chart and proceed with creating the alert.
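For customers who prefer to avoid the affected chart preview entirely, dynamic-threshold alert rules can also be created programmatically. Below is a minimal sketch using the azure-mgmt-monitor Python SDK; the subscription, resource group, resource ID, rule name, and metric shown are placeholders, and exact model names can vary between SDK versions.

# Minimal sketch: create a dynamic-threshold metric alert via the
# azure-mgmt-monitor SDK, bypassing the portal's preview chart.
# All IDs and names below are placeholders; adjust for your environment.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertMultipleResourceMultipleMetricCriteria,
    DynamicMetricCriteria,
    DynamicThresholdFailingPeriods,
)

subscription_id = "<subscription-id>"       # placeholder
resource_group = "<resource-group>"         # placeholder
target_resource_id = "<full-resource-id>"   # placeholder scope for the alert

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Dynamic threshold: fire when the metric exceeds a machine-learned
# boundary in at least 2 of the last 4 evaluation periods.
criteria = MetricAlertMultipleResourceMultipleMetricCriteria(
    all_of=[
        DynamicMetricCriteria(
            name="dynamic-condition",
            metric_name="requests/failed",  # example Application Insights metric
            time_aggregation="Count",
            operator="GreaterThan",
            alert_sensitivity="Medium",
            failing_periods=DynamicThresholdFailingPeriods(
                number_of_evaluation_periods=4,
                min_failing_periods_to_alert=2,
            ),
        )
    ]
)

client.metric_alerts.create_or_update(
    resource_group,
    "example-dynamic-alert",                # placeholder rule name
    MetricAlertResource(
        location="global",                  # metric alert rules are global resources
        description="Dynamic threshold alert on failed requests",
        severity=3,
        enabled=True,
        scopes=[target_resource_id],
        evaluation_frequency="PT5M",        # evaluate every 5 minutes
        window_size="PT5M",                 # over a 5-minute window
        criteria=criteria,
    ),
)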
Next Update: Before 05/19 09:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Mohini Nikam