We've confirmed that all systems are back to normal with no ongoing customer impact as of 03/28 16:30 UTC. Our telemetry shows the incident started on 03/28 13:15 UTC, and during the 3 hours and 15 minutes it took to resolve the issue, customers using classic alerts in Application Insights would not have experienced alert state changes. As a result, alerts would not have fired for unhealthy conditions, and alerts already in a fired state would not have resolved.
Root Cause: The failure was caused by an incorrect value in the configuration of one of the dependent services in the alerting pipeline. We are continuing to investigate internally to determine the final root cause of the issue.
Incident Timeline: 3 hours and 15 minutes - 03/28 13:15 UTC through 03/28 16:30 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.