Final Update: Wednesday, 18 December 2019 17:26 UTC
We've confirmed that all systems are back to normal, with no ongoing customer impact, as of 12/18, 16:40 UTC. Our logs show the incident started on 12/17, 17:00 UTC, and that during the 23 hours and 40 minutes it took to resolve the issue, 90% of customers in South Central US may have received failure notifications when performing service management operations - such as create, update, delete, and read - for classic metric alerts hosted in this region.
Root Cause: Engineers determined that a recent configuration change caused a backend service responsible for processing service requests to become unhealthy, preventing requests from completing.
Mitigation: Engineers applied a change to the service configuration to mitigate the issue.
Incident Timeline: 23 hours & 40 minutes - 12/17, 17:00 UTC through 12/18, 16:40 UTC
We understand that customers rely on Azure Monitor as a critical service and apologize for any impact this incident caused.
Initial Update: Wednesday, 18 December 2019 16:05 UTC
We are aware of issues within Classic Alerts and are actively investigating. Some customers in South Central US may experience failures when creating or updating alerts.
Work Around: None
Next Update: Before 12/18 18:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Madhuri