We've confirmed that all systems are back to normal, with no customer impact, as of 1/07, 18:47 UTC. Our logs show the incident started on 1/07, 17:38 UTC, and that during the 51 minutes it took to resolve the issue, customers using Azure Monitor may have experienced delayed or missed alerts.
Root Cause: The failure was caused by unhealthy clusters.
Incident Timeline: 51 minutes - 1/07, 17:38 UTC through 1/07, 18:47 UTC
We understand that customers rely on Azure Monitor as a critical service and apologize for any impact this incident caused.