We've confirmed that all systems returned to normal as of 08/04, 02:42 UTC. Our logs show the incident began at 08/04, 00:35 UTC; during the 2 hours and 7 minutes it took to resolve the issue, some customers may have experienced delayed alerts. All alerts eventually fired.
Root Cause: The failure was caused by an issue in one of our back-end services.
Incident Timeline: 2 hours and 7 minutes (08/04, 00:35 UTC to 08/04, 02:42 UTC)
We understand that customers rely on Azure Monitor as a critical service and apologize for any impact this incident caused.