We've confirmed that all systems are back to normal, with no ongoing customer impact, as of 01/21, 10:25 UTC. Our logs show the incident started on 01/21, 08:45 UTC, and that during the 1 hour and 40 minutes it took to resolve the issue, some customers may have experienced missed alerts across regions.
Root Cause: We determined that a backend service responsible for processing alerts became unhealthy following a configuration change introduced during a recent deployment.
Incident Timeline: 1 hour and 40 minutes - 01/21, 08:45 UTC through 01/21, 10:25 UTC
We understand that customers rely on Azure Alerts as a critical service and apologize for any impact this incident caused.