We've confirmed that all systems returned to normal, with no ongoing customer impact, as of 4/16, 17:40 UTC. Our logs show the incident began on 4/16, 11:14 UTC, and that during the roughly 6 hours and 26 minutes it took to resolve, some customers experienced alert rule management failures for metric alerts on Redis cache metrics.
Root Cause: The failure was caused by one of the backend services using an incorrect configuration.
Incident Timeline: 6 hours, 26 minutes - 4/16, 11:14 UTC through 4/16, 17:40 UTC
We understand that customers rely on Metric Alerts as a critical service and apologize for any impact this incident caused.