We've confirmed that all systems are back to normal with no customer impact as of 09/29, 15:28 UTC. Our logs show the incident started on 09/29, 11:00 UTC, and that during the 4 hours and 28 minutes it took to resolve the issue, some customers experienced Alerting latency.
Root Cause: The failure was due to a latency issue in a backend service.
Incident Timeline: 4 hours and 28 minutes - 09/29, 11:00 UTC through 09/29, 15:28 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.
Initial Update: Sunday, 29 September 2019 13:02 UTC
We are aware of issues within Log Search Alerts and are actively investigating. Some customers may experience Alerting latency in the East US region.
Workaround: None
Next Update: Before 09/29 16:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Naresh