We've confirmed that all systems are back to normal with no further customer impact as of 10/28, 14:50 UTC. Our logs show the incident started on 10/28, 12:15 UTC, and that during the 2 hours and 35 minutes it took to resolve the issue, some customers experienced data latency, which caused Log Search Alert failures in the East US region.
Root Cause: The failure was due to an issue with one of our backend services.
Incident Timeline: 2 hours and 35 minutes - 10/28, 12:15 UTC through 10/28, 14:50 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.
Initial Update: Monday, 28 October 2019 13:52 UTC
We are aware of issues within Log Search Alerts and are actively investigating. Some customers may experience data latency in the East US region.
Workaround: None
Next Update: Before 10/28 17:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Naresh