Final Update: Tuesday, 10 March 2020 08:45 UTC
We've confirmed that all systems are back to normal, with no remaining customer impact, as of 03/10, 07:30 UTC. Our logs show the incident started on 03/09, 14:30 UTC, and that during the 17 hours it took to resolve, some customers in the East US region experienced latency in log ingestion as well as misfired alerts.
- Root Cause: The failure was due to an issue with the backend storage service.
- Incident Timeline: 17 Hours - 03/09, 14:30 UTC through 03/10, 07:30 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.