We've confirmed that all systems are back to normal as of 01/02, 00:25 UTC. Our logs show the incident began on 01/01, 21:45 UTC, and that during the 2 hours and 40 minutes it took to resolve the issue, some customers experienced data access and alerting failures.
Root Cause: The failure was due to a memory issue in the underlying infrastructure.
Incident Timeline: 2 hours and 40 minutes - 01/01, 21:45 UTC through 01/02, 00:25 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.
Initial Update: Wednesday, 01 January 2020 22:54 UTC
We are aware of issues within Azure Monitor services and are actively investigating. Some customers may experience query and alerting failures against the data stored in West Europe and East US regions.
Workaround: Retry your operation as needed.
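For customers automating queries against the affected service, the retry workaround can be sketched as a simple retry loop with exponential backoff. This is a minimal illustration only; the function names (`retry`, `flaky_query`) are hypothetical and not part of any Azure SDK.

```python
import time

def retry(operation, max_attempts=4, base_delay=1.0):
    """Retry a transient failure with exponential backoff.

    `operation` is any zero-argument callable (e.g. a wrapper around
    your query or alerting call); names here are illustrative.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...

# Example: a hypothetical operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"
```

Backoff between retries avoids hammering a service that is still recovering; production code would typically also cap the total wait and retry only on transient error types.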
Next Update: Before 01/02 01:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Chandar