We've confirmed that all systems are back to normal, with no further customer impact, as of 4/19, 21:55 UTC. Our logs show the incident started on 4/19, 19:30 UTC and that, at its peak, up to 8% of queries against Log Analytics workspaces in East US would have experienced failures.
Root Cause: The failure was caused by a backend service entering an unhealthy state. The service was restarted to restore normal operation.
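For readers unfamiliar with this kind of mitigation, the sketch below illustrates a generic health-check-and-restart watchdog of the sort the root cause describes. It is illustrative only; the service name, health endpoint, and restart command are assumptions and do not reflect the actual backend implementation.

```python
# Illustrative sketch only: a generic "restart the service when its health probe
# fails" loop. SERVICE_NAME, HEALTH_URL, and the use of systemctl are assumptions
# for illustration, not details of the affected Azure backend.
import subprocess
import time
import urllib.request

SERVICE_NAME = "query-backend"                 # hypothetical service name
HEALTH_URL = "http://localhost:8080/healthz"   # hypothetical health endpoint
CHECK_INTERVAL_SECONDS = 30
FAILURE_THRESHOLD = 3                          # consecutive failures before restart


def check_health(url: str) -> bool:
    """Return True if the service answers its health probe with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False


def restart_service(name: str) -> None:
    """Restart the service via the host's service manager (systemd assumed here)."""
    subprocess.run(["systemctl", "restart", name], check=True)


def watchdog() -> None:
    """Probe the service on an interval and restart it after repeated failures."""
    failures = 0
    while True:
        if check_health(HEALTH_URL):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                restart_service(SERVICE_NAME)
                failures = 0
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    watchdog()
```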
Incident Timeline: 2 Hours & 25 minutes - 4/19, 19:30 UTC through 4/19, 21:55 UTC
We understand that customers rely on Azure Monitor services and apologize for any impact this incident caused.
Initial Update: Friday, 19 April 2019 21:03 UTC
We are aware of issues affecting queries against Log Analytics workspaces in East US starting around 19:30 UTC and are actively investigating. Approximately 8% of queries in this region are impacted.
Workaround: None.
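Although no service-side workaround was available, clients that need to ride out transient query failures during an incident like this commonly wrap their calls in a retry with exponential backoff. The sketch below is a generic illustration, not an official workaround; the run_query call it wraps is a hypothetical stand-in for whatever function issues the workspace query in your client.

```python
# Illustrative sketch only: a generic client-side retry with jittered exponential
# backoff for transient query failures. run_query is a hypothetical stand-in, not
# an Azure SDK call.
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def with_retries(query_fn: Callable[[], T],
                 max_attempts: int = 4,
                 base_delay_seconds: float = 1.0) -> T:
    """Call query_fn, retrying on failure with jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return query_fn()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            delay = base_delay_seconds * (2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, 0.5))


# Usage (hypothetical): with_retries(lambda: run_query("Heartbeat | take 10"))
```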
Next Update: Before 4/19, 23:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Matthew Cosner