We've confirmed that all systems are back to normal, with no customer impact, as of 02/05, 21:16 UTC. Our logs show that the incident started on 02/05, 20:51 UTC. During the 25 minutes it took to resolve the issue, some customers ingesting telemetry in the West US 2 region may have experienced intermittent data latency, data gaps, and incorrect alert activation.
Root Cause: The failure was caused by an issue in one of the backend services.
Incident Timeline: 25 minutes - 02/05, 20:51 UTC through 02/05, 21:16 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.