We've confirmed that all systems are back to normal with no remaining customer impact as of 10/12, 03:30 UTC. Our logs show the incident started on 10/11, 23:30 UTC. During the 4 hours it took to resolve the issue, some customers may have experienced intermittent data latency and incorrect alert activation in the West US 2 region while ingesting telemetry into their Log Analytics workspaces.
Root Cause: The incident was caused by the failure of a dependent backend component.
Incident Timeline: 4 hours - 10/11, 23:30 UTC through 10/12, 03:30 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.