We've confirmed that all systems are back to normal, with no further customer impact, as of 01/25, 11:11 UTC. Our logs show the incident started on 01/25, 09:00 UTC, and that during the 2 hours and 11 minutes it took to resolve the issue, some customers may have experienced intermittent data latency and incorrect alert activation in the East US 2 region.
Root Cause: The failure was due to an unhealthy backend service.
Incident Timeline: 2 hours and 11 minutes - 01/25, 09:00 UTC through 01/25, 11:11 UTC
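Customers who want to assess whether their workspace was affected can compare each record's ingestion time against its generation time across the incident window. The sketch below shows one possible approach using the azure-monitor-query Python SDK; it is a minimal illustration rather than an official mitigation step. The workspace ID is a placeholder, the year in the timestamps is an assumption (this notice only states 01/25), and the query relies on the Heartbeat table being populated in your workspace.

```python
# Minimal sketch: measure ingestion latency in a Log Analytics workspace
# during the incident window. Assumptions: azure-monitor-query and
# azure-identity are installed, WORKSPACE_ID is replaced with your own
# workspace ID, the Heartbeat table is populated, and the year (2025 here)
# is assumed -- the notice gives only 01/25.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<your-workspace-id>"  # placeholder

# Incident window from the timeline above (year assumed).
start = datetime(2025, 1, 25, 9, 0, tzinfo=timezone.utc)
end = datetime(2025, 1, 25, 11, 11, tzinfo=timezone.utc)

# KQL: difference between when each record was ingested and when it was
# generated, bucketed into 10-minute intervals across the incident window.
QUERY = """
Heartbeat
| extend LatencyMinutes = datetime_diff('minute', ingestion_time(), TimeGenerated)
| summarize AvgLatency = avg(LatencyMinutes), MaxLatency = max(LatencyMinutes)
    by bin(TimeGenerated, 10m)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=(start, end))

if response.status == LogsQueryStatus.SUCCESS:
    tables = response.tables
else:  # LogsQueryStatus.PARTIAL: some data returned alongside an error
    print("Partial results:", response.partial_error)
    tables = response.partial_data

for table in tables:
    for row in table.rows:
        print(row)
```

Sustained latency values well above your workspace's baseline inside this window would be consistent with the intermittent data latency described above.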
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.