We've confirmed that all systems are back to normal with no customer impact as of 2/28, 22:12 UTC. Our logs show the incident started on 2/28, 19:08 UTC. During the ~3 hours it took to resolve the issue, customers in Korea Central ingesting telemetry into their Log Analytics workspaces experienced intermittent data latency and incorrect alert activation.
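Customers who want to independently confirm that ingestion latency has returned to normal can compare each record's event time against its ingestion time. The sketch below is one way to do this with the azure-monitor-query Python SDK; it assumes a workspace that receives the standard Heartbeat table, and the workspace ID is a placeholder.

```python
# Sketch: estimate end-to-end ingestion latency in a Log Analytics workspace.
# Assumptions: the workspace receives Heartbeat records, and the caller has
# Log Analytics Reader access. The workspace ID below is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

# KQL: diff ingestion_time() against TimeGenerated to approximate how long
# each record took to become queryable, summarized in 15-minute buckets.
QUERY = """
Heartbeat
| where TimeGenerated > ago(3h)
| extend LatencyMinutes = datetime_diff('minute', ingestion_time(), TimeGenerated)
| summarize AvgLatency = avg(LatencyMinutes), MaxLatency = max(LatencyMinutes)
    by bin(TimeGenerated, 15m)
| order by TimeGenerated asc
"""

response = client.query_workspace(
    workspace_id="<your-workspace-id>",  # placeholder: your workspace GUID
    query=QUERY,
    timespan=timedelta(hours=3),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
```

A sustained drop of MaxLatency back to single-digit minutes after 22:12 UTC would corroborate that ingestion in the workspace has recovered.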
Root Cause: A backend service supporting Log Analytics in Korea Central became unhealthy.
Incident Timeline: ~3 hours, from 2/28, 19:08 UTC through 2/28, 22:12 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.