We've confirmed that all systems are back to normal, with no remaining customer impact, as of 06/18, 18:20 UTC. Our logs show the incident started on 06/18, 16:05 UTC. During the 2 hours and 15 minutes it took to resolve the issue, customers ingesting telemetry into Log Analytics workspaces in the East US 2 region may have experienced intermittent data latency and incorrect alert activation for Heartbeat, Perf, Security Event, and Common Security Log events.
Root Cause: The failure was caused by an issue in one of our backend services.
Incident Timeline: 2 hours and 15 minutes - 06/18, 16:05 UTC through 06/18, 18:20 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.