We've confirmed that all systems are back to normal, with no remaining customer impact, as of 03/15, 00:21 UTC. Our logs show the incident started on 03/14, 23:42 UTC, and that during the 39 minutes it took to resolve the issue, some customers ingesting telemetry in East US 2 experienced intermittent data latency, data gaps, and incorrect alert activation.
Root Cause: One of the services that Application Insights depends on became unhealthy, which impacted telemetry ingestion in East US 2.
Incident Timeline: 39 minutes, from 03/14, 23:42 UTC through 03/15, 00:21 UTC
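Customers who want to check their own telemetry for gaps during the incident window can query their Log Analytics workspace. Below is a minimal sketch using the azure-monitor-query Python SDK; the workspace ID is a placeholder, the calendar year on the timestamps is illustrative (the notice above gives only month and day), and the AppRequests table assumes a workspace-based Application Insights resource.

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# Placeholder: replace with your Log Analytics workspace ID.
WORKSPACE_ID = "<your-log-analytics-workspace-id>"

client = LogsQueryClient(DefaultAzureCredential())

# Count request telemetry in 5-minute bins; empty or unusually low bins
# inside the incident window may indicate a data gap.
query = """
AppRequests
| summarize RecordCount = count() by bin(TimeGenerated, 5m)
| order by TimeGenerated asc
"""

# Incident window from the timeline above; the year is an assumption.
start = datetime(2024, 3, 14, 23, 42, tzinfo=timezone.utc)
end = datetime(2024, 3, 15, 0, 21, tzinfo=timezone.utc)

response = client.query_workspace(WORKSPACE_ID, query, timespan=(start, end))
if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
```

Telemetry delayed by the incident may still have been ingested after the window closed, so a low count in one bin is a signal to investigate rather than proof of permanent data loss.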
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.