We've confirmed that all systems are back to normal, with no ongoing customer impact, as of 6/2, 15:27 UTC. Our logs show the incident started on 6/2, 15:04 UTC. During the 23 minutes it took to resolve the issue, customers in the West US region may have experienced intermittent data latency, data gaps, and incorrect alert activation.
Root Cause: An instance of the ingestion service entered a bad state, causing the failure.
Incident Timeline: 6/2, 15:04 UTC through 6/2, 15:27 UTC (23 minutes)
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.