We've confirmed that all systems are back to normal, with no further customer impact, as of 06/26, 18:15 UTC. Our logs show the incident started on 06/26, 14:00 UTC and took 4 hours and 15 minutes to resolve. During that time, some customers using Application Insights components in West US 2 may have experienced intermittent data latency and incorrect alert activation.
Root Cause: We determined that a downstream backend service degraded to an unhealthy state, causing operational thresholds to be exceeded, which resulted in data latency.
Mitigation: We restarted the downstream backend service, which returned it to a healthy state.
Incident Timeline: 4 hours and 15 minutes - 06/26, 14:00 UTC through 06/26, 18:15 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.