Final Update: Sunday, 26 June 2022 19:44 UTC
We've confirmed that all systems are back to normal with no customer impact as of 06/26, 18:15 UTC. Our logs show the incident started on 06/26, 14:00 UTC and was resolved 4 hours and 15 minutes later. During that window, some customers using Application Insights components in West US 2 may have experienced intermittent data latency and incorrect alert activation.
- Root Cause: We determined that a downstream backend service degraded to an unhealthy state, causing it to exceed its operational threshold, which resulted in data latency.
- Mitigation: We restarted the affected service, which returned the downstream backend service to a healthy state.
- Incident Timeline: 4 Hours & 15 minutes - 06/26, 14:00 UTC through 06/26, 18:15 UTC
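For customers who want to check whether their own telemetry was delayed during the incident window, one approach is to compare each record's event time against its ingestion time. The sketch below is illustrative only and is not part of the original notice; it assumes a workspace-based Application Insights resource (AppRequests table), the azure-monitor-query Python SDK, and a placeholder workspace ID.

```python
# Illustrative sketch (not from the original notice): estimate ingestion latency
# for a workspace-based Application Insights resource during the incident window.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder, replace with your own

# KQL: compare the event time (TimeGenerated) with the time the record was
# actually ingested (ingestion_time()), bucketed into 15-minute intervals.
QUERY = """
AppRequests
| extend LatencySeconds = (ingestion_time() - TimeGenerated) / 1s
| summarize AvgLatencySec = avg(LatencySeconds), MaxLatencySec = max(LatencySeconds)
    by bin(TimeGenerated, 15m)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    WORKSPACE_ID,
    QUERY,
    # Incident window from the notice: 06/26 14:00 UTC through 06/26 18:15 UTC.
    timespan=(
        datetime(2022, 6, 26, 14, 0, tzinfo=timezone.utc),
        datetime(2022, 6, 26, 18, 15, tzinfo=timezone.utc),
    ),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
```

Sustained high values in the latency columns over the incident window would indicate that a given resource was affected by the delayed ingestion described above.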
-Vignesh