Final Update: Friday, 10 January 2020 19:32 UTC
We've confirmed that all systems are back to normal with no customer impact as of 1/10, 17:38 UTC. Our logs show the incident started on 1/10, 16:26 UTC, and that during the approximately 1 hour it took to resolve the issue, customers would have experienced gaps in their data from this time frame, as well as possible latency in data arrival.
- Root Cause: The failure was due to a backend service that became unhealthy. Traffic was re-routed around the unhealthy service.
- Incident Timeline: 1 Hour & 12 minutes - 1/10, 16:26 UTC through 1/10, 17:38 UTC
-Jeff
Initial Update: Friday, 10 January 2020 17:40 UTC
We are aware of issues within Application Insights and are actively investigating. Some customers in Brazil South may experience data ingestion latency.
- Next Update: Before 01/10 20:00 UTC
-Jeff