Final Update: Friday, 02 October 2020 19:45 UTC
We've confirmed that all systems are back to normal with no customer impact as of 10/02, 19:05 UTC. Our logs show the incident started on 10/02, 17:00 UTC, and that during the 2 hours and 5 minutes it took to resolve the issue, 10% of customers experienced periods of increased latency when ingesting Log Analytics data in East US.
- Root Cause: A back-end web role experienced a period of high CPU utilization during an ongoing upgrade, causing some calls from the front-end ingestion service to time out and leading to a drop in front-end availability.
- Incident Timeline: 2 Hours & 05 minutes - 10/02, 17:00 UTC through 10/02, 19:05 UTC
-Vincent