Final Update: Thursday, 12 December 2019 13:10 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/12, 12:52 UTC. Our logs show the incident started on 12/12, 09:10 UTC, and that during the 3 hours and 42 minutes it took to resolve the issue, some customers may have experienced higher-than-expected latency or failures with metric alerts in the East US region.
- Root Cause: The failure was due to a bad deployment in one of our backend services.
- Incident Timeline: 3 hours & 42 minutes - 12/12, 09:10 UTC through 12/12, 12:52 UTC
-Anusha
Update: Thursday, 12 December 2019 10:32 UTC
We continue to investigate issues within Log Analytics. The root cause is not fully understood at this time. Some customers may have experienced higher-than-expected latency or failures with metric alerts in the East US region. Initial findings indicate that the problem began at 12/12 09:10 UTC. We currently have no estimate for resolution.
- Work Around: None
- Next Update: Before 12/12 16:00 UTC