Final Update: Friday, 25 October 2019 16:43 UTC
We've confirmed that all systems are back to normal with no customer impact as of 10/25, 16:42 UTC. Our logs show the incident started on 10/25, 12:45 UTC and that during the 3 hours, 57 minutes it took to resolve the issue, some customers experienced latency in their alerts in the East US region.
- Root Cause: The failure was due to a spike in alerts that created a backlog; a backend system then experienced a failure during scale-out.
- Incident Timeline: 3 Hours & 57 minutes - 10/25, 12:45 UTC through 10/25, 16:42 UTC
-Jack Cantwell
Initial Update: Friday, 25 October 2019 13:24 UTC
We are aware of issues within Log Search Alerts and are actively investigating. Some customers may experience alerting delays or failures for Log Search alerts in the East US region.
- Work Around: None
- Next Update: Before 10/25 15:30 UTC
-Rama