We've confirmed that all systems are back to normal as of 03/03, 18:59 UTC. Our logs show the incident started on 03/03, 15:42 UTC, and that during the 3 hours 17 minutes it took to resolve the issue, customers may have experienced ingestion delays, gaps in data, data access issues, delayed, missed, or misfired log search alerts, and availability test failures in the East US region.
Root Cause: The failure was due to an Azure outage caused by thermal issues, which impacted storage and network infrastructure.
Incident Timeline: 3 hours 17 minutes - 03/03, 15:42 UTC through 03/03, 18:59 UTC
We understand that customers rely on Azure Monitoring Services as a critical service and apologize for any impact this incident caused.
Initial Update: Tuesday, 03 March 2020 16:30 UTC
We are aware of issues within Azure Monitoring Services. Customers may experience ingestion delays, data access issues, delayed, missed, or misfired log search alerts, and availability test failures in the East US region. Initial investigation points to an Azure outage impacting virtual machines in the East US region.
Workaround: None
Next Update: Before 03/03 20:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.