We've confirmed that all systems are back to normal with no customer impact as of 05/02, 23:30 UTC. Our logs show the incident started on 05/02, 19:50 UTC, and that during the 3 hours and 40 minutes that it took to resolve the issue, some customers might have experienced data access and alerting failures across multiple regions.
Root Cause: The root cause has been isolated to a DNS issue that impacted our upstream storage services.
Incident Timeline: 3 hours & 40 minutes - 05/02, 19:50 UTC through 05/02, 23:30 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
Update: Thursday, 02 May 2019 21:38 UTC
Root cause has been isolated to a DNS issue that impacted our upstream storage services. This issue was in turn impacting data access and ingestion for Azure Monitor services. To address this issue, the networking team has applied a mitigation, and most of our services are on the recovery path. Data access is working as expected; however, some customers may continue to see data latency until the issue is fully mitigated.
Next Update: Before 05/03 01:00 UTC
Initial Update: Thursday, 02 May 2019 20:35 UTC
We are aware of issues within Azure Monitor services and are actively investigating. Customers may experience data access issues across multiple locations. We will provide more updates as we learn more.
Next Update: Before 05/02 23:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Arvind