Final Update: Wednesday, 18 March 2020 06:02 UTC
We've confirmed that all systems are back to normal, with no customer impact, as of 03/17, 23:10 UTC. Our logs show the incident started on 03/17, 17:46 UTC, and that during the 5 hours and 24 minutes it took to resolve the issue, some customers in the East US region may have experienced intermittent data latency, data gaps, and incorrect alert activation.
- Root Cause: The failure was due to an issue in one of our dependent services.
- Incident Timeline: 5 hours & 24 minutes - 03/17, 17:46 UTC through 03/17, 23:10 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
-Madhuri
Update: Wednesday, 18 March 2020 01:43 UTC
We continue to investigate issues within Application Insights. Engineers for the dependent backend service are continuing their investigation to find the root cause.
- Next Update: Before 03/18 14:00 UTC
-Eric Singleton
Update: Tuesday, 17 March 2020 21:43 UTC
We continue to investigate issues within Application Insights. Root cause is not fully understood at this time. The issue has been confirmed to be related to a backend dependency.
- Next Update: Before 03/18 02:00 UTC
-Eric Singleton
Initial Update: Tuesday, 17 March 2020 19:36 UTC
We are aware of issues within Application Insights and are actively investigating. Customers ingesting telemetry in the East US region between 17:00 UTC and 18:10 UTC may have experienced intermittent data latency, data gaps, and incorrect alert activation.
- Workaround: None
- Next Update: Before 03/17 22:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Eric Singleton