We've confirmed that all systems are back to normal with no customer impact as of 10/29, 17:02 UTC. Our logs show the incident started on 10/29, 16:45 UTC and that during the 17 minutes it took to resolve the issue, 100% of customers experienced intermittent data latency, data gaps, and incorrect alert activation.
Root Cause: We determined that instances of an Application Insights backend layer that handles write requests became unhealthy, delaying or preventing requests from completing. We are continuing to investigate the underlying cause of the unhealthy backend services.
Incident Timeline: 17 minutes - 10/29, 16:45 UTC through 10/29, 17:02 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
Initial Update: Thursday, 29 October 2020 17:06 UTC
We are aware of issues within Application Insights as of 2020-10-29 16:45 UTC and are actively investigating. Application Insights customers ingesting telemetry in East US 2 may experience intermittent data latency, data gaps and incorrect alert activation.
Work Around: None
Next Update: Before 10/29 19:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Arish B