We've confirmed that all systems returned to normal, with no remaining customer impact, as of 10/14, 09:02 UTC. Our logs show the incident began at 10/14, 08:16 UTC; during the 46 minutes it took to resolve the issue, some customers may have experienced 500 errors and elevated latency when querying App services.
Root Cause: The failure was due to a backend service becoming unhealthy.
Incident Timeline: 10/14, 08:16 UTC through 10/14, 09:02 UTC (46 minutes)
We understand that customers rely on Azure Monitor Essentials as a critical service and apologize for any impact this incident may have caused.