We have confirmed that all systems returned to normal, with no further customer impact, as of 10/14, 13:52 UTC. Our logs show the incident began on 10/14, 13:00 UTC. During the 52 minutes it took to resolve the issue, some customers may have experienced HTTP 500 errors and elevated latency when querying App services.
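As general guidance rather than a remediation step from this incident, transient 500 errors of this kind can often be absorbed on the client side with retries and exponential backoff. The sketch below is a minimal, hypothetical Python example; the endpoint URL, timeout, and retry parameters are illustrative assumptions, not values taken from this report.

    import time
    import urllib.request
    import urllib.error

    def query_with_retry(url, max_attempts=4, base_delay=1.0):
        """Issue a GET request, retrying on transient HTTP 5xx responses.

        Hypothetical helper: neither the endpoint nor the retry policy
        comes from the incident report; tune both for your workload.
        """
        for attempt in range(1, max_attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except urllib.error.HTTPError as err:
                # Retry only on server-side errors such as the 500s seen
                # during this window; client errors (4xx) will not succeed
                # on retry, so re-raise them immediately.
                if err.code < 500 or attempt == max_attempts:
                    raise
            except urllib.error.URLError:
                # Connection-level failures are also retried until the
                # attempt budget is exhausted.
                if attempt == max_attempts:
                    raise
            # Exponential backoff spreads retries out while the backend recovers.
            time.sleep(base_delay * 2 ** (attempt - 1))

    # Example call (hypothetical URL):
    # data = query_with_retry("https://example.com/api/app-services/query")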
Root Cause: A backend service became unhealthy, causing queries to App services to fail with 500 errors or respond slowly.
Incident Timeline: 10/14, 13:00 UTC through 10/14, 13:52 UTC (52 minutes)
We understand that customers rely on Azure Monitor Essentials as a critical service and apologize for any impact this incident caused.