We've confirmed that all systems are back to normal, with no customer impact, as of 06/08, 12:22 UTC. Our logs show the incident started on 06/08, 06:53 UTC, and that during the 5 hours and 29 minutes it took to resolve the issue, 77 customers may have experienced failures in triggering alerts from the OMS portal.
Root Cause: The failure was due to recent changes in one of our backend services.
Incident Timeline: 5 hours and 29 minutes - 06/08, 06:53 UTC through 06/08, 12:22 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
Update: Friday, 08 June 2018 10:04 UTC
We continue to investigate issues within Log Analytics. The root cause is not fully understood at this time. Some customers continue to experience missing alerts in the OMS portal. We are working to establish the start time for the issue; initial findings indicate that the problem began at 06/08, 06:53 UTC.