We've confirmed that all systems are back to normal, with no customer impact, as of 6/30, 10:05 PM UTC. Our logs show the incident started on 6/30, 12 AM UTC, and that during the ~22 hours it took to resolve the issue, customers in East US experienced data latency issues while accessing their data in Log Analytics.
Root Cause: The failure was due to a recent configuration change to a backend service, which exhausted available resources.
Incident Timeline: 22 hours, 6/30, 12 AM UTC through 6/30, 10:05 PM UTC
We understand that customers rely on Log Analytics as a critical service and apologize for any impact this incident caused.
Update: Sunday, 30 June 2019 19:50 UTC
Root cause has been isolated to a recent change to a backend service which exhausted available resources. To address this issue, engineers have reconfigured the backend instance and scaled out the resources. Azure diagnostic logs from Azure Storage are now working as expected. Some customers may still experience some latency, and we estimate 6 hours before all latent data is processed.
Next Update: Before 07/01 02:00 UTC
Initial Update: Sunday, 30 June 2019 17:21 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers of Log Analytics in the East US region may experience intermittent data latency when accessing Azure diagnostic logs from Azure Storage in their Log Analytics workspace.
Workaround: None
Next Update: Before 06/30 19:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Anupama