Final Update: Wednesday, 23 January 2019 12:59 UTC
We've confirmed that all systems are back to normal, with no customer impact, as of 01/23, 12:50 UTC. Our logs show the incident started on 01/23, 08:00 UTC, and that during the 4 hours and 50 minutes it took to resolve the issue, some customers in East US and Japan East may have experienced log processing delays with resources hosted in those regions.
Root Cause: The failure was caused by a recent deployment change in one of our backend services.
Incident Timeline: 4 hours and 50 minutes - 01/23, 08:00 UTC through 01/23, 12:50 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
Initial Update: Wednesday, 23 January 2019 11:43 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers in East US and Japan East may experience higher data ingestion latency into their workspaces.
Workaround: None
Next Update: Before 01/23 14:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.