We've confirmed that all systems are back to normal, with no customer impact, as of 12/21, 22:57 UTC. Our logs show the incident started on 12/21, 18:01 UTC, and that throughout its duration customers using Log Analytics, Log Search alerts, and/or Azure Sentinel in the West Central US region may have experienced data latency, delayed or missing alerts, and incorrect metrics when accessing their data.
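Customers who want to confirm that ingestion latency in their own workspace has returned to normal can run a check along the following lines. This is a minimal sketch in Python using the azure-monitor-query SDK; the Heartbeat table and the workspace ID environment variable are illustrative assumptions, not part of this incident's tooling.

```python
# Minimal sketch: measure end-to-end ingestion latency in a Log Analytics
# workspace. Assumes the azure-identity and azure-monitor-query packages,
# with the workspace ID supplied via an (illustrative) environment variable.
import os
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# ingestion_time() records when a row became available for querying;
# comparing it to TimeGenerated exposes end-to-end ingestion latency.
QUERY = """
Heartbeat
| where TimeGenerated > ago(1h)
| extend LatencySeconds = datetime_diff('second', ingestion_time(), TimeGenerated)
| summarize percentile(LatencySeconds, 95)
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id=os.environ["LOG_ANALYTICS_WORKSPACE_ID"],  # illustrative name
    query=QUERY,
    timespan=timedelta(hours=1),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)  # 95th-percentile ingestion latency, in seconds
```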
Root Cause: The failure was due to a dependent downstream service that was degraded by a configuration change.
Mitigation Step: The owning service team applied a corrective configuration change, after which the issues began to subside.
Incident Timeline: 4 hours 56 minutes; 12/21, 18:01 UTC through 12/21, 22:57 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
Update: Tuesday, 21 December 2021 21:15 UTC
We continue to investigate issues affecting customers using Log Analytics and/or Azure Sentinel in West Central US. The root cause has been identified as a cluster configuration deployment that caused the cluster in that region to go down. Some customers continue to experience data latency when trying to access data. We are working to establish the start time for the issue; initial findings indicate that the problem began at 12/21 ~06:34 PM UTC. We currently have no estimate for resolution.