Final Update: Tuesday, 14 January 2020 05:41 UTC

We've confirmed that all systems are back to normal with no customer impact as of 01/14, 04:45 UTC. Our logs show the incident started on 01/14, 04:10 UTC. During the 35 minutes it took to resolve the issue, some customers in West Central US experienced data access issues in the Azure portal and query failures through the API for Log Analytics, as well as alerting latency and alerting failures for Log Search Alerts.
  • Root Cause: The failure was due to an issue with one of our dependent services.
  • Incident Timeline: 35 minutes - 01/14, 04:10 UTC through 01/14, 04:45 UTC
We understand that customers rely on Azure Log Analytics and Log Search Alerts as critical services and apologize for any impact this incident caused.

-Monish