Final Update: Saturday, 14 September 2019 16:30 UTC

We've confirmed that all systems are back to normal with no customer impact as of 09/14, 15:40 UTC. Our logs show the incident started on 09/14, 14:45 UTC, and that during the 55 minutes it took to resolve the issue, 9% of customers experienced data access issues in the Azure and Log Analytics portals and may also have experienced alerting failures in the East US region.
  • Root Cause: The failure was due to an issue in one of our backend services.
  • Incident Timeline: 55 minutes - 09/14, 14:45 UTC through 09/14, 15:40 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Rama

Initial Update: Saturday, 14 September 2019 16:01 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers may experience issues with data access in the Azure portal and alerting failures in the East US region.
  • Workaround: None
  • Next Update: Before 09/14 18:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Rama