Experiencing Data Access Issue in Azure portal for Log Analytics - 10/26 - Resolved
Final Update: Monday, 26 October 2020 15:17 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/26, 14:10 UTC. Our logs show the incident started on 10/26, 13:34 UTC, and that during the 44 minutes it took to resolve the issue, some customers may have experienced data access issues in the Azure portal, as well as missed or delayed Log Search alerts in the East US2 region.
  • Root Cause: The failure was caused by an issue in one of our dependent services.
  • Incident Timeline: 44 minutes - 10/26, 13:34 UTC through 10/26, 14:10 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Deepika

Initial Update: Monday, 26 October 2020 14:02 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers may experience data access issues in the Azure portal, as well as missed or delayed Log Search alerts in the East US2 region.
  • Workaround: None
  • Next Update: Before 10/26 16:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Vamshi