Experiencing Data Access Issue in Azure portal for Log Analytics - 06/02 - Resolved
Final Update: Tuesday, 02 June 2020 01:45 UTC

We've confirmed that all systems are back to normal with no customer impact as of 6/2, 01:41 UTC. Our logs show the incident started on 6/2, 00:00 UTC, and that during the 1 hour and 41 minutes it took to resolve the issue, 9,500 customers experienced failed queries and delayed or missing alerts.
  • Root Cause: The failure was due to a backend dependency that was in a bad state.
  • Incident Timeline: 1 hour & 41 minutes - 6/2, 00:00 UTC through 6/2, 01:41 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Jeff

Update: Tuesday, 02 June 2020 01:22 UTC

The root cause has been isolated to a backend service issue that was impacting users' ability to query and causing misfiring alerts in East US, East US 2, Southeast Australia, North EU, and East Japan. To address this issue we have reset the backend service.
  • Workaround: None
  • Next Update: Before 06/02 03:30 UTC
-Jeff

Initial Update: Tuesday, 02 June 2020 00:39 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers in East Japan may experience failures when querying for data from their workspace.
  • Workaround: None
  • Next Update: Before 06/02 03:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff