Final Update: Monday, 06 January 2020 20:42 UTC

We've confirmed that all systems are back to normal with no customer impact as of 1/6, 19:45 UTC. Our logs show the incident started on 1/6, 18:30 UTC, and that during the 1 hour and 15 minutes it took to resolve the issue, 6.4% of customers experienced data access issues in the Azure portal, query failures in Log Analytics, and alerting failures for Log Search Alerts.
  • Root Cause: The failure was due to unhealthy responses from a backend service.
  • Incident Timeline: 1 hour and 15 minutes - 1/6, 18:30 UTC through 1/6, 19:45 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.

-Jeff

Initial Update: Monday, 06 January 2020 19:44 UTC

We are aware of issues within Log Search Alerts in the West EU region and are actively investigating. Some customers may experience data access issues in the Azure portal, query failures in Log Analytics, and alerting failures for Log Search Alerts.
  • Next Update: Before 01/06 21:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff