Experiencing Alerting Failure for Log Analytics Metric Alerts - 10/28 - Resolved
Final Update: Monday, 28 October 2019 23:44 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/28, 23:40 UTC. 
  • Root Cause: The failure was due to a backend service that reached an operational threshold.
  • Incident Timeline: 10 hours & 10 minutes - 10/28, 13:30 UTC through 10/28, 23:40 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Matt

Update: Monday, 28 October 2019 21:27 UTC

Engineers have scaled out services, which has partially mitigated the impact; the count of impacted subscriptions has been reduced. However, some customers may continue to experience alerting failures from Log Analytics Metric Alerts in the West Europe region. We still do not have an estimate for when the system will be fully recovered.
  • Workaround: None
  • Next Update: Before 10/28 23:30 UTC
-Matt

Update: Monday, 28 October 2019 18:51 UTC

We continue to investigate issues within Log Analytics Metric Alerts. The root cause is not fully understood at this time. Some customers continue to experience alerting failures from Log Analytics Metric Alerts in the West Europe region. We currently have no estimate for resolution.
  • Workaround: None
  • Next Update: Before 10/28 21:00 UTC
-Matt

Initial Update: Monday, 28 October 2019 15:58 UTC

We are aware of issues within Log Analytics Metric Alerts and are actively investigating. Some customers may experience alerting failures from Log Analytics Metric Alerts in the West Europe region.
  • Workaround: None
  • Next Update: Before 10/28 19:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Naresh