Experiencing Data Access Issues for Log Analytics in West Europe - 11/15 - Resolved
Final Update: Friday, 15 November 2019 16:49 UTC

We've confirmed that all systems are back to normal with no customer impact as of 11/15, 15:56 UTC. Our logs show the incident started on 11/15, 14:20 UTC, and that during the 1 hour and 36 minutes it took to resolve the issue, a subset of customers in West Europe might have experienced issues accessing data in Log Analytics, as well as latency in Log Search Alerts.
  • Root Cause: The failure was due to issues in one of our dependent services.
  • Incident Timeline: 1 hour & 36 minutes - 11/15, 14:20 UTC through 11/15, 15:56 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.

-Venkat

Initial Update: Friday, 15 November 2019 15:53 UTC

We are aware of issues within Log Search Alerts and are actively investigating. Some customers may experience alerting failures in the West Europe region.
  • Workaround: None
  • Next Update: Before 11/15 20:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Madhuri