Experiencing Issues with Azure Log Analytics Service in East US - 08/28 - Resolved
Final Update: Wednesday, 28 August 2019 20:55 UTC

We've confirmed that all systems are back to normal as of 08/28, 20:00 UTC. Our logs show the incident started on 08/28, 14:50 UTC, and that during the 5 hours and 10 minutes it took to resolve the issue, a subset of customers may have experienced issues with Log Search Alerts as well as with querying data in the East US region.
  • Root Cause: The failure was due to load on our storage system, which is responsible for data ingestion and queries.
  • Incident Timeline: 5 hours and 10 minutes - 08/28, 14:50 UTC through 08/28, 20:00 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.

-Jayadev

Initial Update: Wednesday, 28 August 2019 16:34 UTC

We are aware of issues within the Azure Log Analytics service and are actively investigating. Some customers may experience issues related to configured log search alerts and querying data in the East US region.
  • Work Around: None
  • Next Update: Before 08/28 21:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jayadev