Experiencing Data Latency Issues for WEU in Azure and OMS portal for Log Analytics - 02/14 - Resolved
First published on MSDN on Feb 14, 2019
Final Update: Thursday, 14 February 2019 18:33 UTC

We've confirmed that all systems are back to normal with no customer impact as of 02/14, 16:30 UTC. Our logs show the incident started on 02/14, 13:30 UTC, and that during the 3 hours it took to resolve the issue, approximately 10% of the data ingested into the West Europe region may have experienced latency for customers of the Azure Log Analytics service in that region.

  • Root Cause: The failure was due to an issue with one of our backend services.
  • Incident Timeline: 3 Hours - 02/14, 13:30 UTC through 02/14, 16:30 UTC
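Customers who want to gauge whether their own West Europe workspace saw delayed ingestion during this window can compare each record's event time (TimeGenerated) with its ingestion time. The sketch below is illustrative only: it assumes the azure-monitor-query Python package, uses the standard Heartbeat table purely as an example, and the workspace ID placeholder is hypothetical.

```python
from datetime import datetime, timezone
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Hypothetical placeholder -- replace with your own Log Analytics workspace ID.
WORKSPACE_ID = "<your-workspace-id>"

# KQL that measures how far ingestion lagged behind event time
# during the incident window (02/14 13:30 - 16:30 UTC).
QUERY = """
Heartbeat
| where TimeGenerated between (datetime(2019-02-14T13:30:00Z) .. datetime(2019-02-14T16:30:00Z))
| extend IngestionDelayMinutes = datetime_diff('minute', ingestion_time(), TimeGenerated)
| summarize AvgDelay = avg(IngestionDelayMinutes), MaxDelay = max(IngestionDelayMinutes)
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query=QUERY,
    timespan=(datetime(2019, 2, 14, 13, 0, tzinfo=timezone.utc),
              datetime(2019, 2, 14, 17, 0, tzinfo=timezone.utc)),
)

# Assumes a fully successful query; a partial result would need separate handling.
for table in response.tables:
    for row in table.rows:
        print(row)
```

A sustained delay well above your usual ingestion baseline during that window would be consistent with the latency described above.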
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Deepesh