
First published on MSDN on Feb 15, 2019

Final Update: Friday, 15 February 2019 16:36 UTC

We've confirmed that all systems are back to normal with no customer impact as of 02/15, 16:27 UTC. Our logs show the incident started on 02/15, 12:30 UTC, and that during the ~4 hours it took to resolve the issue, a small percentage of customers experienced higher than usual latency on some portion of their data.
  • Root Cause: The failure was due to unusually high utilization of an Azure SQL resource.
  • Incident Timeline: ~4 Hours - 02/15, 12:30 UTC through 02/15, 16:27 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Naresh
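For customers who want to gauge ingestion latency in their own workspace, a minimal query sketch follows. It assumes the Heartbeat table is populated (any table with a TimeGenerated column works) and uses the standard Kusto ingestion_time() function, which returns when a record became available for query:

  // Estimate ingestion latency over the incident window (UTC).
  // Heartbeat is an example table; substitute any table you ingest.
  Heartbeat
  | where TimeGenerated between (datetime(2019-02-15 12:30) .. datetime(2019-02-15 16:30))
  | extend IngestionLatency = ingestion_time() - TimeGenerated
  | summarize AvgLatency = avg(IngestionLatency), P95Latency = percentile(IngestionLatency, 95) by bin(TimeGenerated, 10m)
  | order by TimeGenerated asc

Latency well above the usual few minutes during the window above would be consistent with the impact described in this incident.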
Update: Friday, 15 February 2019 15:34 UTC

We continue to investigate issues within Log Analytics. Root cause is not fully understood at this time. Some customers in WEU continue to experience increased latency for scanning, ODS, and InMem. We are working to establish the start time for the issue; initial findings indicate that the problem began at 02/15 ~12:30 UTC. We currently have no estimate for resolution.
  • Workaround: None
  • Next Update: Before 02/15 18:00 UTC
-Naresh
Initial Update: Friday, 15 February 2019 15:21 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers in WEU may experience data ingestion latency into their workspaces beginning at 12:30 UTC.
  • Workaround: None
  • Next Update: Before 02/15 17:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Naresh