Experiencing Data Latency issue in Azure Portal for Many Data Types - 02/20 - Resolved
Final Update: Thursday, 14 February 2019 14:49 UTC

We've confirmed that all systems are back to normal with no customer impact as of 02/14, 13:10 UTC. Our logs show the incident started on 02/14, 08:44 UTC, and that during the 4 hours & 26 minutes it took to resolve the issue, some customers experienced Data Latency in the East US (EUS) region.
  • Root Cause: The failure was due to a backend cluster becoming unhealthy.
  • Incident Timeline: 4 hours & 26 minutes - 02/14, 08:44 UTC through 02/14, 13:10 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Naresh
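
For customers who want to confirm that ingestion has caught up in their own workspace, the sketch below shows one possible way to estimate ingestion latency with the azure-monitor-query Python SDK. The workspace ID placeholder, the use of the Heartbeat table, and the one-hour window are illustrative assumptions, not part of the incident record.

```python
# Minimal sketch: estimate Log Analytics ingestion latency for your own workspace.
# Assumes the azure-identity and azure-monitor-query packages are installed and
# that the signed-in identity has read access to the workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Compare ingestion_time() with TimeGenerated to approximate end-to-end latency.
query = """
Heartbeat
| where TimeGenerated > ago(1h)
| extend LatencyMinutes = datetime_diff('minute', ingestion_time(), TimeGenerated)
| summarize AvgLatencyMinutes = avg(LatencyMinutes), MaxLatencyMinutes = max(LatencyMinutes)
"""

response = client.query_workspace(
    workspace_id="<your-workspace-id>",  # placeholder, replace with your own workspace ID
    query=query,
    timespan=timedelta(hours=1),
)

for table in response.tables:
    print(table.columns)
    for row in table.rows:
        print(row)
```

During an impact window like the one above, latency values well beyond the usual ingestion delay for the workspace would be consistent with the Data Latency described in this incident.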

Update: Thursday, 14 February 2019 12:34 UTC

We continue to investigate issues within Log Analytics. The Data Access issue has now been fixed; however, some customers may still experience Data Latency in EUS. The root cause is not fully understood at this time. We are working to establish the start time for the issue; initial findings indicate that the problem began at 02/14 08:44 UTC. We currently have no estimate for resolution of the Data Latency issue.
  • Workaround: None
  • Next Update: Before 02/14 17:00 UTC
-Madhuri Poloju

Initial Update: Thursday, 14 February 2019 10:23 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers in EUS may experience Data Access and Data Latency issues.
  • Workaround: None
  • Next Update: Before 02/14 12:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
- Madhuri Poloju