Experiencing Data Latency issue in Azure Portal for Many Data Types - 05/28 - Resolved
Final Update: Tuesday, 28 May 2019 21:19 UTC

We've confirmed that all systems are back to normal with no customer impact as of 05/28, 21:19 UTC. Our logs show the incident started on 05/28, 19:10 UTC, and that during the 30 minutes it took to resolve the issue, ~4% of customers using Application Insights and Azure Log Analytics may have experienced data access and data latency issues.
  • Root Cause: The failure was due to an Azure storage issue.
  • Incident Timeline: 30 minutes - 05/28, 19:10 UTC through 05/28, 19:40 UTC
We understand that customers rely on Application Insights and Azure Log Analytics as critical services and apologize for any impact this incident caused.

-Sindhu