Final Update: Wednesday, 07 October 2020 20:09 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/7, 19:00 UTC. Our logs show the incident started on 10/7 at approximately 18:30 UTC, and that during the 30 minutes it took to resolve the issue, most Application Insights and Log Analytics customers experienced outages across various services.
Root Cause: The failure was due to a back-end networking issue that affected a large number of Azure services.
Incident Timeline: 0 Hours & 30 minutes - 10/7, 18:30 UTC through 10/7, 19:00 UTC
We understand that customers rely on Application Insights and Log Analytics as critical services and apologize for any impact this incident caused.

-Jack Cantwell