Experiencing Data Latency issue in Azure Portal for Log Analytics in East US - 07/10 - Resolved
Final Update: Wednesday, 10 July 2019 22:26 UTC

We've confirmed that all systems are back to normal with no customer impact as of 7/10, 21:00 UTC. Our logs show the incident started on 7/10, 07:40 UTC, and that during the 13 hours and 20 minutes it took to resolve the issue, 82.8% of customers in the East US region experienced increased latency of greater than 1 hour.

Root Cause: The failure was due to an issue with one of our backend services.
Incident Timeline: 13 hours 20 minutes - 7/10, 07:40 UTC through 7/10, 21:00 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Leela