Experiencing Data Latency in Azure portal for Activity Log Alerts - 05/30 - Resolved

Published May 30 2020 05:04 AM
Final Update: Saturday, 30 May 2020 14:38 UTC

We've confirmed that all systems are back to normal with no customer impact as of 05/30, 13:33 UTC. Our logs show the incident started on 05/30, 06:40 UTC, and that during the 6 hours and 53 minutes it took to resolve the issue, some customers experienced delays in Activity Log Alerts globally.
  • Root Cause: The failure was due to an issue in one of our back-end services.
  • Incident Timeline: 6 hours & 53 minutes - 05/30, 06:40 UTC through 05/30, 13:33 UTC
We understand that customers rely on Activity Log Alerts as a critical service and apologize for any impact this incident caused.

-Satya

Initial Update: Saturday, 30 May 2020 12:02 UTC

We are aware of issues within Activity Log Alerts and are actively investigating. Some customers may experience latency in receiving Activity Log Alerts globally.
  • Work Around: None
  • Next Update: Before 05/30 15:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Satya
