Final Update: Saturday, 15 February 2020 01:53 UTC

We've confirmed that all systems are back to normal with no customer impact as of 02/15, 01:00 UTC. Our logs show the incident started on 02/14, 23:30 UTC, and that during the 90 minutes it took to resolve the issue, some Azure customers experienced latency in Activity Log Alerts.
  • Root Cause: The root cause has been isolated to a backup in a storage queue, which caused alert latency (see the sketch below this list).
  • Incident Timeline: 1 hour & 30 minutes - 02/14, 23:30 UTC through 02/15, 01:00 UTC
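For readers curious about the failure mode, here is a minimal, hypothetical Python sketch (not the actual Azure alerting pipeline; the arrival and drain rates are made up) of how a backup in a processing queue turns into alert latency: when alerts arrive faster than the queue drains, each new alert waits behind the accumulated backlog.

```python
from collections import deque

def simulate(arrivals_per_min, drains_per_min, minutes):
    """Return per-alert latency (in minutes) for a fixed-rate queue over `minutes`."""
    queue = deque()
    latencies = []
    for now in range(minutes):
        for _ in range(arrivals_per_min):
            queue.append(now)                        # enqueue each alert with its arrival time
        for _ in range(min(drains_per_min, len(queue))):
            latencies.append(now - queue.popleft())  # latency = processing time - arrival time
    return latencies

# Hypothetical numbers: a backed-up queue (arrivals outpace drains) vs. a healthy one.
backlog = simulate(arrivals_per_min=100, drains_per_min=80, minutes=90)
healthy = simulate(arrivals_per_min=100, drains_per_min=120, minutes=90)
print(f"max latency with backlog: {max(backlog)} min")   # grows over the 90-minute window
print(f"max latency when healthy: {max(healthy)} min")   # stays at 0
```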
We understand that customers rely on Activity Log Alerts as a critical service and apologize for any impact this incident caused.

-Jack Cantwell

2 Comments
Occasional Contributor

Could this also impact "Sign-ins"? We have been experiencing throttling errors for two days now when checking the sign-in logs in Azure Active Directory.

Microsoft

Hi Bart,

This issue relates to Activity Log Alerts, not to AAD.

Please open a ticket if you still encounter issues.