Final Update: Friday, 19 April 2019 22:15 UTC

We've confirmed that all systems are back to normal with no customer impact as of 4/19, 21:55 UTC. Our logs show the incident started on 4/19, 19:30 UTC and that at the peak, up to 8% of queries against workspaces in East US would have experienced failures.
  • Root Cause: The failures were caused by a backend service entering an unhealthy state; the service was restarted to restore normal operation.
  • Incident Timeline: 2 hours & 25 minutes - 4/19, 19:30 UTC through 4/19, 21:55 UTC
We understand that customers rely on Azure Monitor services and apologize for any impact this incident caused.

-Matthew Cosner

Initial Update: Friday, 19 April 2019 21:03 UTC

We are aware of issues querying data through Log Analytics workspaces in East US starting around 19:30 UTC and are actively investigating. Approximately 8% of queries in this region are impacted.
  • Workaround: None.
  • Next Update: Before 4/19, 23:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Matthew Cosner