Experiencing Data Access Issue in Azure portal for Log Analytics in West US2 - 09/13 - Resolved
Final Update: Friday, 13 September 2019 04:39 UTC

We've confirmed that all systems are back to normal with no customer impact as of 09/13, 03:35 UTC. Our logs show the incident started on 09/13, 01:46 UTC, and that during the 1 hour and 49 minutes it took to resolve the issue, some customers in the West US2 region may have experienced log processing delays with OMS workspaces hosted in this region.
  • Root Cause: The failure was due to a performance issue with one of our backend services.
  • Incident Timeline: 1 hour & 49 minutes - 09/13, 01:46 UTC through 09/13, 03:35 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Mohini

Initial Update: Friday, 13 September 2019 02:31 UTC

We are aware of a data latency issue in the West US2 region with Azure Log Analytics and are actively investigating. Some customers may experience delays in accessing their data and may receive delayed alerts.
  • Work Around: None
  • Next Update: Before 09/13 07:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Subhash