Experiencing failures querying and ingesting data for Azure Monitor Service - 02/03 - Resolved
Label: Azure Log Analytics
Final Update: Wednesday, 03 February 2021 11:30 UTC

We've confirmed that all systems are back to normal with no customer impact as of 02/03, 11:10 UTC. Our logs show the incident started on 02/03, 08:45 UTC, and that during the 2 hours and 25 minutes it took to resolve the issue, some customers using the Azure Monitor service in West Europe and South East Australia may have experienced difficulties ingesting and querying resource data, as well as data ingestion latency. Additionally, customers may also have experienced false positive and/or missed alerts in these regions. Service management operations on workspaces may also have failed; retries may have been successful.
  • Root Cause: The failure was caused by a faulty deployment in one of our dependent services.
  • Incident Timeline: 2 Hours & 25 minutes - 02/03, 08:45 UTC through 02/03, 11:10 UTC
We understand that customers rely on the Azure Monitor service as a critical service and apologize for any impact this incident caused.

-Mohini