Service Map
Experiencing Issues for Azure Monitor services in West Europe - 11/07 - Resolved
Final Update: Thursday, 07 November 2019 12:29 UTC
We've confirmed that all systems are back to normal with no customer impact as of 11/07, 12:50 UTC. Our logs show the incident started on 11/06, 21:10 UTC and that during the 15 hours and 40 minutes that it took to resolve the issue, some customers using Azure Monitor services in West Europe may have experienced errors while querying and/or ingesting data, along with alert failures or latent alerts. Customers using Service Map in West Europe may have also seen ingestion delays and latency.
Root Cause: The failure was due to an Azure Storage outage in the West Europe region.
Incident Timeline: 15 Hours & 40 minutes - 11/06, 21:10 UTC through 11/07, 12:50 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Mohini

Update: Thursday, 07 November 2019 08:20 UTC
We continue to investigate issues within Azure Monitor services. This issue started on 11/06/2019 at 21:10 UTC and is caused by a dependent storage system. Our team has been investigating this with the partner Azure Storage team, but we have not yet identified a root cause. Customers using Azure Monitor services in West Europe may experience errors while querying and/or ingesting data, along with alert failures or latent alerts. Customers using Service Map in West Europe may also experience ingestion delays and latency. We will provide an update as we learn more.
Work Around: None
Next Update: Before 11/07 12:30 UTC
-Mohini

Update: Thursday, 07 November 2019 04:55 UTC
We continue to investigate issues within Azure Monitor services. This issue started on 11/06/2019 at 21:10 UTC and is caused by a dependent storage system. Our team has been investigating this with the partner Azure Storage team, but we have not yet identified a root cause. Customers using Azure Monitor services in West Europe may experience errors while querying and/or ingesting data, along with alert failures or latent alerts. Customers using Service Map in West Europe may also experience ingestion delays and latency. We will provide an update as we learn more.
Work Around: None
Next Update: Before 11/07 08:00 UTC
-Mohini

Initial Update: Thursday, 07 November 2019 00:49 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers may experience data access issues for Log Analytics, as well as Log alerts not being triggered as expected, in the West Europe region.
Work Around: None
Next Update: Before 11/07 03:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Sindhu
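
For context on the query and ingestion impact described above, one way a customer could check whether ingestion has caught up for a workspace is to compare each record's ingestion time with its generation time. The sketch below is illustrative only: it assumes the azure-monitor-query Python SDK, a placeholder workspace ID, and a Heartbeat-based lag estimate, none of which come from the incident posts themselves.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# Placeholder: replace with your Log Analytics workspace (customer) ID.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

# Estimate ingestion lag from agent heartbeats: the gap between the time a
# record was generated and the time it became available for querying.
QUERY = """
Heartbeat
| where TimeGenerated > ago(1h)
| extend LagSeconds = datetime_diff('second', ingestion_time(), TimeGenerated)
| summarize avg(LagSeconds), max(LagSeconds) by bin(TimeGenerated, 5m)
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
else:
    # Partial results land here; hard query failures raise an exception instead.
    print(f"Query did not complete successfully (status: {response.status})")
```

A sustained gap of more than a few minutes between the two timestamps would be consistent with the ingestion latency described in these updates.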
Experiencing Data Access Issue in Azure portal for Service Map - 11/22 - Resolved

Final Update: Friday, 22 November 2019 17:15 UTC
We've confirmed that all systems are back to normal as of 11/22, 16:58 UTC. Our logs show the incident started on 11/22, 08:23 UTC and that during the 8 hours and 35 minutes that it took to resolve the issue, 18% of customers in the East US region might have encountered error messages when trying to use virtual machine Service Map data in the Azure portal.
Root Cause: The failure was due to an issue in one of our backend services.
Incident Timeline: 8 Hours & 35 minutes - 11/22, 08:23 UTC through 11/22, 16:58 UTC
We understand that customers rely on Service Map as a critical service and apologize for any impact this incident caused.
-Sindhu

Update: Friday, 22 November 2019 16:32 UTC
Root cause has been isolated to an unhealthy resource which was impacting Service Map data access in East US. The issue is currently being mitigated and is expected to be fully mitigated within the next 3 hours.
Work Around: None
Next Update: Before 11/22 19:00 UTC
-Ian

Update: Friday, 22 November 2019 13:06 UTC
We continue to investigate issues within Service Map. Some customers continue to experience data access issues in the East US region. We are working to establish the start time for the issue; initial findings indicate that the problem began at approximately 08:23 UTC on 11/22.
Work Around: None
Next Update: Before 11/22 17:30 UTC
-Naresh

Initial Update: Friday, 22 November 2019 09:56 UTC
We are aware of issues within Service Map and are actively investigating. Some customers may experience data access issues in the East US region.
Work Around: None
Next Update: Before 11/22 13:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Naresh
Experiencing Data Access Issue in Azure portal in multiple regions - 04/22 - Resolved

Final Update: Wednesday, 22 April 2020 13:51 UTC
We've confirmed that all systems are back to normal with no customer impact as of 4/22, 13:45 UTC. Our logs show the incident started on 4/22, 09:55 UTC and that during the 3 hours and 50 minutes that it took to resolve the issue, customers using Log Analytics, Log Search Alerts and/or Metric Alerts may have experienced data latency, delayed or missed log search alerts, issues performing CRUD operations against Log Analytics workspaces, and query failures in the East US 2, North Europe, Australia East, West US, Central US, South Central US and North Central US regions.
Root Cause: The failure was due to an issue in one of our dependent services.
Incident Timeline: 3 Hours & 50 minutes - 4/22, 09:55 UTC through 4/22, 13:45 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Madhuri

Initial Update: Wednesday, 22 April 2020 12:13 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers may experience data latency, delayed or missed log search alerts, issues performing CRUD operations against Log Analytics workspaces, and query failures in multiple regions.
Work Around: None
Next Update: Before 04/22 14:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Madhuri
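
As a rough illustration of the "CRUD operations against Log Analytics workspaces" affected above, the read below uses the azure-mgmt-loganalytics Python SDK; the subscription ID, resource group, and workspace name are placeholders, and this is only a sketch of the kind of management-plane call that could fail during such an incident.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

# Placeholders: replace with your own subscription, resource group, and workspace.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
WORKSPACE_NAME = "my-workspace"

client = LogAnalyticsManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# A simple management-plane read (the "R" in CRUD) against the workspace.
workspace = client.workspaces.get(RESOURCE_GROUP, WORKSPACE_NAME)
print(workspace.name, workspace.location, workspace.provisioning_state)
```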
Experiencing issues in Azure Portal for Many Data Types in SUK - 09/14 - Resolved

Final Update: Tuesday, 15 September 2020 01:42 UTC
We've confirmed that all systems are back to normal with no customer impact as of 9/15, 00:41 UTC. Our logs show the incident started on 9/14, 13:54 UTC and that during the 10 hours and 47 minutes that it took to resolve the issue, customers experienced data loss and data latency which may have resulted in false and missed alerts.
Root Cause: The failure was due to a cooling failure at our data center that resulted in shutting down portions of the data center.
Incident Timeline: 10 Hours & 47 minutes - 9/14, 13:54 UTC through 9/15, 00:41 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
-Ian

Update: Tuesday, 15 September 2020 01:19 UTC
Root cause has been isolated to cooling failures and subsequent shutdowns in our data center, which were impacting storage and our ability to access and insert data. Our infrastructure has been brought back online, and we are making progress with bringing the final storage devices back online. Customers should start to see signs of recovery soon.
Work Around: None
Next Update: Before 09/15 05:30 UTC
-Ian

Update: Monday, 14 September 2020 20:14 UTC
Starting at approximately 14:00 UTC on 14 Sep 2020, a single zone in UK South experienced a cooling failure. Storage, Networking and Compute resources were shut down as part of our automated processes to preserve the equipment and prevent damage. As a result, the Azure Monitoring services have experienced missed or latent data, which is causing false and missed alerts. Mitigation for the cooling failure is currently in progress. An estimated time for resolution of this issue is still unknown. We apologize for the inconvenience.
Work Around: None
Next Update: Before 09/15 00:30 UTC
-Ian

Update: Monday, 14 September 2020 16:28 UTC
We continue to investigate issues within Azure Monitoring services. The root cause is related to an ongoing storage account issue. Some customers continue to experience missed or latent data, which is causing false and missed alerts. We are working to establish the start time for the issue; initial findings indicate that the problem began at 9/14, 13:35 UTC. We currently have no estimate for resolution.
Work Around: None
Next Update: Before 09/14 19:30 UTC
-Ian

Initial Update: Monday, 14 September 2020 14:44 UTC
We are aware of issues within Azure Monitoring services and are actively investigating. There is a storage outage event in UK South that has caused multiple services to be impacted.
Work Around: None
Next Update: Before 09/14 19:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Mohini
Experiencing Data Access issue in Azure Portal for Many Data Types - 11/12 - Resolved

Final Update: Thursday, 12 November 2020 02:14 UTC
We've confirmed that all systems are back to normal with no customer impact as of 11/12, 01:20 UTC. Our logs show that the incident started on 11/11, 23:50 UTC and that during the 1 hour and 30 minutes that it took to resolve the issue, some customers experienced missed or delayed Log Search Alerts or difficulties accessing data for resources hosted in West US 2 and North Europe.
Root Cause: The failure was due to an issue in one of our backend services.
Incident Timeline: 1 Hour & 30 minutes - 11/11, 23:50 UTC through 11/12, 01:20 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Saika

Initial Update: Thursday, 12 November 2020 01:35 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers may experience missed, delayed or wrongly fired alerts, or difficulties accessing data for resources hosted in West US 2 and North Europe.
Work Around: None
Next Update: Before 11/12 06:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Saika
Experiencing Latency, Data Loss issue, and alerting issues in West Central US - 03/15 - Resolved

Final Update: Sunday, 15 March 2020 22:43 UTC
We've confirmed that all systems are back to normal with no customer impact as of 3/15, 22:25 UTC. Our logs show the incident started on 3/15, 20:14 UTC and that during the 2 hours and 11 minutes that it took to resolve the issue, customers experienced latent ingestion of data and misfiring log search alerts.
Root Cause: The failure was due to a utility problem at the West Central US data center.
Incident Timeline: 2 Hours & 11 minutes - 3/15, 20:14 UTC through 3/15, 22:25 UTC
We understand that customers rely on Application Insights and Azure Log Analytics as critical services and apologize for any impact this incident caused.
-Jeff

Update: Sunday, 15 March 2020 22:24 UTC
Root cause has been isolated to a utility issue in the West Central US data center which was impacting communications; Azure teams have resolved it. Some customers may still see data being ingested with a small amount of latency, and possibly misfiring alerts, until ingestion catches up.
Next Update: Before 03/16 00:30 UTC
-Jeff

Initial Update: Sunday, 15 March 2020 21:40 UTC
We are aware of issues within Application Insights and Azure Log Analytics and are actively investigating. Some customers may experience latency and data loss, configuration failures for alerts, misfired alerts, and other unexpected behaviors.
Next Update: Before 03/16 00:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Jeff
Experiencing Data Access issue in Azure Portal - 12/10 - Resolved

Final Update: Thursday, 10 December 2020 22:29 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/10, 21:45 UTC. Our logs show the incident started on 12/10, 20:45 UTC and that during the 1 hour that it took to resolve the issue, some customers in East US, East US 2, South Central US, West Central US, Canada Central and South Africa North experienced data access issues and missed or delayed log search alerts.
Root Cause: The failure was due to a recent deployment in one of the backend services.
Incident Timeline: 1 Hour - 12/10, 20:45 UTC through 12/10, 21:45 UTC
We understand that customers rely on Azure Monitor as a critical service and apologize for any impact this incident caused.
-Anupama

Initial Update: Thursday, 10 December 2020 21:42 UTC
We are aware of issues within Azure Monitor services and are actively investigating. Some customers in multiple regions may experience data access issues and delayed or missed Log Search Alerts.
Work Around: None
Next Update: Before 12/11 00:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Anupama
Experiencing Latency and Data Loss issue in Azure Portal for Many Data Types - 11/28 - Resolved

Final Update: Saturday, 28 November 2020 15:29 UTC
We've confirmed that all systems are back to normal with no customer impact as of 11/28, 14:07 UTC. Our logs show the incident started on 11/27, 22:00 UTC and that during the 16 hours and 7 minutes that it took to resolve the issue, some customers may have experienced delayed or missed Log Search Alerts, latency and data loss in the South Africa North region.
Root Cause: The issue was due to a power outage in the South Africa North region data centers.
Incident Timeline: 16 Hours & 7 minutes - 11/27, 22:00 UTC through 11/28, 14:07 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
-Vyom

Initial Update: Saturday, 28 November 2020 05:02 UTC
We are aware of issues within Application Insights and are actively investigating. Due to a power outage in the data center, some customers may experience delayed or missed Log Search Alerts, latency and data loss in the South Africa North region.
Work Around: None
Next Update: Before 11/28 17:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Vyom
Experiencing Data Access Issue in Azure portal for Service Map - 02/23 - Resolved

Final Update: Thursday, 25 February 2021 19:55 UTC
We've confirmed that all systems are back to normal with no customer impact as of 2/25, 18:40 UTC. Our logs show the incident started on 2/15 and that during the roughly 10 days that it took to resolve the issue, some customers experienced issues seeing performance data in Azure Monitor when Azure mode is selected, as well as 5xx errors while viewing performance data in the browser.
Root Cause: The failure was due to one of the backend services turning unhealthy after a deployment.
Incident Timeline: 2/15 through 2/25, 18:40 UTC
We understand that customers rely on Azure Monitor for VMs as a critical service and apologize for any impact this incident caused.
-Anupama

Update: Thursday, 25 February 2021 04:22 UTC
Root cause has been isolated to a service turning unhealthy after an incorrect deployment, which was impacting the performance data view in Azure Monitor when Azure mode is selected. To address this issue, engineers are rolling the service back to its previous healthy state. Some customers who have VMs sending data to workspaces continue to experience issues seeing performance data in Azure Monitor when Azure mode is selected, and 5xx errors while viewing performance data in the browser.
Work Around: None
Next Update: Before 02/26 04:30 UTC
-Anupama

Update: Thursday, 25 February 2021 01:28 UTC
Root cause has been isolated to a service turning unhealthy after an incorrect deployment, which was impacting the performance data view in Azure Monitor when Azure mode is selected. To address this issue, engineers are rolling the service back to its previous healthy state. Some customers who have VMs sending data to workspaces continue to experience issues seeing performance data in Azure Monitor when Azure mode is selected, and 5xx errors while viewing performance data in the browser.
Work Around: None
Next Update: Before 02/25 05:30 UTC
-Anupama

Update: Thursday, 25 February 2021 00:01 UTC
Root cause has been isolated to a service turning unhealthy after an incorrect deployment, which was impacting the performance data view in Azure Monitor when Azure mode is selected. To address this issue, we are rolling out a fix to the service. Some customers who have VMs sending data to workspaces continue to experience issues seeing performance data in Azure Monitor when Azure mode is selected, and 5xx errors while viewing performance data in the browser.
Work Around: None
Next Update: Before 02/25 02:30 UTC
-Anupama

Update: Wednesday, 24 February 2021 22:01 UTC
We continue to investigate issues within Azure Monitor for VMs. Preliminary investigation showed that the root cause was one of the backend components becoming inaccessible after a scheduled drill/test in the EastUS2EUAP region. Further investigation shows that the root cause is a regression introduced by a deployment to one of the backend services. Some customers who have VMs sending data to workspaces continue to experience issues seeing performance data in Azure Monitor when Azure mode is selected. Customers might also see 5xx errors while viewing performance data in the portal.
Work Around: None
Next Update: Before 02/25 00:30 UTC
-Anupama

Update: Wednesday, 24 February 2021 12:02 UTC
We continue to investigate issues within Service Map. The root cause is one of the backend components becoming inaccessible after a scheduled drill/test in the EastUS2EUAP region. Some customers who have VMs sending data to EastUS2EUAP workspaces continue to experience issues seeing performance data in Azure Monitor when Azure mode is selected.
Work Around: None
Next Update: Before 02/25 00:30 UTC
-Harshita

Update: Wednesday, 24 February 2021 01:32 UTC
We continue to investigate issues within Service Map. The root cause is one of the backend components becoming inaccessible after a scheduled drill/test in the EastUS2EUAP region. Some customers who have VMs sending data to EastUS2EUAP workspaces continue to experience issues seeing performance data in Azure Monitor when Azure mode is selected.
Work Around: None
Next Update: Before 02/24 14:00 UTC
-Anupama

Update: Tuesday, 23 February 2021 21:22 UTC
We continue to investigate issues within Service Map. The root cause is one of the backend components becoming inaccessible after a scheduled drill/test in the EastUS2EUAP region. Some customers who have VMs sending data to EastUS2EUAP workspaces continue to experience issues seeing performance data in Azure Monitor when Azure mode is selected. The issue started at 16:00 UTC.
Work Around: None
Next Update: Before 02/23 23:30 UTC
-Anupama
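
For readers unfamiliar with the data behind the affected view, the VM insights performance charts are typically backed by the InsightsMetrics table in the Log Analytics workspace. The sketch below is an illustrative assumption, not part of these updates: it uses the azure-monitor-query Python SDK with a placeholder workspace ID to read that table directly, which is one way the underlying data could be inspected while the portal view is returning 5xx errors.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# Placeholder: the workspace the affected VMs report VM insights data to.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

# VM insights CPU utilization, summarized per machine over the last hour.
QUERY = """
InsightsMetrics
| where TimeGenerated > ago(1h)
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AvgCpuPercent = avg(Val) by Computer
| order by AvgCpuPercent desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for computer, avg_cpu in table.rows:
            print(f"{computer}: {avg_cpu:.1f}% avg CPU")
```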