Azure Log Analytics
Experiencing Data Latency Issue in Azure portal for Azure Monitor - 12/12 - Resolved
Final Update: Thursday, 12 December 2019 08:47 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/12, 05:43 UTC. Our logs show the incident started on 12/12, 02:30 UTC and that during the 3 hours and 13 minutes it took to resolve the issue, some customers in the South East Asia region may have experienced data access issues in the Azure portal, failed API queries against Log Analytics, alert failures, and latency in alerting for Log Search Alerts.
Root Cause: The failure was due to an issue with one of our dependent services.
Incident Timeline: 3 Hours & 13 minutes - 12/12, 02:30 UTC through 12/12, 05:43 UTC
We understand that customers rely on Log Analytics and Log Search Alerts as critical services and apologize for any impact this incident caused.
-Monish

Update: Thursday, 12 December 2019 08:11 UTC
Root cause has been isolated to a failure in a dependent service that was impacting the Log Analytics and Log Search Alerts clusters hosted in the South East Asia region. Some customers may experience data access issues and might not receive expected alerts or may experience increased latency in receiving these alerts.
Work Around: None
Next Update: Before 12/12 13:30 UTC
-Anusha

Initial Update: Thursday, 12 December 2019 05:06 UTC
We are aware of issues within Log Analytics and Log Search Alerts and are actively investigating. Some customers in the South East Asia region may experience data access issues and might not receive expected alerts or may experience increased latency in receiving these alerts.
Work Around: None
Next Update: Before 12/12 08:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Anusha
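For incidents like this one, a way to check after the fact whether your workspace's data was merely delayed is to compare each record's ingestion time with its generation time. A minimal KQL sketch, assuming the standard Heartbeat table and the built-in ingestion_time() function, with this incident's window substituted in:

```kusto
// Approximate ingestion latency for heartbeat records around the incident
// window (12/12 02:30-05:43 UTC for this incident; adjust as needed).
Heartbeat
| where TimeGenerated between (datetime(2019-12-12 02:30) .. datetime(2019-12-12 05:43))
| extend IngestionLatency = ingestion_time() - TimeGenerated
| summarize AvgLatency = avg(IngestionLatency), MaxLatency = max(IngestionLatency)
    by bin(TimeGenerated, 5m)
| order by TimeGenerated asc
```

Sustained latency well beyond a few minutes during the window would corroborate delayed ingestion rather than data loss.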
Experiencing Data Access Issue in Azure portal for Log Analytics - 12/10 - Resolved

Final Update: Tuesday, 10 December 2019 10:38 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/10, 09:35 UTC. Our logs show the incident started on 12/10, 09:10 UTC and that during the 25 minutes it took to resolve the issue, some customers in the Australia South East region may have experienced data access issues in the Azure portal and query failures.
Root Cause: The failure was due to an issue with one of our backend services.
Incident Timeline: 25 minutes - 12/10, 09:10 UTC through 12/10, 09:35 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Anusha
Experiencing Data Access Issue in Azure portal for Log Analytics - 12/12 - Resolved

Final Update: Thursday, 12 December 2019 08:57 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/12, 08:30 UTC. Our logs show the incident started on 12/12, 07:55 UTC and that during the 35 minutes it took to resolve the issue, 39% of customers in the Australia South East region experienced data access issues in the Azure portal, and API queries may have failed.
Root Cause: The failure was due to an issue with one of our dependent services.
Incident Timeline: 35 minutes - 12/12, 07:55 UTC through 12/12, 08:30 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Monish
Experiencing Alerting failure for Metric Alerts in Log Analytics Workspaces - 12/09 - Resolved

Final Update: Monday, 09 December 2019 18:43 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/09, 18:34 UTC. Our logs show the incident started on 12/09, 17:27 UTC and that during the 1 hour and 7 minutes it took to resolve the issue, customers could have experienced missing alerts in West Europe.
Root Cause: The failure was due to a bad configuration.
Incident Timeline: 1 Hour & 7 minutes - 12/09, 17:27 UTC through 12/09, 18:34 UTC
We understand that customers rely on Metric Alerts as a critical service and apologize for any impact this incident caused.
-Eric Singleton

Update: Monday, 09 December 2019 18:08 UTC
Root cause has been isolated to a bad configuration that was impacting alerting. To address this issue we scaled out the deployment. Some customers may experience missing alerts in the West Europe region.
Work Around: None
Next Update: Before 12/09 22:30 UTC
-Eric Singleton
Experiencing Data Access Issue in Azure portal for Log Analytics - 12/12 - Resolved

Final Update: Thursday, 12 December 2019 13:10 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/12, 12:52 UTC. Our logs show the incident started on 12/12, 09:10 UTC and that during the 3 hours and 42 minutes it took to resolve the issue, some customers may have experienced higher than expected latency or failures for metric alerts in the East US region.
Root Cause: The failure was due to a bad deployment in one of our backend services.
Incident Timeline: 3 Hours & 42 minutes - 12/12, 09:10 UTC through 12/12, 12:52 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Anusha

Update: Thursday, 12 December 2019 10:32 UTC
We continue to investigate issues within Log Analytics. Root cause is not fully understood at this time. Some customers may have experienced higher than expected latency or failures regarding metric alerts in the East US region. Initial findings indicate that the problem began at 12/12 09:10 UTC. We currently have no estimate for resolution.
Work Around: None
Next Update: Before 12/12 16:00 UTC
-Anusha
Experiencing Issues for Azure Monitor services in West Europe - 11/07 - Resolved

Final Update: Thursday, 07 November 2019 12:29 UTC
We've confirmed that all systems are back to normal with no customer impact as of 11/07, 12:50 UTC. Our logs show the incident started on 11/06, 21:10 UTC and that during the 15 hours and 40 minutes it took to resolve the issue, some customers using Azure Monitor services in West Europe may have experienced errors while querying and/or ingesting data, along with alert failures or latent alerts. Customers using Service Map in West Europe may have also seen ingestion delays and latency.
Root Cause: The failure was due to an Azure Storage outage in the West Europe region.
Incident Timeline: 15 Hours & 40 minutes - 11/06, 21:10 UTC through 11/07, 12:50 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Mohini

Update: Thursday, 07 November 2019 08:20 UTC
We continue to investigate issues within Azure Monitor services. This issue started at 11/06/2019 21:10 UTC and is caused by a dependent storage system. Our team has been investigating this with the partner Azure Storage team, but we do not have a root cause identified yet. Customers using Azure Monitor services in West Europe may experience errors while querying and/or ingesting data, along with alert failures or latent alerts. Customers using Service Map in West Europe may also experience ingestion delays and latency. We will provide updates as we learn more.
Work Around: None
Next Update: Before 11/07 12:30 UTC
-Mohini

Update: Thursday, 07 November 2019 04:55 UTC
We continue to investigate issues within Azure Monitor services. This issue started at 11/06/2019 21:10 UTC and is caused by a dependent storage system. Our team has been investigating this with the partner Azure Storage team, but we do not have a root cause identified yet. Customers using Azure Monitor services in West Europe may experience errors while querying and/or ingesting data, along with alert failures or latent alerts. Customers using Service Map in West Europe may also experience ingestion delays and latency. We will provide updates as we learn more.
Work Around: None
Next Update: Before 11/07 08:00 UTC
-Mohini

Initial Update: Thursday, 07 November 2019 00:49 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers may experience data access issues for Log Analytics and also issues with Log alerts not being triggered as expected in the West Europe region.
Work Around: None
Next Update: Before 11/07 03:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Sindhu
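After an outage like this, you can check whether the service recorded workspace-level operational warnings during the window. A hedged KQL sketch against the standard Operation table; the column names below follow its documented schema, but verify them against your own workspace before relying on the query:

```kusto
// Surface operational warnings/errors the service logged for this workspace
// during the outage window (11/06 21:10 - 11/07 12:50 UTC).
Operation
| where TimeGenerated between (datetime(2019-11-06 21:10) .. datetime(2019-11-07 12:50))
| where OperationStatus in ("Warning", "Error")
| project TimeGenerated, OperationCategory, OperationStatus, Detail
| order by TimeGenerated asc
```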
Experiencing Data Latency issue in Azure Portal for Many Data Types - 09/03 - Resolved

Final Update: Thursday, 03 September 2020 22:36 UTC
We've confirmed that all systems are back to normal with no customer impact as of 09/03, 21:56 UTC. Our logs show the incident started on 09/03, 09:30 UTC and that during the 12 hours and 26 minutes it took to resolve the issue, some customers might have experienced intermittent data latency, data gaps, and incorrect alert activation in the West Europe region.
Root Cause: The issue was due to a hardware failure in one of the region's data centers affecting the network.
Incident Timeline: 12 Hours & 26 minutes - 09/03, 09:30 UTC through 09/03, 21:56 UTC
We understand that customers rely on Application Insights and Azure Log Analytics as critical services and apologize for any impact this incident caused.
-Saika

Update: Thursday, 03 September 2020 20:00 UTC
Root cause has been isolated to a hardware failure in one of the region's data centers affecting the network, and systems are still catching up to get back to normal. Some customers may still continue to experience intermittent data latency, data gaps, and incorrect alert activation in the West Europe region.
Next Update: Before 09/04 00:00 UTC
-Saika

Update: Thursday, 03 September 2020 15:30 UTC
We continue to investigate issues within Application Insights and Azure Log Analytics. Some customers continue to experience intermittent data latency, data gaps, and incorrect alert activation in the West Europe region. Initial findings suggest that the issue began at 09/03 ~09:30 UTC.
Work Around: None
Next Update: Before 09/03 20:30 UTC
-Sandeep

Update: Thursday, 03 September 2020 12:38 UTC
We continue to investigate issues within Application Insights and Azure Log Analytics in the West Europe region. Root cause is not fully understood at this time. Some customers continue to experience intermittent data latency, data gaps, and incorrect alert activation. We are working to establish the start time for the issue; initial findings indicate that the problem began at 09/03 ~09:30 UTC. We currently have no estimate for resolution.
Work Around: None
Next Update: Before 09/03 15:00 UTC
-Rama

Initial Update: Thursday, 03 September 2020 10:34 UTC
We are aware of issues with data latency within Application Insights and Log Analytics in the West Europe region and are actively investigating. Some customers may experience intermittent data latency, data gaps, and incorrect alert activation.
Work Around: None
Next Update: Before 09/03 14:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Sandeep
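To distinguish a true data gap from delayed ingestion after an incident like this, a simple check is whether agents' heartbeats stopped arriving. A minimal KQL sketch using the standard Heartbeat table; the 15-minute threshold is an arbitrary choice for illustration:

```kusto
// List agents whose most recent heartbeat is older than 15 minutes.
// Persistent absences during the impact window suggest a real gap;
// records that arrive later (but exist) point to delayed ingestion instead.
Heartbeat
| where TimeGenerated > ago(1d)
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)
| order by LastHeartbeat asc
```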
Experiencing Data Access Issues for Log Analytics in West Europe - 11/15 - Resolved

Final Update: Friday, 15 November 2019 16:49 UTC
We've confirmed that all systems are back to normal with no customer impact as of 11/15, 15:56 UTC. Our logs show the incident started on 11/15, 14:20 UTC and that during the 1 hour and 36 minutes it took to resolve the issue, a subset of customers in West Europe might have experienced data access issues in Log Analytics and latency in Log Search Alerts.
Root Cause: The failure was due to issues in one of our dependent services.
Incident Timeline: 1 Hour & 36 minutes - 11/15, 14:20 UTC through 11/15, 15:56 UTC
We understand that customers rely on Log Search Alerts as a critical service and apologize for any impact this incident caused.
-Venkat

Initial Update: Friday, 15 November 2019 15:53 UTC
We are aware of issues within Log Search Alerts and are actively investigating. Some customers may experience alerting failures in the West Europe region.
Work Around: None
Next Update: Before 11/15 20:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Madhuri
Experiencing Latency, Data Gap and Alerting failure for Azure Monitoring - 07/18 - Resolved

Final Update: Saturday, 18 July 2020 15:37 UTC
We've confirmed that all systems are back to normal with no customer impact as of 07/18, 11:40 UTC. Our logs show the incident started on 07/18, 07:50 UTC and that during the 3 hours and 50 minutes it took to resolve the issue, some customers may have experienced data access issues, data latency, data loss, incorrect alert activation, and missed or delayed alerts in multiple regions; Azure alerts created during the impact window may have been viewable in the Azure portal only with some delay.
Root Cause: The failure was due to an issue in one of our dependent services.
Incident Timeline: 3 Hours & 50 minutes - 07/18, 07:50 UTC through 07/18, 11:40 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
-Anmol

Update: Saturday, 18 July 2020 11:17 UTC
We continue to investigate issues within Azure monitoring services. Some customers continue to experience data access issues, data latency, data loss, incorrect alert activation, and missed or delayed alerts in multiple regions; Azure alerts created during the impact window may not be viewable in the Azure portal. We are working to establish the start time for the issue; initial findings indicate that the problem began at 07/18 ~07:58 UTC. We currently have no estimate for resolution.
Work Around: None
Next Update: Before 07/18 14:30 UTC
-Anmol

Initial Update: Saturday, 18 July 2020 08:58 UTC
We are aware of issues within Application Insights and Log Analytics and are actively investigating. Some customers may experience data access issues in the Azure portal, incorrect alert activation, latency, and data loss in multiple regions.
Work Around: None
Next Update: Before 07/18 11:00 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Madhuri
[FairFax]-Data Ingestion and Alert Notification Issues in Azure portal - 05/24 - Mitigated

Final Update: Saturday, 25 May 2019 02:49 UTC
We've confirmed that all systems are back to normal with no customer impact as of 05/25, 02:35 UTC. Our logs show the incident started on 05/24, 20:50 UTC and that during the 5 hours and 45 minutes it took to resolve the issue, approximately 31% of customers in FairFax (USGov) experienced delays in data ingestion, and some customers experienced issues with alert notifications configured based on webhooks in FairFax (USGov).
Root Cause: The failure was due to an issue with the storage service.
Incident Timeline: 5 Hours & 45 minutes - 05/24, 20:50 UTC through 05/25, 02:35 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Venkat

Update: Saturday, 25 May 2019 01:41 UTC
We continue to investigate issues within Log Analytics and alert notification. Root cause has been isolated to a storage issue. Some Log Analytics customers continue to experience delays in data ingestion and issues with alert notifications configured based on webhooks in FairFax (USGov). We are working to establish the start time for the issue; initial findings indicate that the problem began at 05/24 ~20:50 UTC. We currently have no estimate for resolution.
Work Around: None
Next Update: Before 05/25 06:00 UTC
-Venkat

Initial Update: Friday, 24 May 2019 22:08 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers may experience data ingestion delays in the FairFax (USGov) region.
Work Around: None
Next Update: Before 05/25 02:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Venkat
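During ingestion delays like this one, per-table Usage records can show a dip in ingested volume followed by a catch-up spike. A minimal KQL sketch, assuming the standard Usage table (where Quantity is reported in MB); the time window below is this incident's timeline padded on both sides:

```kusto
// Hourly ingestion volume per data type around the incident window.
// A dip during 05/24 20:50 - 05/25 02:35 UTC followed by a spike suggests
// delayed ingestion catching up rather than data loss.
Usage
| where TimeGenerated between (datetime(2019-05-24 18:00) .. datetime(2019-05-25 06:00))
| where IsBillable == true
| summarize TotalMB = sum(Quantity) by bin(TimeGenerated, 1h), DataType
| order by TimeGenerated asc, DataType asc
```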