10-03-2018 04:33 AM
We're starting our migration from SCOM to Azure Monitor and have run into an issue with Azure Alerts (sorry for posting this under Azure Log Analytics, but there is no Azure Monitor Tech Community).
I've noticed that when an Azure Alert is generated, the Monitor Condition never changes from "Fired" to "Resolved". According to the documentation, the Monitor Condition "indicates whether the condition that created a metric alert has been resolved. Metric alert rules sample a particular metric at regular intervals. If the criteria in the alert rule is met, then a new alert is created with a condition of 'fired.' When the metric is sampled again, if the criteria is still met, then nothing happens. If the criteria is not met, then the condition of the alert is changed to 'resolved.' The next time that the criteria is met, another alert is created with a condition of 'fired.'"
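The lifecycle quoted above can be sketched as a small state machine. This is a minimal illustration of the documented behavior only; the class and method names are my own, not part of any Azure SDK:

```python
# Minimal sketch of the documented metric-alert lifecycle.
# Names are illustrative, not an Azure API.

class MetricAlertRule:
    def __init__(self, threshold):
        self.threshold = threshold
        self.fired = False   # is there an open alert?
        self.history = []    # condition changes, in order

    def sample(self, value):
        met = value > self.threshold
        if met and not self.fired:
            self.fired = True
            self.history.append("fired")     # new alert created
        elif not met and self.fired:
            self.fired = False
            self.history.append("resolved")  # open alert resolved
        # criteria unchanged since last sample: nothing happens

rule = MetricAlertRule(threshold=90)
for v in [50, 95, 97, 60, 92]:
    rule.sample(v)
print(rule.history)  # ['fired', 'resolved', 'fired']
```

The problem described in this thread is that the `"resolved"` transition never seems to happen for our alerts, even when the value drops back under the threshold.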
Despite the condition no longer being met (for instance, a service that was down is back up), the Monitor Condition never changes. Am I missing something?
10-03-2018 04:49 AM
I did see this in Yammer: https://www.yammer.com/azureadvisors/#/Threads/show?threadId=1090097679
10-03-2018 06:17 AM
I read Vijay's response about query-based alerts, and I don't understand the logic. I added another question for him on that Yammer thread.
Thanks for bringing that up.
08-06-2019 05:16 AM
Did anything ever come of this? I'm seeing this behavior right now with V2 (non-classic) log-search-based alerts. I can't access the Yammer thread; any info would be appreciated.
08-08-2019 05:56 AM
I was out of the office for the last 6 months and don't know what the status is yet. While I'm trying to get an answer, you are also welcome to contact the product manager for this area: Yaniv.Lavi@microsoft.com
Thanks for bringing this up again
08-12-2019 05:51 AM
Thanks! So far the answer I have is that ownership of alerts is in transition, and we should ping again in a few weeks.
09-09-2019 02:00 AM
I wanted to share that I've contacted the alert management PMs again, and this is a high priority on their list. They want to solve the issue holistically for all alert types, as alert state management has been identified as a source of issues with other alert types as well (not only log-based).
We will keep updating when a clear timeline is available.
11-25-2019 12:57 AM
I posted a similar question on Stack Overflow and, with a little help, found the reason for my issue. I changed the aggregation from "Count" to "Total", and that resolved the alert.
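For anyone hitting the same thing, one plausible explanation for why the two aggregations behave differently: "Count" evaluates the number of rows the query returns, while "Total" sums a numeric column, so the same result set can sit on opposite sides of a threshold. A pure-Python sketch with made-up data (the rows and column names are hypothetical, not my actual query):

```python
# Hypothetical query results: one row per check interval,
# with a numeric column counting failures in that interval.
rows = [
    {"computer": "web01", "failures": 0},
    {"computer": "web02", "failures": 0},
    {"computer": "web03", "failures": 0},
]

threshold = 0  # alert when the aggregate is > 0

count_agg = len(rows)                         # "Count": number of rows
total_agg = sum(r["failures"] for r in rows)  # "Total": sum of the column

print(count_agg > threshold)  # True  -> criteria still met, alert stays fired
print(total_agg > threshold)  # False -> criteria no longer met, alert can resolve
```

With "Count", any query that keeps returning rows keeps the criteria satisfied even after the underlying problem is gone; switching to "Total" over a column that goes back to zero lets the condition stop being met.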
11-25-2019 03:55 AM - edited 11-25-2019 04:00 AM
Thanks @andersbaumann !
Great to see you were able to adjust the query and find a solution. Harel's answer is relevant only to metric alerts, and the problem with resolving log-based alerts is still ongoing.
The answer I got from Yaniv Lavi (Yaniv.Lavi@microsoft.com) is that they're hoping to fix it in the next semester, but it's not certain yet.
12-13-2019 12:56 AM
@Noa Kuperberg Hi,
What's the status of this? I am trying to configure alerts for failed Automation runbooks and I am seeing the same behavior. The alert fires only once per resource and remains "Fired". I tried both the Total and Count aggregations with the same outcome.