Alert "Monitor Condition" never changes

We're starting our journey from SCOM to Azure Monitor and have run into an issue with Azure Alerts (sorry for posting this in Azure Log Analytics, but there is no Azure Monitor Tech Community).

 

I've noticed that when an Azure Alert is generated, the Monitor Condition never changes from "Fired" to "Resolved". According to the documentation, the Monitor Condition "indicates whether the condition that created a metric alert has been resolved. Metric alert rules sample a particular metric at regular intervals. If the criteria in the alert rule is met, then a new alert is created with a condition of "fired." When the metric is sampled again, if the criteria is still met, then nothing happens. If the criteria is not met, then the condition of the alert is changed to "resolved." The next time that the criteria is met, another alert is created with a condition of "fired.""

Despite the condition no longer being met (for instance, a service that was down is back up), the Monitor Condition never changes. Am I missing something?
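
For reference, here's a tiny Python sketch of the lifecycle as I understand the documentation to describe it; the sampling loop and evaluator are purely illustrative, not any Azure API.

```python
# Toy model of the documented metric alert lifecycle: sample at regular
# intervals, fire a new alert when the criteria are first met, and resolve
# it once the criteria are no longer met.
from dataclasses import dataclass

@dataclass
class Alert:
    condition: str  # "Fired" or "Resolved"

def evaluate(samples, threshold):
    """Yield the alert state after each sample (None = no active alert yet)."""
    active = None
    for value in samples:
        criteria_met = value > threshold
        if criteria_met and active is None:
            active = Alert(condition="Fired")   # a new alert is created
            yield active
        elif not criteria_met and active is not None:
            active.condition = "Resolved"       # the existing alert is resolved
            yield active
            active = None                       # the next breach fires a new alert
        else:
            yield active                        # nothing changes

# Example: CPU samples evaluated against a threshold of 90
for state in evaluate([50, 95, 97, 60, 92], threshold=90):
    print(state)
```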

 

25 Replies
Thanks. I've raised a support case and have been told it's a bug, so I'm waiting on a fix to be deployed.

@Anthony_W Hi, I'm also facing the same issue on my project. Would you mind sharing more details about your case? I'm interested in things like the timeline and the team that responded :)

@changc009 Welcome to the club...

@Noa Kuperberg Sorry for calling you in here, but do you know if this is ever going to get solved? This is kind of a deal breaker for monitoring with your tools...
Can you also tell me who I can "call" here who owns customer care, or who works directly on the roadmap for Azure Monitor?

I see a great monitoring tool in place, but a few big gaps that aren't expected from a provider like Microsoft.

 

This is something that still happens today in Azure Monitor, at least on my subscription.

Is there any way to solve this...?

 

@loadedlouie27 

I understand this is difficult to work with... The relevant contact is yaniv.lavi@microsoft.com; I'll contact him as well.

@Noa Kuperberg Thank you for your help.
"I understand this is difficult to work with..." 

 

I woke up this morning to a total of 12,652 alerts from 7 days, mainly caused by the same 3 rules
that are "searching" for whether a process is running... and keep triggering themselves.
So "difficult" is a nice way to put it... :D

 

I had to close 6,849 alerts... from the weekend shutdown of the VMs... :)

So if I can't set up basic monitoring of a process and expect a rule to work...
it's kind of hard to "trust" and keep using Azure Monitor, which I actually like, I must say.

 

IF YOU HAVE THE SAME PROBLEM PLEASE VOTE HERE: 

https://feedback.azure.com/forums/602299/suggestions/39989395 

 

 

Hi @loadedlouie27,

This is not a bug; it is the design of log alerts, which were built to find things in logs (which you can't really resolve).

 

We are planning to provide stateful log alerts, but for now we recommend investigating metric alerts and/or metric alerts for logs to achieve stateful alerting for what you need.
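
As a rough illustration of that workaround: assuming the azure-mgmt-monitor Python SDK, something along these lines creates a metric alert rule with auto-mitigation, so the alert flips to "Resolved" on its own when the criteria stop being met. The subscription, resource group, scope, metric, and threshold below are placeholders, and exact model names can vary by SDK version.

```python
# Sketch: create a metric alert rule with auto-mitigation enabled, so the
# alert's Monitor Condition changes to "Resolved" when the condition clears.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
vm_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

rule = MetricAlertResource(
    location="global",                      # metric alert rules use the "global" location
    description="High CPU on VM",
    severity=2,
    enabled=True,
    scopes=[vm_id],
    evaluation_frequency=timedelta(minutes=1),
    window_size=timedelta(minutes=5),
    auto_mitigate=True,                     # resolve the alert automatically
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="HighCpu",
                metric_name="Percentage CPU",
                time_aggregation="Average",
                operator="GreaterThan",
                threshold=90,
            )
        ]
    ),
)

client.metric_alerts.create_or_update(resource_group, "vm-high-cpu", rule)
```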