First published on TECHNET on Jun 15, 2009
Discoveries are a critical part of management packs. Ideally, a discovery should discover objects and their properties accurately, as quickly as possible, and with the smallest possible performance hit.
In this post I shall discuss a scenario where we had the following problems:
1) The right information was not discovered (accuracy problem)
2) There was a big lag in discovering the information (freshness problem)
3) It had a performance hit on the machine (performance problem)
Let us discuss the problem in general terms without naming the MP or the discoveries that were actually affected.
For background, let us assume an MP has a class A and a class B, where class B shares the key property of class A, and both class A and class B have a property P that is discovered by two individual discoveries. Both class A and class B were supposed to discover the same value for P, but the main problem surfaced when the two classes showed different values for property P.
Upon investigation it was found that the two discoveries each used a WMI query to discover property P. During a fix, the discovery of P in class A was corrected whereas the discovery of P in class B was not, which led to the discrepancy in values.
At this stage we had two problems: the information was not accurate, and running two WMI scripts to discover the same property carried a performance hit too.
So the solution at this stage was to reuse the information discovered by the discovery targeting class A in class B. This would avoid the second WMI query and keep the information accurate, as there would be only one script to fix.
We did not completely eliminate the WMI query, as it was still needed to discover the information in the first place; since it ran only once every 24 hours, the performance hit was not that bad.
To achieve this, we replaced the discovery of class B with one that uses a datamapper module to map the property P already discovered for class A.
The code to do that looks roughly as shown below (the class and property names are placeholders for the real ones):
<ConditionDetection ID="CD" TypeID="System!System.Discovery.ClassSnapshotDataMapper">
  <ClassId>$MPElement[Name="MyMP.ClassB"]$</ClassId>
  <InstanceSettings>
    <Settings>
      <Setting>
        <Name>$MPElement[Name="MyMP.ClassB"]/P$</Name>
        <Value>$Target/Property[Type="MyMP.ClassA"]/P$</Value>
      </Setting>
    </Settings>
  </InstanceSettings>
</ConditionDetection>
During testing it was learnt that the above solution still did not solve the problem completely; looking into the log file, it was seen that the discovery packet was being dropped at times. The reason was that since we do not control the execution order of workflows on the agent, if the discovery for B runs before the discovery for A has run for the first time, the value of property P is null, which makes the management server drop the discovery packet.
Even though this is not a major problem, it is something to take care of, and the fix was to introduce the below condition detection module above the datamapper module. This module checks that property P is not null (i.e. not empty), and only then does the data mapping happen:
<ConditionDetection ID="Filter" TypeID="System!System.ExpressionFilter">
  <Expression>
    <SimpleExpression>
      <ValueExpression>
        <Value Type="String">$Target/Property[Type="MyMP.ClassA"]/P$</Value>
      </ValueExpression>
      <Operator>NotEqual</Operator>
      <ValueExpression>
        <Value Type="String"></Value>
      </ValueExpression>
    </SimpleExpression>
  </Expression>
</ConditionDetection>
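To make the wiring concrete, here is a sketch of how the pieces could be composed into a single discovery data source for class B at this stage, with a plain scheduler feeding the filter, which in turn feeds the datamapper. The discovery running this data source would target class A so the $Target references resolve; all names here (MyMP.*, the property P, the module IDs) are placeholders, not taken from the real MP:

```xml
<DataSourceModuleType ID="MyMP.ClassB.Discovery.DS" Accessibility="Internal" Batching="false">
  <Configuration>
    <IncludeSchemaTypes>
      <SchemaType>System!System.SchedulerSchema</SchemaType>
    </IncludeSchemaTypes>
    <xsd:element name="Scheduler" type="PublicSchedulerType"/>
  </Configuration>
  <ModuleImplementation Isolation="Any">
    <Composite>
      <MemberModules>
        <!-- fires the workflow on the configured schedule -->
        <DataSource ID="Trigger" TypeID="System!System.Scheduler">
          <Scheduler>$Config/Scheduler$</Scheduler>
        </DataSource>
        <!-- drops the trigger while P has not been discovered on class A yet -->
        <ConditionDetection ID="Filter" TypeID="System!System.ExpressionFilter">
          <Expression>
            <SimpleExpression>
              <ValueExpression>
                <Value Type="String">$Target/Property[Type="MyMP.ClassA"]/P$</Value>
              </ValueExpression>
              <Operator>NotEqual</Operator>
              <ValueExpression>
                <Value Type="String"></Value>
              </ValueExpression>
            </SimpleExpression>
          </Expression>
        </ConditionDetection>
        <!-- maps A's value of P onto class B, with no second WMI query -->
        <ConditionDetection ID="CD" TypeID="System!System.Discovery.ClassSnapshotDataMapper">
          <ClassId>$MPElement[Name="MyMP.ClassB"]$</ClassId>
          <InstanceSettings>
            <Settings>
              <Setting>
                <Name>$MPElement[Name="MyMP.ClassB"]/P$</Name>
                <Value>$Target/Property[Type="MyMP.ClassA"]/P$</Value>
              </Setting>
            </Settings>
          </InstanceSettings>
        </ConditionDetection>
      </MemberModules>
      <Composition>
        <Node ID="CD">
          <Node ID="Filter">
            <Node ID="Trigger"/>
          </Node>
        </Node>
      </Composition>
    </Composite>
  </ModuleImplementation>
  <OutputType>System!System.Discovery.Data</OutputType>
</DataSourceModuleType>
```

The filter sits between the trigger and the mapper, so when P is still null the trigger is swallowed and no discovery packet is submitted at all.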
With the above condition detection, we have a well-performing and accurate solution. This is good for most cases, but there is still an issue with freshness: if the discovery for B runs before the discovery for A, then in the worst case B would pick up the data 24 hours after the discovery for A has run.
There is a solution for this problem too. Not an easy one, but a solution does exist. To understand how we can achieve this, we need to understand the workings of Operations Manager: every time the configuration of a module changes, the module is reloaded and activated. So if we can make the discovery of class B reload when property P changes, it can immediately pick up the newly discovered value of property P.
The way we do this is by creating a new scheduler module which takes in a property as a configuration parameter (property P in our case), so that the discovery reloads when the property changes on discovery. The code for the module looks as shown below.
<DataSourceModuleType ID="MyReloadable.Discovery.Scheduler" Accessibility="Internal" Batching="false">
  <Configuration>
    <IncludeSchemaTypes>
      <SchemaType>System!System.SchedulerSchema</SchemaType>
    </IncludeSchemaTypes>
    <xsd:element name="Scheduler" type="PublicSchedulerType"/>
    <xsd:element name="ManagedEntityId" type="xsd:string"/>
    <!-- the property value is not consumed below; it exists only so that a change
         in the discovered value changes this module's configuration and reloads it -->
    <xsd:element name="PropertyValue" type="xsd:string"/>
  </Configuration>
  <ModuleImplementation Isolation="Any">
    <Composite>
      <MemberModules>
        <DataSource ID="Scheduler" TypeID="System!System.Scheduler">
          <Scheduler>$Config/Scheduler$</Scheduler>
        </DataSource>
      </MemberModules>
      <Composition>
        <Node ID="Scheduler"/>
      </Composition>
    </Composite>
  </ModuleImplementation>
  <OutputType>System!System.TriggerData</OutputType>
</DataSourceModuleType>
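For completeness, here is a sketch of how a discovery for class B might consume this scheduler so that a change of P on class A forces a reload. The data source type referenced here, MyMP.ClassB.Reloadable.Discovery.DS, is assumed to be a composite that wraps MyReloadable.Discovery.Scheduler (forwarding the parameters shown) in front of the filter and datamapper discussed earlier; it, along with the class and property names, is hypothetical:

```xml
<Discovery ID="MyMP.ClassB.Discovery" Enabled="true" Target="MyMP.ClassA"
           ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="MyMP.ClassB">
      <Property TypeID="MyMP.ClassB" PropertyID="P"/>
    </DiscoveryClass>
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="MyMP.ClassB.Reloadable.Discovery.DS">
    <Scheduler>
      <SimpleReccuringSchedule>
        <Interval Unit="Seconds">86400</Interval>
      </SimpleReccuringSchedule>
      <ExcludeDates/>
    </Scheduler>
    <ManagedEntityId>$Target/Id$</ManagedEntityId>
    <!-- the crucial part: because the current value of P appears in this workflow's
         configuration, a new value discovered on class A changes the configuration,
         which makes the agent reload the discovery immediately instead of waiting
         for the next scheduled run -->
    <PropertyValue>$Target/Property[Type="MyMP.ClassA"]/P$</PropertyValue>
  </DataSource>
</Discovery>
```

With this in place, B no longer lags A by up to a full schedule interval: as soon as A rediscovers P, B's discovery configuration changes, the workflow reloads, and the new value is mapped onto B.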