OpsMgr 2012 Management Pack Overview (Including Best Practices)

Understanding management packs

A management pack (MP) defines:

> what to monitor,

> what data should be collected,

> how to collect and process that data,

> visual elements

                          -> dashboards

                          -> views.


Note # The Authoring console works with management packs that use the v1.0 XML schema, while Operations Manager 2012 uses a v2.0 schema.


Terms used in different places that have the same meaning and purpose:

Console                 Authoring Tools

Target                  Class

Instance                Object

Property                Attribute


Note # An instance is a representation of a target that shares the same properties (the details) and a common means of being monitored.

Instances are discovered by targeting the parts that make up the application you want to monitor with Operations Manager.



Singleton vs. non-singleton classes

              A class represents a single type of object.

              All instances of a class share a common set of properties.

Singleton class

> automatically created (discovered)

> no discovery rule required.


There can be only one instance of a singleton class.


Example > A group. It has only one instance (the group object itself), which is created during configuration through the Create Group Wizard or automatically when a management pack is installed.

There is always

> a single instance of a given group, and

> groups are managed by the management servers from the All Management Servers Resource Pool.


A non-singleton class,

> can be managed either by agents or by management servers from any resource pool.

> can have any number of instances.


Example > The Windows Computer class. There are as many instances of this class as there are Windows-based computers to be monitored. This class is managed by agents: each agent installed on a Windows-based computer creates an instance of the Windows Computer class and manages it locally.


Note # Attributes of a class dictate how it is used. Two class types (singleton and non-singleton) dictate how class instances are discovered and whether they are managed by agents or by management servers of a certain resource pool.
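In management pack XML, the difference is a single attribute on the class definition. A minimal sketch, assuming the `Windows!` and `System!` aliases are declared in the manifest references and using hypothetical `MyApp.*` IDs:

```xml
<TypeDefinitions>
  <EntityTypes>
    <ClassTypes>
      <!-- Non-singleton: any number of instances, populated by a discovery rule -->
      <ClassType ID="MyApp.Application" Accessibility="Public" Abstract="false"
                 Base="Windows!Microsoft.Windows.LocalApplication"
                 Hosted="true" Singleton="false">
        <Property ID="Version" Type="string" />
      </ClassType>

      <!-- Singleton: exactly one instance, created automatically, no discovery rule -->
      <ClassType ID="MyApp.Application.Group" Accessibility="Public" Abstract="false"
                 Base="System!System.Group" Hosted="false" Singleton="true" />
    </ClassTypes>
  </EntityTypes>
</TypeDefinitions>
```

The group class here carries `Singleton="true"` and no discovery rule; its one instance appears as soon as the management pack is imported.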


Workflow targets

A workflow

> such as a discovery, rule, monitor, override, or task

has a certain target defined.


> The target dictates which instances a particular workflow will run on.


Example > If you create a monitor that needs to run only on computers with the Domain Controller role installed,

you select the Domain Controller role as the target for this monitor. By doing so, you ensure that this monitor runs only on domain controllers.


> The target also defines which agents the management pack containing this monitor is distributed to.

Note # Some management packs can have embedded resources, like dynamic-link libraries (DLLs) or other kinds of files, that are automatically copied to the target as well.


Best practice

> Always choose as specific a class as possible to ensure that the management pack and its workflows are downloaded only on computers where they are really needed.

Example > To monitor something that exists only on a computer running SQL Server, select the SQL Database Engine class instead of a generic class like Windows Computer.

> When you create new monitors or rules, use an existing class instead of creating a new one. This keeps the type space smaller, which is better for performance.
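As a sketch of what specific targeting looks like in management pack XML (hypothetical IDs; `SQL`, `Windows`, and `SC` are assumed manifest aliases for the SQL Server, Windows, and System Center libraries), here is an event collection rule scoped to the database engine rather than to Windows Computer:

```xml
<Rule ID="MyApp.SqlDbEngine.EventCollection" Enabled="true"
      Target="SQL!Microsoft.SQLServer.DBEngine">
  <Category>EventCollection</Category>
  <DataSources>
    <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.EventProvider">
      <ComputerName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
      <LogName>Application</LogName>
    </DataSource>
  </DataSources>
  <WriteActions>
    <WriteAction ID="WA" TypeID="SC!Microsoft.SystemCenter.CollectEvent" />
  </WriteActions>
</Rule>
```

Because the `Target` attribute names the SQL Database Engine class, the management pack containing this rule is distributed only to agents where instances of that class exist.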


Alternative scenario: to extend monitoring to an entire application that has no management pack available for download, create new classes that specifically describe the application model and how you intend to monitor its various parts. Even though fewer classes keep the instance space smaller, the number of classes you create is trivial compared to the number of workflows that run on an agent. The classes you choose also influence how parts are displayed in views, dashboards, and reports.


Authoring classes

  > When building a class model for any application, start with an initial, or base, class that needs to get discovered so that, afterwards, all the higher-level classes are discovered based on it.

This ensures that the management pack is downloaded only on the agents where that application exists and also ensures that all the workflows that belong to parts of this application run only on those agents.

Another option is to target this discovery rule at a more generic class; the initial class then acts as a seed discovery class. This ensures that the discovery rule (workflow) that runs to discover the initial class is very lightweight, which is good for performance, and, ideally, runs on a wide interval (for instance, every 24 hours).
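A seed discovery of this kind is typically a registry check targeted at a generic class. A hedged sketch using the standard `Microsoft.Windows.FilteredRegistryDiscoveryProvider` module; the registry path and all `MyApp.*` IDs are hypothetical, and the exact configuration schema should be verified against the Windows library management pack:

```xml
<Discovery ID="MyApp.Seed.Discovery" Enabled="true"
           Target="Windows!Microsoft.Windows.Server.OperatingSystem">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="MyApp.Seed" />
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
    <ComputerName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
    <RegistryAttributeDefinitions>
      <RegistryAttributeDefinition>
        <!-- Only checks that the application's registry key exists: very lightweight -->
        <AttributeName>MyAppInstalled</AttributeName>
        <Path>SOFTWARE\MyCompany\MyApp</Path>
        <PathType>0</PathType>
        <AttributeType>0</AttributeType>
      </RegistryAttributeDefinition>
    </RegistryAttributeDefinitions>
    <Frequency>86400</Frequency>  <!-- wide interval: every 24 hours -->
    <ClassId>$MPElement[Name="MyApp.Seed"]$</ClassId>
    <InstanceSettings>
      <Settings>
        <Setting>
          <Name>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Name>
          <Value>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
        </Setting>
      </Settings>
    </InstanceSettings>
    <Expression>
      <SimpleExpression>
        <ValueExpression>
          <XPathQuery Type="Boolean">Values/MyAppInstalled</XPathQuery>
        </ValueExpression>
        <Operator>Equal</Operator>
        <ValueExpression>
          <Value Type="Boolean">true</Value>
        </ValueExpression>
      </SimpleExpression>
    </Expression>
  </DataSource>
</Discovery>
```

All higher-level discoveries can then target `MyApp.Seed`, so they run only on computers where the application was actually found.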



  > When defining classes, do not use properties that can change frequently. A configuration update is a costly operation if it happens too often and can have a significant impact on performance.

Note # This scenario is called configuration churn and should be avoided.


Example > Suppose you monitor important folders of an application and discover these folders as instances of a class. Do not define folder size as a property: folder size often changes, and every time the discovery rule for this folder class runs, there will be a new value for the folder size property. This causes a re-discovery of the class (to update the properties), which in turn causes a configuration update on the agent(s) where the class is hosted.


State change events
The biggest difference between rules and monitors is that monitors:

> define a state, and

> insert a state change event in the database whenever that state changes.

Note # These entries are stored in the StateChangeEvent table in the Operational Database.

This table is used frequently in various queries that get data from the database; the larger this table is, the slower the console becomes. A monitor that changes state very often is too sensitive and most likely does not reflect the actual state of the part it monitors.


Ideally, such a monitor should be redesigned. If redesign is not possible, the monitor should be tuned. Tuning means changing the way the monitor works via the available overrides.
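As a small example of tuning via override, a monitor's configuration can be overridden in a management pack fragment like the sketch below; the monitor ID, context class, and the `Threshold` parameter name are hypothetical and depend on the monitor type being tuned:

```xml
<Overrides>
  <MonitorConfigurationOverride ID="MyApp.CpuMonitor.ThresholdOverride"
                                Context="Windows!Microsoft.Windows.Server.OperatingSystem"
                                Enforced="false"
                                Monitor="MyApp.CpuMonitor"
                                Parameter="Threshold">
    <!-- Raise the threshold so the monitor flips state less often -->
    <Value>95</Value>
  </MonitorConfigurationOverride>
</Overrides>
```

The same mechanism (created through the console's Overrides dialog) covers enabling/disabling workflows and adjusting intervals.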


Note # Even with the state change event storm feature of the management servers, which prevents a new state change from being written to the database if it is part of a storm of changes to the same monitor, state change events still impact performance. Monitors that are very sensitive and generate a lot of state change events are known as flip-flopping or noisy monitors.


Monitor initialization >

 When an agent enters maintenance mode, each of its monitors generates a state change event, changing from its current state to the Not Monitored state.

                                                        In turn, when an agent exits maintenance mode, each monitor it uses sends a state change event from the Not Monitored state to the Healthy state.

This happens each time a monitor starts working.

Note # This functionality is crucial to the calculation of the availability of each part being monitored.

However, it generates a significant number of state changes, so it is best to avoid implementing scenarios where a large number of agents are put into and pulled out of maintenance mode frequently.


A good mitigation is to reduce the Database Grooming setting for state change event data as much as possible (the default is 7 days). In fact, it is a good idea to reduce the Database Grooming settings for all data types as much as the business allows and instead rely mostly on the historical data that is available in the Data Warehouse Database through dashboards and reports.


Module cookdown

A workflow (monitor, rule, and so on) is composed of several modules. Cookdown is a feature that saves memory and CPU time by re-using an already loaded module across multiple workflow instances, instead of loading and initializing a new copy of that module, as long as the module's configuration resolves to the same values for every instance.
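A common way to break cookdown is to pass an instance-specific parameter into a shared data source module, sketched below with hypothetical IDs; each instance then gets its own copy of the module (for a script data source, its own script execution), and the savings are lost:

```xml
<!-- Cooks down: configuration is identical for every instance of the target class -->
<DataSource ID="DS1" TypeID="MyApp.ScriptDataSource">
  <IntervalSeconds>300</IntervalSeconds>
</DataSource>

<!-- Breaks cookdown: the $Target/...$ reference resolves differently per instance -->
<DataSource ID="DS2" TypeID="MyApp.ScriptDataSource">
  <IntervalSeconds>300</IntervalSeconds>
  <InstanceName>$Target/Property[Type="MyApp.Component"]/Name$</InstanceName>
</DataSource>
```

When per-instance data is unavoidable, a common pattern is to have the script return data for all instances at once and filter per instance in a later module.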


Type space
The type space is the total number of

>management packs, classes, relationships, resources, enumerations, monitor types, views, and other internal definitions that exist in the environment (the Operational Database).

>A copy of the type space is held in memory by each Data Access Service on each management server.

>Each time a new class, workflow, view, and so on is created, modified, or deleted in the console, the Data Access Service of each management server reloads the type space.

Note # The bigger the type space is, the longer it takes to reload. In large environments, this might significantly impact performance on the management servers until the reload is finished.

Best Practice > have more management packs separated by application


other criteria, such as one management pack containing the definitions of classes, relationships, and discovery rules


a separate management pack containing the monitors, rules, views, and so on,

than to have a very big management pack.

> Import only the management packs that are needed.

> Each agent is able to handle many instances, but the impact on performance can be severe if the management group is not able to calculate configuration.

                                    In general, expect an average of 50 to 100 discovered instances hosted by each agent, which results in about 50,000 to 100,000 discovered objects to be handled by a management group in a 1,000-agent environment.


> Consider the impact that type space size can have when you use Windows PowerShell scripts that connect to the Data Access Service to perform different actions, such as:

            > custom maintenance

            > custom monitoring

            > automatic overrides, and so on.

Usually, such scripts consume a large portion of the type space loaded into memory by the Data Access Service, and in some situations these scripts can load almost the entire type space, depending on what the script does.


Example > A rule might connect to the Data Access Service to get the list of all monitors and then, based on some criteria, take some action either on the monitors, on the objects to which they are tied, or on the alerts they have generated. In such a scenario, you might end up loading the monitor types, classes, or other parts of the type space into the memory of the associated MonitoringHost.exe instance that is running the Windows PowerShell script. This potentially causes high CPU usage and definitely causes high memory usage for that process.
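A hedged sketch of the difference, using the OperationsManager module cmdlets (the server name and criteria strings are hypothetical):

```powershell
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "scom-ms01"  # hypothetical management server

# Heavy: pulls every monitor definition in the type space into this process
$allMonitors = Get-SCOMMonitor

# Lighter: restrict the result set with selection criteria up front
$sqlClasses  = Get-SCOMClass -Name "Microsoft.SQLServer.*"
$sqlMonitors = Get-SCOMMonitor -Target $sqlClasses
```

Filtering with the cmdlets' own parameters keeps the script from materializing the whole type space inside its MonitoringHost.exe process.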


Authoring groups
Groups are

> singleton classes that are hosted by the All Management Servers Resource Pool.


i.e., the management of groups is split among the management servers of this resource pool. The members of a group are dynamically calculated by workflows called Group Calculation workflows.

Static groups (groups with explicit membership) are much better for performance than dynamic groups (groups containing dynamic membership calculation rules).


Note # Dynamic groups are much more resource intensive when processed.


The more groups you have, and, specifically, the more dynamic groups, the bigger the performance impact is on the management servers of the All Management Servers Resource Pool.

Best Practice > Avoid creating new dynamic groups; instead, rely on classes for targeting or on other scenarios where the desired functionality can be achieved using different methods.

When dynamic groups are needed, try to use the simplest dynamic membership rules possible.


Group calculation interval
One way to optimize an Operations Manager environment is to tune the group calculation interval.

Custom groups are typically used for:

      > scoping user role views and dashboards, and

      > filtering notifications or overrides.

Discovery rules for groups can impact the performance of the environment because their queries create multiple read operations against the Operations Manager database. Adding many dynamic groups with complex criteria to Operations Manager can negatively impact overall performance.
Note # Group calculations occur every 30 seconds by default.

     >  Change the group calculation interval through the registry of the management server, in the value GroupCalcPollingIntervalMilliseconds.
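On a management server this is a DWORD registry value; the commonly documented location is shown in the sketch below (verify the exact path for your version before changing it):

```powershell
# Raise the group calculation interval from the default 30 s to, e.g., 15 min (value in ms)
Set-ItemProperty `
  -Path "HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0" `
  -Name "GroupCalcPollingIntervalMilliseconds" `
  -Value 900000 -Type DWord
```

The change takes effect per management server, so apply it consistently across the servers of the All Management Servers Resource Pool.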


Sealed management packs
Sealing a management pack changes it from an .xml file to an .mp file, which is a binary representation of the management pack.

> Sealing digitally signs the file with the provider's key, so the user knows that it hasn't been modified since.
> To upgrade a sealed management pack, the same key must be used or the upgrade will fail.

Note # The sealed or the unsealed version of a management pack can be added to a management group, but never at the same time.

> Sealed management packs enforce version control when an updated version of the management pack is imported into a management group.

i.e., if the management pack is sealed, only a newer version of the same management pack can be imported, and only if the newer version successfully passes the backward compatibility check.

Note # For unsealed management packs, the new version is always imported regardless of its compatibility and regardless of its version.
> A management pack can reference another management pack only if the referenced management pack is sealed.

Basically, to share common parts that other management packs use, such as groups or modules, you must seal the management pack.


Summary of best practices
The list of the most important things to consider when working with management packs:
> Class properties should be chosen so that their values change as seldom as possible, ideally never.
> Don’t use Operations Manager for software inventory (System Center Configuration Manager is built to do that), and don’t collect too many properties.
> Monitors should change their state as seldom as possible. They should not be too sensitive, and the related issue that is described in the alert should be resolved in a more permanent manner.
> The type space should be kept as small as possible. Import or create only what is needed and delete what is not of use.
> Windows PowerShell scripts that connect to the Data Access Service should be kept to a minimum. At least try to develop them in a way that loads as few objects as possible by using selection criteria for the Operations Manager cmdlets.
> Don’t over-use maintenance mode. If there is no way around it, reduce database grooming settings for state change events data.
> Targets for workflows should be as specific as possible. Use seed classes with lightweight discovery rules for custom application monitoring.
> Tune existing workflows using overrides. Disable unneeded workflows, adjust thresholds, set higher run intervals.

> Prefer static groups over dynamic groups, or at least use lightweight criteria for your dynamic groups.
> Change the group calculation interval when there are many groups in the Operations Manager environment.
> Configure before you customize. Determine whether an existing workflow would be enough instead of creating a new one.
> Classes, groups, modules, and so on should be in a sealed management pack so that they are not unexpectedly modified and so that they can be referenced by content in other management packs.

Thanks for reading!!!