
Topic: Three Cheers for the Authoring Resource Kit Tools

  
  1. #1

    Three Cheers for the Authoring Resource Kit Tools

    Code:
    http://blogs.msdn.com/steverac/archive/2009/12/30/three-cheers-for-the-authoring-resource-kit-tools-part-i-the-workflow-analyzer.aspx

    Part I - The Workflow Analyzer



    Have you ever been faced with troubleshooting OpsMgr and needed a way to trace the flow of a particular component – maybe a rule or monitor – or a discovery – to see what it was actually doing and what information was being submitted? Have you gotten frustrated with the ETL logs and the native OpsMgr events? Join the crowd!
    I’m not saying that ETL or the OpsMgr event logs are bad – quite the contrary…there is a great deal of information in both – and in a troubleshooting scenario it’s often helpful to take logs or traces from a healthy agent and compare them to a broken agent – that’s troubleshooting 101. In addition, the ETL tracing and event logging abilities of OpsMgr have just gotten better as we’ve progressed from RTM to R2!
    But what if we need a deeper understanding of what’s happening ‘behind the scenes’? Enter the Authoring Resource Kit Tools – available here. The list of tools includes an MP Spell Checker, the cookdown analyzer, a workflow analyzer, a workflow simulator, a Visio add-in for MP visualization, an MP difference tool, an updated Best Practices Analyzer, etc.
    In this series of blog posts I’ll highlight three of the tools that are of particular interest to me – the workflow analyzer, the workflow simulator and the cookdown analyzer.
    Workflow Analyzer
    As mentioned above, ETL tracing is available in OpsMgr and can be used to solve/diagnose many issues – and the ETL traces can be quite detailed – which is also their challenge! Because these traces can be so busy and detailed it can take some time to get comfortable reading/interpreting them. Additionally challenging is the fact that different levels of tracing can be configured as well as different output formats. Wouldn’t it be cool if you could pick out the workflow of interest and focus tracing on that component only? The Workflow Analyzer does just that!
    Launching the Workflow Analyzer requires two inputs – the name of your RMS and the health service you want to analyze.

    Note that the analyzer can be run on the RMS itself – or it can be configured to analyze an agent workflow. If you run the trace tool on the RMS, start a new analysis session and choose the RMS health service, tracing starts immediately since the workflow of interest is running on the RMS. If you launch the Analyzer on the RMS and choose a remote health service, then all of the configuration will be made to start tracing on the remote health service, but to actually see the tracing output you will need to launch another instance of the Analyzer on the remote workstation and select ‘connect to an existing Workflow Analysis’ as shown.

    When a new analysis session is started and the RMS/target health service are selected, all of the workflows that the target health service knows about will be listed. Those that are running will be shown in green, those that are not running will be grey and those with a problem condition will be shown in red. The status column will give the state of each workflow. As you can imagine, showing an example of every conceivable workflow would be a big undertaking – but we will show a couple of samples to demonstrate the power of the analyzer.

    Discovery Workflow
    In the example above we select a discovery workflow. Right-clicking on the workflow gives two options – trace and analysis. The analysis option gives a detail screen with relevant information about the workflow and any configurations in place – such as overrides applied, the MP storing the workflow, etc.

    The Trace option begins a detailed trace of all actions taken by the chosen workflow. From the screenshot below note that before tracing begins an override has to be configured on the workflow to allow tracing to start. When tracing is selected on a workflow the override is introduced in a management pack called the WorkflowTraceOverrideMP (which only exists for the duration of the tracing) that can be seen in the Health Service State\Management Packs folder on the agent where tracing is taking place. If you catch the WorkflowTraceOverrideMP during tracing and open it you will see the simple XML introduced to apply the tracing override. I show a sample of the WorkflowTraceOverrideMP below. Note that if you open this MP directly the XML will likely not be formatted nicely. This is fairly easy to fix since the MP is so simple.
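    The actual XML inside the WorkflowTraceOverrideMP isn’t reproduced here, but a temporary override MP generally follows the shape sketched below. This is a minimal sketch only – the IDs are invented and the overridden property (Enabled) is used purely to illustrate the structure, not to show the specific property the Workflow Analyzer really overrides to switch tracing on.

    Code:
    <!-- Sketch of the override section only; the Manifest/References portion of the MP is omitted. -->
    <ManagementPack ContentReadable="true">
      <Monitoring>
        <Overrides>
          <!-- Illustrative IDs; "Enabled" stands in for whatever property the tool really overrides. -->
          <DiscoveryPropertyOverride ID="WorkflowTrace.Override.Example"
                                     Context="Custom!My.Custom.Class"
                                     Discovery="Custom!My.Custom.Discovery"
                                     Property="Enabled">
            <Value>true</Value>
          </DiscoveryPropertyOverride>
        </Overrides>
      </Monitoring>
    </ManagementPack>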

    The use of an override to initiate tracing is important to understand because before anything can be traced the override must make it down to the target health service. This override will come down as part of a standard configuration update. If your environment is experiencing delayed configuration updates, getting the override down to the target health service may also be delayed. Make sure you see an event 1210 indicating that the new configuration has become active.

    Once the event 1210 has been received, begin reproducing the activity you wish to trace. In the case of our discovery workflow I introduced an override on it temporarily so that it runs every minute – which makes capturing the workflow activity much quicker. The trace output from the workflow is below.
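    As a side note, if you expressed that kind of temporary interval override in management pack XML yourself, it might look roughly like the sketch below. The IDs are invented and the Parameter name depends on which data source module the discovery uses (and on what its author made overrideable), so treat this purely as an illustration.

    Code:
    <Overrides>
      <!-- Illustrative only: run the discovery every 60 seconds while tracing.
           The Parameter name (IntervalSeconds vs. Frequency) depends on the data source module. -->
      <DiscoveryConfigurationOverride ID="SpeedUp.Discovery.ForTracing"
                                      Context="Custom!My.Custom.Class"
                                      Discovery="Custom!My.Custom.Discovery"
                                      Parameter="IntervalSeconds">
        <Value>60</Value>
      </DiscoveryConfigurationOverride>
    </Overrides>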

    Note the detail received as the workflow runs – first we see a WMI query being executed along with the output from the query being sent as a dataitem. If we want to see a particular line of data in more detail, just double-click on it and the XML representation of the data will be displayed as shown.
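    To give a feel for what that looks like, a data item is just a small XML fragment along the lines of the sketch below – the type, timestamp and property values here are invented, and the actual type attribute depends on which probe module produced the data.

    Code:
    <!-- Invented sample; the real data item carries whatever the WMI query returned. -->
    <DataItem type="System.PropertyBagData"
              time="2009-12-30T10:15:00.0000000-05:00"
              sourceHealthServiceId="00000000-0000-0000-0000-000000000000">
      <Property Name="Manufacturer" VariantType="8">Contoso</Property>
      <Property Name="Version" VariantType="8">1.0</Property>
    </DataItem>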


    Event Workflow
    Another common example might be tracing a workflow that should match and act on an event. The trace output below is a simple example of the kind of tracing you would see from such a workflow. Note that as the trace proceeds and events are seen we specifically call out whether we consider each event a match or not.
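    For context, whether an incoming event counts as a match is decided by the expression defined on the rule or monitor. A minimal sketch of such an expression is below – the event number and source name are invented, not taken from the trace.

    Code:
    <Expression>
      <And>
        <!-- Invented example: match event 999 from a particular source. -->
        <Expression>
          <SimpleExpression>
            <ValueExpression>
              <XPathQuery Type="UnsignedInteger">EventDisplayNumber</XPathQuery>
            </ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression>
              <Value Type="UnsignedInteger">999</Value>
            </ValueExpression>
          </SimpleExpression>
        </Expression>
        <Expression>
          <SimpleExpression>
            <ValueExpression>
              <XPathQuery Type="String">PublisherName</XPathQuery>
            </ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression>
              <Value Type="String">MyEventSource</Value>
            </ValueExpression>
          </SimpleExpression>
        </Expression>
      </And>
    </Expression>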

    I’ll stop here – and I know this is a limited set of examples – but take some time to get familiar with this tool – it can really help you understand how things are working inside of a workflow.





  2. #2
    Code:
    http://blogs.msdn.com/steverac/archive/2010/01/02/three-cheers-for-the-authoring-resource-kit-tools-part-ii-the-workflow-simulator.aspx
    Part II – The Workflow Simulator



    You’ve just built out several custom rules/monitors/discoveries – it’s late, you think you are almost done – just a bit of testing to go and….they don’t work. You look them over again and don’t see anything wrong – wouldn’t it be really cool if there was a way to see the ‘internals’ that are happening when the workflow runs? Introducing – the workflow simulator which is part of the Authoring Resource Kit Tools available here.
    Remember in MOM 2005 days when we had the ability to configure script tracing to actually watch the execution of a script running on an agent? You could also cause the script to open in a debugger to walk through execution line by line. The only caveat is that you had to have the script running through the agent. The workflow simulator will do all of the stuff MOM 2005 did and more. Essentially, any workflow that you have configured can be executed in the simulator without any need to have the workflow actually deployed to the agent. One requirement – you do have to have the agent installed on the system where you will be using the simulator so that the required binaries are present.
    So where is this simulator and how is it used? Once you have the authoring console resource kit tools installed, the simulator is available in the authoring console itself. We will take a look at the simulator and walk through using it with three sample discoveries in a custom BackITupNOW! management pack that I created when authoring the targeting chapter in the upcoming OpsMgr R2 Unleashed ebook. We will start with a simple registry discovery and use that to also talk about the configuration of the simulator, then move on to a WMI discovery and finally a simple script-based discovery. For each example I will show a working discovery and then show the results when the same discovery doesn’t return data.
    Workflow Simulator
    We start by opening the BackITupNOW! management pack in the authoring console and then navigating to the discoveries node and our three sample discoveries as shown.

    OK, so where is the simulator? It’s a bit hidden, but if you simply right-click on any of the three discoveries you will note the Simulate option in the displayed menu. If an authoring console element doesn’t support simulation, the option will appear greyed out.

    Selecting ‘Simulate’ launches the simulator tied to the specific workflow from which the simulator was launched – as shown.

    I’ve expanded several sections of the simulator to show the various configuration options. Let’s walk through the specific sections. The first section displays the name of the workflow and its target – no configuration to be done here – the fields won’t allow editing.

    Next, the Target Expressions options. There are a couple of settings we can tap into here. First, note whether there is a green check mark or a yellow exclamation mark here. If the yellow exclamation mark is shown, it means some of the variables/values required by the workflow cannot be resolved; they either need to be configured manually or, if the workflow in question has been imported into your management group, you can choose to connect to the RMS and resolve the values.

    I resolved my expressions from my RMS. Doing so presents the dialog below allowing connection information to be specified.

    If no RMS is available to auto-resolve the variables then it’s easy enough to resolve them manually, either by typing in a value or allowing the simulator to auto-generate a GUID for fields that require one. Remember that this is a simulation – the results are accurate but the data doesn’t have to be accurate (such as with a GUID) – it just needs to be in the correct format and enough to give the required workflow values that will work.

    The next field is the override values. The options here will vary depending on the workflow, but for the simulation you might consider changing values such as frequency to allow the workflow to run more quickly, or setting a different timeout, etc.

    With all of the above configured you are almost ready to start the simulation. First, though, you need to decide whether to resolve any $MPElement/…$ expressions (I always leave this option selected) and whether to debug scripts. The debug script option only works when running a workflow that contains a script and will also only work if you have a script debugger registered. A good simple script debugger is the Microsoft Script Debugger – you can download it here and I will show it in action when we get to our script-based discovery example.
    With these options configured, start the simulation. Once you get the simulation started and the first elements of the simulation appear we have a few additional options we can configure. If you right-click on a module you will see additional options. I tend to choose to enable tracing for the whole workflow which will launch the workflow analyzer when the simulation is running so you can see even more detail (I discuss the workflow analyzer in part I of this series – available here). You will also note that XML output is available for review from each of the running modules. By reviewing the XML output of the simulator and the workflow analyzer together you can generally put together whether the workflow is running as expected or not and the reasons why.

    Registry Discovery – Good
    As mentioned earlier, I will show both a good and a bad simulation for each of my three discovery workflows. Let’s look at the good simulation for my registry discovery. First, let’s take a look at the configuration of the registry discovery. As shown below, we are looking for 4 registry values – Device, GroupName, InstallDate and InstallDirectory. We are specifically trying to find systems that have the following values for these entries:
    Device – Tape
    GroupName – Group
    InstallDate – 09
    InstallDirectory – c:\backITupNOW
    If the discovery doesn’t find these values it will not return a match.
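    To make the link between these settings and what the simulator evaluates a little more concrete, the expression behind checks like these might look roughly like the sketch below. Only two of the four values are shown, and the Values/... paths are an assumption about how the registry provider exposes the attributes.

    Code:
    <Expression>
      <And>
        <Expression>
          <SimpleExpression>
            <ValueExpression>
              <XPathQuery Type="String">Values/Device</XPathQuery>
            </ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression>
              <Value Type="String">Tape</Value>
            </ValueExpression>
          </SimpleExpression>
        </Expression>
        <Expression>
          <SimpleExpression>
            <ValueExpression>
              <XPathQuery Type="String">Values/InstallDirectory</XPathQuery>
            </ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression>
              <Value Type="String">c:\backITupNOW</Value>
            </ValueExpression>
          </SimpleExpression>
        </Expression>
        <!-- GroupName and InstallDate would be checked the same way. -->
      </And>
    </Expression>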



    Running this workflow through the simulator we get the output below. The first module, the scheduler, doesn’t tell us much except which HealthServiceId we are operating against and the time when the schedule fired – which could be useful if you are trying to diagnose a workflow that is not operating on time.

    The probe module shows the attempt to read the registry and the values it found. This module maps to the registry probe configuration settings configured on the discovery.

    The same XML data is seen for the filter module but here we are evaluating to ensure we have a match – this section maps to the expression settings configured on the discovery.

    Finally, the mapper module pulls it all together and takes the discovered data, which passed our filter, and submits it as discovery data. The screens below show the total XML and are modified a bit to get as much of the XML in the display as possible.
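    For reference, the discovery data handed off by the mapper is itself just XML. The fragment below is a loose sketch of its general shape – the GUID placeholders and the single property value are invented, not what the screenshots contained.

    Code:
    <DataItem type="System.DiscoveryData"
              time="2010-01-02T09:00:00.0000000-05:00"
              sourceHealthServiceId="00000000-0000-0000-0000-000000000000">
      <DiscoverySourceType>0</DiscoverySourceType>
      <DiscoverySourceObjectId>{guid-of-the-discovery}</DiscoverySourceObjectId>
      <DiscoverySourceManagedEntity>{guid-of-the-target-instance}</DiscoverySourceManagedEntity>
      <ClassInstances>
        <ClassInstance TypeId="{guid-of-the-discovered-class}">
          <Settings>
            <Setting>
              <Name>{guid-of-the-property}</Name>
              <Value>c:\backITupNOW</Value>
            </Setting>
          </Settings>
        </ClassInstance>
      </ClassInstances>
    </DataItem>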


    So the simulator has given us a great deal of good information. Now add the data from the workflow analyzer and the detail is even richer. I won’t add much since the data speaks for itself but note that for each registry value we can see what is determined to be a match.


    Registry Discovery – Bad
    So that was a working discovery – now let’s change just a single value in the registry to something other than what is expected and see the difference. Notice I just changed the install directory from c:\ to d:\

    Run the simulator again – notice the probe information is the same but the filter does not generate any data since we don’t have a match and there is no discovery data returned in the mapper.



    And from the workflow analyzer we can see the mismatch and all subsequent attempts to match stop.

    WMI Discovery – Good
    We’ve seen the registry discovery – what about a WMI discovery? Here is the detail of what happens in the simulator and workflow analyzer.
    The discovery will only match if the CountryCode value is equal to 1. This likely is not a value you would use in a real discovery but it allows an easy demonstration of a good vs. bad WMI discovery.
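    The post doesn’t show which WMI class the discovery queries, so purely as an assumed example the data source configuration of such a discovery might look something like the sketch below (Win32_OperatingSystem happens to expose a CountryCode property; the module type and element names are typical for a WMI-based discovery but should be treated as illustrative).

    Code:
    <DataSource ID="BackITupNOW.WMI.Discovery.DS"
                TypeID="Windows!Microsoft.Windows.WmiProviderWithClassSnapshotDataMapper">
      <!-- Assumed class and query; filtering on CountryCode in the WHERE clause means
           no instance comes back from the probe when there is no match. -->
      <NameSpace>root\cimv2</NameSpace>
      <Query>SELECT * FROM Win32_OperatingSystem WHERE CountryCode = '1'</Query>
      <Frequency>3600</Frequency>
      <ClassId>$MPElement[Name="BackITupNOW.SomeDiscoveredClass"]$</ClassId>
      <InstanceSettings />
    </DataSource>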

    In the simulator we only have three modules that return data – the first is our scheduler module.

    The probe module actually reads the WMI class and shows all of the associated data. Note that country code does equal 1.



    The mapper module shows the discovery data being submitted.

    The corresponding results in the workflow analyzer are not as detailed as for the registry discovery, but one key item we can see is that one data item is listed as returned as part of the workflow. That means discovery was successful and data was submitted.

    WMI Discovery – Bad
    Now that we’ve seen a sample of a good WMI discovery let’s run one that won’t return discovery data. To do that I simply change the country code to a value of 2 so I will get no match.

    I still have the same three modules displayed, but note that the probe returns no data and the mapper returns no class instance for my discovery.


    I also see in my analyzer that there are no data items returned, meaning the discovery was not successful. Actually, a word on that for a minute. In the registry example, in this example and in the next example, I refer to discoveries not being successful. That’s not really true – the discovery is always successful, meaning that it does run and look to see if the system matches the discovery criteria – if it does we return a data item and if it doesn’t we return nothing. So the discovery does, in fact, work – but just returns no data. Just wanted to clear up that potential confusion.

    Script Discovery – Good
    We’ve seen registry and WMI discoveries – now let’s look at a script discovery. Notice the yellow highlight in the script. When the script runs it will specifically look for a FOLDER called flagfile.txt. if it doesn’t find a folder by this name, the script simply exits.

    Running this through the simulator we see two modules – our familiar scheduler module and the script module. You can see in the XML for the script module that the script does run and does return data, meaning the discovery was successful.


    Looking in the analyzer we can get even more useful information – such as the command line used for the script, the XML blob containing discovery information that is submitted, etc.

    Script Discovery – Bad (with script debug enabled)
    To give an example of a discovery that doesn’t submit data, I simply delete the flagfile.txt folder and rerun the simulation. Note that this time I selected the option to debug the script. This is very useful if you are seeing problems with the script where you expect data to be returned but it isn’t, etc. In the simulator I see my scheduler module but there is no script module – since nothing is returned from the script no data is submitted.

    My script does attempt to run because my debugger pops up. I trace the script execution to the highlighted line and then the script simply exits. Why? Because there is no folder named flagfile.txt.

    Looking at the analyzer I see that a script error is encountered showing data loss but no message is displayed. This may be misleading since no error really occurred – my script simply exited because a condition wasn’t met.

    And there you have it – a brief walk-through of the workflow simulator. I find this to be an immensely useful tool. In the examples we used discoveries from a custom and unsealed management pack – but the simulator works just fine with workflows from sealed management packs too. Note also that there are some limitations to the simulator, so be sure to check out the help file documentation and review them.



