
Use PowerShell to Choose Unique Objects from a Sorted List


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to select unique objects and properties from a sorted list.

Microsoft Scripting Guy, Ed Wilson, is here. I have been busy working on the study guide for the 2012 Scripting Games. It will look similar to the 2011 Scripting Games Guide. If you are thinking about the 2012 Scripting Games, and you really should, you would do well to review the content I created for the 2011 Scripting Games Guide. Also keep the following in mind as you ramp up for this year's games:

The Scripting Games are the best opportunity to learn Windows PowerShell.

Using the Get-Unique cmdlet

Anyway, as part of my work for the 2012 Scripting Games, I have been looking over cmdlets, examining the Help files, and in general reviewing the Windows PowerShell fundamentals. It dawned on me that I have not written very much about the Get-Unique cmdlet.

It is important to keep in mind how the Get-Unique cmdlet works, or else it can return misleading results. For example, the following code that returns the number of unique processes running on a computer appears to work just fine.

Get-Process | Get-Unique | Measure-Object

Sort the list of objects

Unfortunately, there is no guarantee that the results will be accurate. This is because the Get-Unique cmdlet compares each item in a sorted list to the next item to eliminate duplicates. Without sorting the list, the results are not necessarily going to be accurate.
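A quick way to see the behavior, using a small array of strings (a minimal sketch; the comments show the results you should expect):

"b","a","b" | Get-Unique                 # unsorted: returns b, a, b because only adjacent duplicates collapse

"b","a","b" | Sort-Object | Get-Unique   # sorted: returns a, b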

There is another issue to consider: How do I determine what is unique when dealing with processes? Do I want unique process names, unique process IDs, unique memory consumption, unique executable paths, or something else? The System.Diagnostics.Process .NET Framework object contains 51 properties, any one of which could be used to define "unique." Without specifics, one cannot be certain what a request for unique objects will return.
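If you want to count those properties yourself, Get-Member can do it (a quick sketch; the exact count can vary slightly with the .NET Framework version):

(Get-Process | Get-Member -MemberType Property).Count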

An additional aspect of uniqueness to consider arises because of the object-oriented nature of Windows PowerShell. The Get-Unique cmdlet returns unique objects, or unique strings. Because everything in Windows PowerShell is an object, this can become an extremely short list. The following code returns only one unique object from a listing of processes on the local computer.

Get-Process | Get-Unique -OnType | Measure-Object

Specify the property upon which to sort

When examining a specific property of an object and treating the property as a string, the Get-Unique cmdlet determines the property upon which to operate based on the property that is specified to the Sort-Object cmdlet. The following code counts the number of unique processes, based on process name.

Get-Process | sort-object name | Get-Unique -asstring | Measure-Object

To identify unique processes based on the process ID, use the code that follows.

Get-Process | sort-object id | Get-Unique -asstring | Measure-Object

The commands to derive unique processes based on object type, process name, and process ID, along with the output associated with each command, appear in the image that follows.

Image of command output

A shortcut to uniqueness

In most cases, it is a unique property and not a unique object that is the object of a query. Because the Get-Unique cmdlet requires sorting objects prior to piping, it is possible to bypass use of the Get-Unique cmdlet by using the Unique switched parameter from the Sort-Object cmdlet. Here is an example of using the Unique parameter from the Sort-Object cmdlet.

get-service | Sort-Object status -Unique | measure

If you use the Select-Object cmdlet, it might be easier to use the Unique parameter from that cmdlet. The code to do this is shown here.

get-service | Select-object status -unique | measure

What about case sensitivity?

Nearly everything in Windows PowerShell defaults to case insensitive; therefore, it might come as a surprise that the Get-Unique cmdlet and the Select-Object cmdlet are case sensitive when determining uniqueness. The Sort-Object cmdlet, in contrast, is not case sensitive when choosing unique objects from the list.

In the code that follows, I create an array of strings that mixes uppercase and lowercase items. I then pipe the strings to the Sort-Object cmdlet and pipe the sorted results to the Get-Unique cmdlet. Next, I pipe the strings to the Sort-Object cmdlet and use its Unique parameter to obtain uniqueness. Finally, I pipe the strings to the Select-Object cmdlet, and I use the Unique parameter there also.

$a = "one","two", "Two", "three", "Three"

$a | sort-object | Get-Unique -AsString

$a | sort-object -Unique

$a | Select-Object -Unique
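Based on the case-sensitivity behavior described above, here are the counts you should expect from each approach (a sketch; the comments are mine):

($a | Sort-Object | Get-Unique -AsString).Count   # 5 - Get-Unique compares case sensitively

($a | Sort-Object -Unique).Count                  # 3 - Sort-Object ignores case

($a | Select-Object -Unique).Count                # 5 - Select-Object compares case sensitively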

The code to create the array of strings and select unique strings from the array, and the associated output are shown in the image that follows.

Image of command output

Well, that is about all there is to selecting unique objects from a list. There are three ways to do this: use the Get-Unique cmdlet, use the Unique parameter from the Sort-Object cmdlet, or use the Unique parameter from the Select-Object cmdlet.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Introduction to WSUS and PowerShell


Summary: Guest blogger, Boe Prox, shows how to use Windows PowerShell to install WSUS and configure clients for updates.

Microsoft Scripting Guy, Ed Wilson, is here. You are in for a treat this week. Boe Prox has written a week’s worth of goodies, and we will share them here.

Photo of Boe Prox

Boe Prox is currently a senior systems administrator with BAE Systems. He has been in the IT industry since 2003, and he has been working with Windows PowerShell since 2009. Boe looks to script whatever he can, whenever he can. He is also a moderator on the Hey, Scripting Guy! Forum. Check out his current projects published on CodePlex: PoshWSUS and PoshPAIG.
Boe’s blog: Learn PowerShell | Achieve More

Now, without further ado, here is Boe!

For those of you who are unfamiliar with Windows Server Update Services (WSUS), I am going to start with a brief description of what WSUS is and how it is used to manage patching in an environment. Then I will dive into installing the server by using Windows PowerShell and configuring clients, via Group Policy or the registry, to report to the WSUS server and to receive updates from it. Lastly, I will discuss how to use Windows PowerShell with the assemblies that ship with the WSUS Administrator Console to connect to the WSUS server.

What is WSUS?

Windows Server Update Services (WSUS) is used by system administrators to manage the distribution of updates and hotfixes that Microsoft releases for an environment. Currently, the most recent version is WSUS 3.0 with Service Pack 2, which is available to download. The installer offers a full server installation option or a console-only installation (which can be installed on any client or server), and it includes the assemblies required to use Windows PowerShell to manage the WSUS server. To determine the version of WSUS, refer to Appendix G: Detect the Version of WSUS on Microsoft TechNet.
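As an aside, one quick way to check the installed version locally is to read the setup registry key (a sketch based on the Appendix G guidance; the registry location assumes a default WSUS server installation):

(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Update Services\Server\Setup').VersionString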

Installing a WSUS Server

Before you begin the installation of WSUS, make sure you install the following on the selected server:

  • Internet Information Services (IIS)
  • At a minimum, .NET Framework 2.0

You could install WSUS 3.0 with SP2 on your server by using the file specified in the previous download link and running through the UI installation. But c’mon, this is a scripting blog, so surely we can script something out…right? Right!

Because there are several switches available with the executable file, we can easily script an unattended installation of the WSUS server along with specific configurations that meet our requirements. For more information about those switches, see Appendix A: Unattended Installations. The script I wrote, Install-WSUSServer.ps1, allows a local or remote installation of a WSUS server, or an installation of only the WSUS Administration Console.

Note: I chose to require a dependency on PSExec.exe to complete the remote installation requirements. There were a number of reasons why I chose to do this, but the main reason is that I wanted to make sure that if an issue occurs that cancels the installation, an error would be returned by the script to let you know something happened.

You can run this script against only one computer at a time because only one SUSDB database can be used for each SQL Server instance (unless you are setting up a WSUS server as a front-end server). You can specify whether to perform a console-only or a full server installation, and the script allows the use of the Windows Internal Database or a local or remote SQL Server instance as the data store. Lastly, if the required installation file (WSUS30-KB972455-x86.exe or WSUS30-KB972455-x64.exe) is not in the same directory as the script, you will be prompted to download the required file for the installation. I also wrote an Uninstall-WSUSServer.ps1 script that behaves exactly like the Install-WSUSServer.ps1 script.

So enough talking about this script; let's see it in action. First, let's install our WSUS server, storing the update content locally on the D drive. I have also made sure that the installation file is not already available, so the script prompts me to download it prior to the installation.

. .\Install-WSUSServer.ps1 -Computername DC1 -StoreUpdatesLocally -ContentDirectory "D:\WSUS" -InternalDatabasePath "D:\" -CreateDatabase -Verbose

The command and the associated output are shown in the following image.

Image of command output

The following image shows the newly created folders.

Image of folders

Now let’s fire up the console. To do this, I make an RDP connection to my DC1 server as shown in the following image.

Image of menu

Yep, it’s all there! OK, now let’s install the Administration Console on my laptop so I can make use of the assemblies for Windows PowerShell integration. Here is the command line that I use to install only the console.

. .\Install-WSUSServer.ps1 -ConsoleOnly -Verbose

The following image shows the command that I use to install the console, and the output from the Install-WSUSServer.ps1 script.

Image of command output

This was a little simpler because I already had the installation file that I needed to complete the Administration Console installation.

Configure clients to use WSUS

OK, we have our WSUS server installed and running, and we have our Administration Console installed on the laptop. The next step is to configure my laptop and the server to communicate with the WSUS application so we can determine which patches are required for each client. There are a couple of ways to accomplish this task. If the client is in an Active Directory environment, all you have to do is use Group Policy to make the configuration changes.

Note: For non-domain systems, you can make this change by using gpedit.msc.

For the sake of simplicity, I am just going to configure the GPO to point my client to the WSUS server (Specify intranet Microsoft update service location), set the update client to only download the approved updates (Configure Automatic Updates), and change the detection frequency to every four hours (Automatic Updates detection frequency). The changes are reflected in the following image.

Image of menu

Another way is to do some registry hacking to make the changes, which leads me to my next set of Windows PowerShell scripts that accomplish this task. The first script allows you to view the settings in the registry, and the other script allows you to change the registry settings to configure the client. You can find these scripts in the Script Center Repository.

For more information about the registry information that I used in these scripts, see Configure Automatic Updates in a Non–Active Directory Environment.
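For reference, those scripts read and write well-known Windows Update policy keys. Here is a minimal sketch of inspecting the same values directly (the registry paths come from the documentation linked above; the values exist only after the policy has been configured):

$wuKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'

Get-ItemProperty -Path $wuKey | Select-Object WUServer, WUStatusServer

Get-ItemProperty -Path "$wuKey\AU" | Select-Object AUOptions, DetectionFrequency, UseWUServer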

First, let’s see the settings that we will be changing to talk to our WSUS server on DC1. To do this, I use the Get-ClientWSUSSetting command as shown here.

Get-ClientWSUSSetting | Select WUServer,WUStatusServer,DetectionFrequency,AUOptions

The following image shows the output from this command.

Image of command output

Now let's change these to reflect the same GPO we configured for the domain. Here is the command that does that.

Set-ClientWSUSSetting -Options DownloadOnly -DetectionFrequency 4 -UpdateServer 'http://DC1' -Verbose

The image shown here illustrates the output from the command.

Image of command output

Now, I use the Get-ClientWSUSSetting command to verify that the changes worked properly. As shown in the following image, it worked just fine.

Image of command output

Perfect. Now our clients are talking to the WSUS server, and they will begin reporting what patches are needed to be compliant.

WSUS and Windows PowerShell

Now we are at the point where we can begin working with the assemblies by using Windows PowerShell, and make our initial connection to the WSUS server. We want to use the Microsoft.UpdateServices.Administration assembly. Here is the command that I use to load that assembly.

[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | Out-Null

Now it is time to make the initial WSUS connection by using the .NET accelerator [Microsoft.UpdateServices.Administration.AdminProxy] and the static method getUpdateServer(), which requires two values: the WSUS server name and a Boolean value for a secure connection. Because I am not using a secure connection for the WSUS server, we will set it for False. Here is the command.

$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::getUpdateServer('DC1',$False)

$wsus

The output from the command is shown in the following image.

Image of command output

Here you can see some information regarding the remote WSUS server, such as the port number that is being used and the version of the WSUS server.

That is all for today with WSUS. I have shown the beginnings of using Windows PowerShell to make the initial connection to the WSUS server. Tomorrow I will expand on this by showing how you can use Windows PowerShell to perform some basic administration for your WSUS server.

~Boe

Thanks Boe, that is a great introduction to WSUS. WSUS Week will continue tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Perform Basic Administrative Tasks on WSUS


Summary: Learn how to use Windows PowerShell to automate basic administrative tasks on a WSUS server.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have the second blog post by Boe Prox about WSUS and Windows PowerShell. See yesterday’s blog for the introduction to WSUS and to learn more about Boe.

Take it away, Boe…

In yesterday’s blog, I showed you how to install a WSUS server and configure your clients via Group Policy and/or registry modifications, and I briefly introduced you to the steps taken to make your first connection to a WSUS server by using Windows PowerShell and the assemblies that are available after you install the Administration Console.

Today, we start looking at some of the basic administration steps for managing your WSUS server. Some of the steps I plan to discuss work with clients and target groups.

To step back one day, here are the steps for making your initial connection to the server. Note that you must have the WSUS Administration Console installed to access the required assemblies. (I covered installing the WSUS Administration Console in yesterday’s blog).

[void][reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")

$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::getUpdateServer("dc1",$False)

$wsus

Image of command output

OK, we now have a connection to the WSUS server and we are ready to dive into some Windows PowerShell fun!

The first thing I do is see what methods are available to me on this object.

$wsus | Get-Member -Type Method

The output is shown in the following image.

Image of command output

Yes, that is quite a number of available methods. Depending on which methods you use, they will return objects that have still more methods of their own! Therefore, there are many possibilities for exploring your WSUS server and pulling information from it.

Finding WSUS clients

You can quickly query and identify one or all of the clients that are currently using the WSUS server with a couple of commands. Each query returns a Microsoft.UpdateServices.Internal.BaseApi.ComputerTarget object that you can use for other commands within the WSUS server, and it has its own set of methods that you can utilize. Before we dig into those, let us look at the possible commands that we can use to locate a client in WSUS.

GetComputerTargetByName()

$wsus.GetComputerTargetByName("dc1.rivendell.com")

As you can see in the following image, this command requires the full domain name of the system that you are going to query. Anything less than the full name will result in the command throwing an error (also shown in the following image).

Image of command output

SearchComputerTargets()

$wsus.SearchComputerTargets("DC1")

This command allows you to specify a shorter name to query the client on the WSUS server. It can also be used to perform a partial-name search for clients, such as in the following example.

$wsus.SearchComputerTargets("D")

Image of command output

As you can see, I can pull up all clients that start with the letter D.

GetComputerTarget()

This method is a little tougher to use in that you have to know the client ID, which you probably won't be able to locate unless you use one of the methods listed previously to query for a client. In the example that follows, I already located a client ID from a previous query.

$wsus.GetComputerTarget([guid]"db683962-0b62-4dd2-be0d-b535c25cf7f7")

Image of command output

GetComputerTargets()

This method has a couple of uses that are great for searching clients. The first way to use this method is to call it as is. This performs a complete dump of all of the clients in WSUS.

$wsus.GetComputerTargets()

Image of command output

The second way to use this method is to supply a computer scope object to use for the query, which is my favorite way to look for clients. This example will have to wait a couple more days, until I discuss working with the computer scope object for reporting.

So with that, we have covered a few different ways to query the WSUS server for clients, and now let’s look at a couple of useful methods within the computer object that can help us with our administration.

An important computer object gotcha!

There is one little thing that I have yet to mention about the computer object that you get from each of these queries. Even if your query returns only one object (without using Select-Object -First 1 or anything else along those lines), that object will actually be part of a collection (Microsoft.UpdateServices.Administration.ComputerTargetCollection) that you have to work with before you can use the object's methods.

$Client = $wsus.SearchComputerTargets("DC1")

$Client.GetType().ToString()

Image of command output

To use the methods, you need to step through each member of the collection, or, when only one member is returned, slice into the array as shown in the following example.

$client[0].GetType().ToString()

Image of command output

Now that is more like it! Now we can start playing with some of the fun methods of this object.
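When a search returns several clients, a simple loop handles the general case (a minimal sketch; FullDomainName and LastReportedStatusTime are standard properties of the computer target object):

foreach ($target in $wsus.SearchComputerTargets("DC")) {
    "{0} last reported {1}" -f $target.FullDomainName, $target.LastReportedStatusTime
}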

Getting a client’s target group membership

This method is useful for pulling all of the groups that a client is a member of:

GetComputerTargetGroups()

$Client = $wsus.SearchComputerTargets("DC1")

$client[0].GetComputerTargetGroups()

Image of command output

Removing a client from the WSUS server

This method is pretty self-explanatory:

Delete()

It will delete the current computer object that is calling this method. This is a great way to remove stale computers that are off the domain or have not checked in for an extended period for other reasons.

$client[0].Delete()
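Building on that, here is a hypothetical cleanup sketch that deletes every client that has not reported status in the last 90 days (the 90-day cutoff is an arbitrary example; run the query without Delete() first to verify what would be removed):

$cutoff = (Get-Date).AddDays(-90)
$wsus.GetComputerTargets() |
    Where-Object { $_.LastReportedStatusTime -lt $cutoff } |
    ForEach-Object { $_.Delete() }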

There are two more methods that are great to use; however, I am going to leave those hanging until later in the week because they require the Update Scope object, which I will go into more detail about then. The two methods that I am talking about are:

GetUpdateInstallationInfoPerUpdate()

GetUpdateInstallationSummary()

So stay tuned until later in the week and I will show you what you can do with these methods!

Target groups

Next on the list are target groups in WSUS and how we can use Windows PowerShell to manage them. Luckily, there are only two methods that we can use to query for the target groups. Each of these methods returns the following type of target group object: Microsoft.UpdateServices.Internal.BaseApi.ComputerTargetGroup

The first method to look at is:

GetComputerTargetGroups()

This method returns all the target groups that exist on the WSUS server. It is an all-or-nothing approach, and you can then filter that collection for the group you are looking for.

$wsus.GetComputerTargetGroups()

Image of command output

As you can see, I only have two groups at this time, and they happen to be the default groups you get with the WSUS installation. I used this method first so I can transition more easily into the next one, which also works great but has a small prerequisite: You have to know the ID of the group.

GetComputerTargetGroup()

As I mentioned, this method requires the ID of the WSUS target group. It is not the most useful method if you have no idea what that group ID is without first using the GetComputerTargetGroups() method.

$wsus.GetComputerTargetGroup([guid]"a0a08746-4dbe-4a37-9adf-9e7652c0b421")

Image of command output

Creating a new target group

We now have a couple of ways to get our target groups from WSUS, but it would be great if we could also create our own groups. This can be done very easily by using the following method from the WSUS server object that we created with our initial connection:

CreateComputerTargetGroup()

We have a couple of options with this method that we can use to create a target group. The first way is to create a group at the root of the target groups and the second allows you to create a child group underneath a parent. First, let’s create a root group:

$wsus.CreateComputerTargetGroup("TESTGROUP")

Image of command output

An object is immediately returned when you create the new group.

Now I will create a child group underneath this group. The first thing that I need to do is get the group object of the parent.

$group = $wsus.GetComputerTargetGroups() | Where {$_.Name -eq "TESTGROUP"}

$wsus.CreateComputerTargetGroup("CHILDGROUP",$group)

$wsus.GetComputerTargetGroups()

Image of command output

And just like that, I have my child group. OK, the console does not really show you that this is truly a child group, so here it is in GUI form.

Image of GUI

Removing a target group

Removing a target group is pretty simple. All you have to do is call the Delete method from the group object.

Delete()

$group = $wsus.GetComputerTargetGroups() | Where {$_.Name -eq "TESTGROUP"}

$group.Delete()

Adding a client to a target group

Adding a client to a group is also very easy; you just need the computer object for the client that you want to add to the group. The method that you can use is:

AddComputerTarget()

$Client = $wsus.SearchComputerTargets("DC1")

$group = $wsus.GetComputerTargetGroups() | Where {$_.Name -eq "TESTGROUP"}

$group.AddComputerTarget($client[0])

Remember the GetComputerTargetGroups() method I showed you earlier? Let's go ahead and run it now to see DC1's memberships.

$client[0].GetComputerTargetGroups()

Image of command output

Removing a client from a target group

Removing a client from a group works very much like adding a client to the group. The computer object is again required for the method that will remove the client. The method for the client removal is:

RemoveComputerTarget()

$Client = $wsus.SearchComputerTargets("DC1")

$group = $wsus.GetComputerTargetGroups() | Where {$_.Name -eq "TESTGROUP"}

$group.RemoveComputerTarget($client[0])

And let’s verify that DC1 is no longer in the TESTGROUP.

$client[0].GetComputerTargetGroups()

Image of command output

All gone! Because it has no other group memberships at this time, DC1 gets put back into the “Unassigned Computers” group.

These really are only a handful of the methods available for the client and target group objects, but they are some of the more important ones to know about when working with the WSUS server.

That wraps up today’s introduction to administering a WSUS server by using Windows PowerShell. Tomorrow I will continue talking about some basic administration techniques that you can use for your WSUS server by using Windows PowerShell.

~Boe

Boe, thank you for your blog. It rocks.

WSUS Week will continue tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Approve or Decline WSUS Updates by Using PowerShell


Summary: Guest blogger, Boe Prox, shows how to use Windows PowerShell to approve or to decline updates for WSUS.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome to the third day of Boe Prox as our guest blogger talking about using Windows PowerShell with WSUS.

Day 1: Introduction to WSUS and PowerShell

Day 2: Use PowerShell to Perform Basic Administrative Tasks on WSUS

Here’s Boe…

Continuing on from yesterday’s post where we looked at managing clients and groups, it is time to look at that one thing that really defines the WSUS server. I am talking about updates, of course! We will start by performing a basic query to locate updates and explore the update object to see what methods we have available, and then we will go into approving and declining updates.

Update queries

Using our existing connection from the previous day’s examples, let’s look at the possible methods related to locating updates on the WSUS server.

GetUpdates()

This method will give you back EVERY SINGLE UPDATE on your WSUS server. (This is assuming that you have completed a successful synchronization—if not, synchronize your WSUS server now.) This method is painfully slow to run and it is not a recommended method to gather information about updates.

$wsus.GetUpdates()

Image of command output

…Well, this is awkward. Oh wait, since I just installed this WSUS server, I have not performed a critical task that will allow me to start viewing updates. I need to synchronize this WSUS server with the Microsoft upstream server and get all of my update data.

WSUS synchronization

Let's take a detour on our little trip through working with updates to get this server synced up and ready to go. To get going, we need to look at the GetSubscription() method to see when the last synchronization occurred, and to use the returned object's methods to work with the synchronization. The object that is returned when you use this method is Microsoft.UpdateServices.Internal.BaseApi.Subscription.

$wsus.GetSubscription()

Image of command output

Take a look at LastSynchronizationTime, and you will see that this WSUS server has not been synced yet. Syncing this server is pretty simple. We will use this object's method, which is conveniently named StartSynchronization(). Let's go ahead and kick this off.

$subscription = $wsus.GetSubscription()

$subscription.StartSynchronization()

Nothing is returned when you kick this off, so we need a way to track this process and find out when it is finished. Enter the GetSynchronizationProgress() method, which will tell us exactly where we are with the sync process.

$subscription.GetSynchronizationProgress()

Image of command output

It is slowly getting there. I will check back shortly and see if it has completed yet.

Image of command output

There we go. Now we are ready to jump back into working with updates.
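As an aside, rather than checking back by hand, you can poll until the synchronization finishes. Here is a minimal sketch; it assumes the subscription object's GetSynchronizationStatus() method, which reports NotProcessing when no synchronization is running:

do {
    Start-Sleep -Seconds 30
    $subscription.GetSynchronizationProgress()
} while ($subscription.GetSynchronizationStatus() -ne 'NotProcessing')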

Let’s try this again…

$updates = $wsus.GetUpdates()

Image of command output

That’s more like it. Now I have over 5500 updates to work with.

As you can see, calling GetUpdates() will get every single update on the server. But, what if we only want to get a specific update or updates? Instead of getting all of the updates and using Where-Object to filter them, you can use one of the following methods to accomplish that goal. (By the way, there is one more thing that we can use with GetUpdates() to get specific updates, but I am leaving that piece out until later in the week.)

GetUpdate()

This method is quick for getting the update that you are looking for. The only issue is that it requires you to know the update ID, which is a GUID. Not exactly something a person will know off the top of one's head, but it is an option regardless.

$wsus.GetUpdate([guid]"fc08b450-6bdd-400e-9a1a-2f86e23ce462")

Image of command output

Now, on to the last way to get more specific updates—this time using the following method:

SearchUpdates()

This method is very flexible because it only requires a string for input. This string can be anything from the KB number for the patch to the actual patch name, or even something as simple as the type of system that the patch would be installed on (such as Exchange Server). Let's look at a few examples of performing some searches:

$SQL = $wsus.SearchUpdates('SQL')

$SQL.count

$SQL | Select Title

Image of command output

$update = $wsus.SearchUpdates('943485')

$update.count

$update | Select Title

Image of command output

$patches = $wsus.SearchUpdates('Windows 7')

$patches.count

$patches | Select Title

Image of command output

Now that we know how to find updates on the WSUS server, we can start to explore the update object (Microsoft.UpdateServices.Internal.BaseApi.Update).

There are quite a few methods available for the update object, so we will focus on three that are pretty useful for administering updates on the server.

Accepting license agreements

You might be asking why accepting license agreements would be one of the three things that I want to focus on. Well, this is not an issue…until it becomes an issue when you are trying to approve an update. There is a Boolean property called RequiresLicenseAgreementAcceptance that we can use in our filtering to locate any updates that require a license acceptance before you can approve them.

$license = $updates | Where {$_.RequiresLicenseAgreementAcceptance}

$license | Select Title

We can use the method AcceptLicenseAgreement() to accept the agreement for the update.

$license | ForEach {$_.AcceptLicenseAgreement()}

With that, all of the updates that had a requirement to accept a license agreement have been taken care of, allowing us to start approving some updates!

Approving updates

There are three ways that you can approve updates on the WSUS server by using PowerShell—one by using ApproveForOptionalInstall() and two by using Approve(). They require that we use the TargetGroup object to tell the server what group we will approve each update for. Yesterday, I talked about how to get a target group; now we can easily grab a group to use for each type of approval. The Approve method also requires that you pick an installation action (I will go into more detail when we get to that method).

The main difference between ApproveForOptionalInstall and Approve is that ApproveForOptionalInstall will approve the update for the target group, but the update will not actually install. However, it is made available to the user for installation via Add/Remove Programs. Approve will approve the update with or without a deadline (your choice—I will explain later how to do this).

Let’s start with the optional installation first…

$update = $wsus.SearchUpdates('Windows 7 for x64-based Systems (KB2639417)')

$group = $wsus.GetComputerTargetGroups() | where {$_.Name -eq 'TESTGROUP'}

$update[0].ApproveForOptionalInstall($Group)

Image of command output

Here I approved a Windows 7 update for my TESTGROUP, of which my laptop is currently a member. Notice that it returns another object called Microsoft.UpdateServices.Internal.BaseApi.UpdateApproval. This object tells you when the update was approved, the time that the update will "go live," and who approved it. It is important to note that the update object you get is a collection, even if you only get one item back. Therefore, I had to be sure to use the first index of the collection to use this method for approval.

By using the Approve() method, we have two ways in which to approve the updates. Both require that you have the target group object, but one also requires setting a deadline by using a [datetime] object.

Do you remember when I mentioned using an installation action for the approval? Well, now it is time to use one of those actions for this approval. But how do we know what installation actions we can use? We will find out what our options are by using Microsoft.UpdateServices.Administration.UpdateApprovalAction.

[Microsoft.UpdateServices.Administration.UpdateApprovalAction] | gm -Static -Type Property | Select -Expand Name

The installation actions that are returned are: All, Install, NotApproved, and Uninstall.

Now that we know what kinds of actions we can use, let’s make an approval without using a deadline.

$update = $wsus.SearchUpdates('Update for 2007 Microsoft Office System (KB932080)')

$group = $wsus.GetComputerTargetGroups() | where {$_.Name -eq 'TESTGROUP'}

$update[0].Approve("Install",$Group)

Image of command output

Again, the same object type is returned with the same data that we would expect.

We can set a deadline on our next update for approval. Basically, setting a deadline on a patch will force that update to be installed at that specific time. For instance, I will approve an update that will get installed on Dec. 15 at 11:00 PM.

$update = $wsus.SearchUpdates('932080')

$group = $wsus.GetComputerTargetGroups() | where {$_.Name -eq 'TESTGROUP'}

$update[0].Approve("Install",$Group,[datetime]"12/15/2011 11:00PM")

Image of command output

Notice that the Deadline is set to the time that I specified instead of being way out in the year 9999. Now the update will install on the clients in that group at the specified time without manual intervention.

You might be asking, “How were you able to determine what updates were required by your systems?” Well, besides jumping into the WSUS Administration Console to view those updates, there is another way that I can locate updates, but that will have to wait until later this week when I introduce another object and way to search for updates.

Declining an update

So we have approved updates, but we also need to know how to decline updates that are not needed by any system or that we deem unnecessary for various reasons. Luckily, declining an update is as simple as using the Decline() method.

$update = $wsus.SearchUpdates('932080')

$update[0].Decline()

Unlike the approvals, no object is returned when you decline an update. Regardless, this is a simple and effective way to decline updates that you do not need.
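Declining also combines nicely with SearchUpdates() for bulk operations. Here is a common example as a sketch ('Itanium' is just an illustration of a platform many shops never deploy; adjust the search string for your environment):

$wsus.SearchUpdates('Itanium') | ForEach-Object { $_.Decline() }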

That wraps it up for today’s blog focusing on update management by using Windows PowerShell. Although I really only scratched the surface of working with updates, I hope I was able to give you enough information to make the leap into patch management using Windows PowerShell. Tomorrow I will dive into working with the computer scope object and how you can use that object with some of the objects we have seen this week and some of their associated methods that only accept the computer scope object.

~Boe

Boe, thank you once again for a very useful and informative blog about using Windows PowerShell and WSUS. WSUS Week will continue tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Find Missing Updates on WSUS Client Computers


Summary: Learn how to use the computer target scope with Windows PowerShell to find WSUS client computers that are missing updates.

Microsoft Scripting Guy, Ed Wilson, is here. We are halfway through WSUS Week and some really good stuff from Boe Prox. You can see Boe’s biography in the Day 1 blog. In case you need to catch up with the series on Windows Server Update Services (WSUS), the following blogs will get you up to speed:

Day 1: Introduction to WSUS and PowerShell

Day 2: Use PowerShell to Perform Basic Administrative Tasks on WSUS

Day 3: Approve or Decline WSUS Updates by Using PowerShell

Now, here is Boe…

Over the past few days, I have been showing you some basic administration examples of what you can do by using Windows PowerShell to manage a WSUS server. During the course of some of these examples, I alluded to some methods that required a Computer Target Scope object. Well, today is the day that I get to show you how you can build a Computer Target Scope object and use that with some of the existing objects we have already created.

“What is the Computer Target Scope?” you ask? The Computer Target Scope is, as the name implies, a scope that can be used to filter a list of clients. For more information about this class, see ComputerTargetScope Class on MSDN.

Creating the Computer Target Scope object

The first thing we need to do is create the scope object, which we can then use in some of the methods we have seen. Doing this requires creating a new object from the Microsoft.UpdateServices.Administration.ComputerTargetScope class.

$computerscope = New-Object Microsoft.UpdateServices.Administration.ComputerTargetScope

Simple enough to do, and we can choose to leave the default properties that are already set for this object if we want to.

Image of command output

So as a default object, this will reference all clients on the WSUS server. Fortunately, you can edit most of these properties to narrow down the clients that you want to use. Here is a list of the editable properties (a short example of setting a few of them follows the list):

  • ExcludedInstallationStates: Gets or sets the installation states to exclude.
  • FromLastReportedStatusTime: Gets or sets the earliest reported status time.
  • FromLastSyncTime: Gets or sets the earliest last synchronization time to search for.
  • IncludedInstallationStates: Gets or sets the update installation states to search for.
  • IncludeDownstreamComputerTargets: Gets or sets whether or not clients of a downstream server, not clients of this server, should be included.
  • IncludeSubgroups: Gets or sets whether the ComputerTargetGroups property should include descendant groups.
  • NameIncludes: Gets or sets a name to search for.
  • OSFamily: Gets or sets the operating system family for which to search.
  • ToLastReportedStatusTime: Gets or sets the latest last reported status time to search for.
  • ToLastSyncTime: Gets or sets the latest last synchronization time to search for.
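For example, combining a couple of these properties narrows a query considerably. The following sketch (the name filter and the 30-day window are arbitrary examples) returns only clients whose names contain DC and that have reported status in the last 30 days:

$computerscope = New-Object Microsoft.UpdateServices.Administration.ComputerTargetScope
$computerscope.NameIncludes = 'DC'
$computerscope.FromLastReportedStatusTime = (Get-Date).AddDays(-30)
$wsus.GetComputerTargets($computerscope)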

Working with the Computer Target Scope

Now, if you remember from the previous posts, there are some methods from the Update object (Microsoft.UpdateServices.Internal.BaseApi.Update) and from the WSUS object (Microsoft.UpdateServices.Internal.BaseApi.UpdateServer) that require a computer scope object to work.

Let’s take a look at some of these methods from the respective objects and see what we are able to pull.

[void][reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")

#Connect to the WSUS Server and create the wsus object

$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::getUpdateServer('dc1',$False)

#Create a computer scope object

$computerscope = New-Object Microsoft.UpdateServices.Administration.ComputerTargetScope

#Find all clients using the computer target scope

$wsus.GetComputerTargets($computerscope)

Image of command output

By leaving the default properties, we will pull every client from the WSUS server.

Another method that we can use, which provides a report on the computers that match the scope filter, is GetComputerStatus(). This method requires that you have a computer scope object and that you supply a Microsoft.UpdateServices.Administration.UpdateSources value, which consists of All, Microsoft Update, or Other. Let's give it a look.

$Wsus.GetComputerStatus($computerscope,[Microsoft.UpdateServices.Administration.UpdateSources]::All)

Image of command output

You might be wondering why we are seeing mostly 0s here. If you look closely, you can see that only the areas that mention computer targets are populated. This is by design, because we asked only for the status of the computers that match the scope filter.

Now, looking at the update object, I can see a couple of methods that require the computer scope.

GetSummary()

This will give us a summary of whether the clients that are specified in the scope require the update.

$updates = $wsus.SearchUpdates('Update for Windows Server 2003 (KB938759)')

$update = $updates[0]

$update.GetSummary($computerscope)

 

UnknownCount                : 0
NotApplicableCount          : 2
NotInstalledCount           : 0
DownloadedCount             : 1
InstalledCount              : 0
InstalledPendingRebootCount : 0
FailedCount                 : 0
IsSummedAcrossAllUpdates    : False
UpdateId                    : c2cdd066-7a03-4e7f-976c-139b5de943ed
ComputerTargetGroupId       : 00000000-0000-0000-0000-000000000000
ComputerTargetId            :
LastUpdated                 : 12/10/2011 7:08:29 AM

By looking at the data provided, I can see that only 1 client out of the 3 requires this update, and that it has already been downloaded to that client. This is a nice way to report a specific update's status for clients. But the problem is…which client requires the update? For that, I will show you how to use the GetUpdateInstallationInfoPerComputerTarget() method on the Update object.

$update.GetUpdateInstallationInfoPerComputerTarget($ComputerScope)

Image of command output

Although it does tell you which client ID requires the update, it’s not what I would call “humanly readable” by any means. But this is Windows PowerShell, after all, and we can certainly make this better for us to read.

$update.GetUpdateInstallationInfoPerComputerTarget($ComputerScope) |
    Select @{L='Client';E={$wsus.GetComputerTarget(([guid]$_.ComputerTargetId)).FullDomainName}},
        @{L='TargetGroup';E={$wsus.GetComputerTargetGroup(([guid]$_.UpdateApprovalTargetGroupId)).Name}},
        @{L='Update';E={$wsus.GetUpdate(([guid]$_.UpdateId)).Title}},
        UpdateInstallationState,UpdateApprovalAction

Image of command output

Much better! Now we have something that not only tells us which client requires the update, but actually gives us the name of the client rather than a GUID to puzzle over. I also translated the Target Group ID and the Update ID so that everything is much easier to read.

Well, that wraps it up for today. Tomorrow I will jump into working with the Update Scope object and performing queries that use that object.

~Boe

Boe, thank you for another great blog about using the WSUS object model. WSUS Week will continue tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Get Windows Update Status Information by Using PowerShell


Summary: Learn how to use the WSUS Update Scope with Windows PowerShell to get update status information for client computers.

Microsoft Scripting Guy, Ed Wilson, is here. What a week. Boe Prox has certainly been sharing quite a bit of Windows PowerShell goodness. In case you have missed them, here are links to the blog series thus far. You can also see Boe’s biography in the Day 1 blog.

Day 1: Introduction to WSUS and PowerShell

Day 2: Use PowerShell to Perform Basic Administrative Tasks on WSUS

Day 3: Approve or Decline WSUS Updates by Using PowerShell

Day 4: Use PowerShell to Find Missing Updates on WSUS Client Computers

Today we are moving from working with the Computer Target Scope and generating some cool reports to working with the Update Scope on the WSUS server. Much like yesterday, I will dive into some of the objects that we have worked with throughout the week and make use of some of the methods that require the Update Scope object to work properly.

If you are asking, “What is the Update Scope?”

…As the name implies, this is a scope that can be used to filter a list of updates on the WSUS server.

Creating the Update Scope object

Much as we did with the Computer Target Scope object, we first need to create the Update Scope object so we can then look at its properties and make adjustments if needed. To create this object, we use the Microsoft.UpdateServices.Administration.UpdateScope class. For more information about this class, see UpdateScope Class on MSDN. The code that follows creates an instance of the UpdateScope.

$updatescope = New-Object Microsoft.UpdateServices.Administration.UpdateScope

OK, let’s take a look at these properties. The image that follows displays this information.

Image of command output

Here is a list of the editable properties for the Update Scope object:

  • ApprovedStates: Gets or sets the approval states to search for. An update will be included only if it matches at least one of the specified states. This value may be a combination of any number of values from ApprovedStates. Defaults to Any.
  • ExcludedInstallationStates: Gets or sets the installation states to exclude. An update will be included only if it does not have any computers in any of the specified states. This value may be a combination of any number of values from UpdateInstallationStates. Defaults to 0.
  • ExcludeOptionalUpdates: Gets or sets whether to exclude optional updates from the list.
  • FromArrivalDate: Gets or sets the minimum arrival date to search for. An update will be included only if its arrival date is greater than or equal to this value.
  • FromCreationDate: Gets or sets the minimum creation date to search for. An update will be included only if its creation date is greater than or equal to this value.
  • IncludedInstallationStates: Gets or sets the installation states to search for. An update will be included only if it has at least one computer in one of the specified states. This value may be a combination of any number of values from UpdateInstallationStates.
  • IsWsusInfrastructureUpdate: Gets or sets whether or not to filter for WSUS infrastructure updates. If set to true, only WSUS infrastructure updates will be included. If set to false, all updates are included. Defaults to false.
  • TextIncludes: Gets or sets the string to search for. An update will be included only if its Title, Description, Knowledge Base articles, or security bulletins contain this string.
  • TextNotIncludes: Gets or sets the string to exclude. An update will not be included if its Title, Description, Knowledge Base articles, or security bulletins contain this string.
  • ToArrivalDate: Gets or sets the maximum arrival date to search for. An update will be included only if its arrival date is less than or equal to this value.
  • ToCreationDate: Gets or sets the maximum creation date to search for. An update will be included only if its creation date is less than or equal to this value.
  • UpdateApprovalActions: Gets or sets the update approval actions to search for. An update will be included only if it is approved to at least one computer target group for one of the specified approval actions. This value may be a combination of any number of values from UpdateApprovalActions. Defaults to All.
  • UpdateApprovalScope: Gets or sets the UpdateApprovalScope object that can be used to filter updates based on their approval properties.
  • UpdateSources: Gets or sets the update sources to search for. An update will be included only if its update source is included in this value. This value may be a combination of any number of values from UpdateSources.
  • UpdateTypes: Gets or sets the update types to search for. An update will be included only if its update type is included in this value.

Yes, that is a fairly large list of properties that we can modify to suit our filtering needs. For the sake of simplicity, I am only going to update ApprovedStates, IncludedInstallationStates, and FromArrivalDate to show how you can see all of the updates that are needed by all of the clients since the last patch Tuesday. The code that follows updates these three properties.

$updatescope.ApprovedStates = [Microsoft.UpdateServices.Administration.ApprovedStates]::NotApproved

$updatescope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::NotInstalled

$updatescope.FromArrivalDate = [datetime]"12/13/2011"

Note: For the purposes of this blog, the last patch Tuesday was December 13. I set the ApprovedStates property to NotApproved because I only want to see updates that are required by the clients that are not already approved for installation. I set the IncludedInstallationStates to NotInstalled. This will tell the filter to look for only updates that are required by the clients, but have not been installed yet. This works fine for us because these updates are new to us and because we only want those for the most recent patch Tuesday.

First, I am going to show you the Administration Console and look at all updates that are required by the clients on the network since patch Tuesday. This is shown in the image that follows.

Image of menu

Note the number of updates (30) and some of the titles listed here. Although not all of the updates are listed in the image, I will show you a couple of methods that we can use to pull the exact same information by using Windows PowerShell!

GetUpdateCount()

This method, as you can probably tell, will allow us to view the number of updates that are returned by using the previously configured Update Scope.

$wsus.GetUpdateCount($updatescope)

Image of computer output

Look at that: 30 updates, just as the console showed.

I am going to expand on this slightly by using another method similar to one I showed you yesterday for the Computer Target Scope.

GetUpdateStatus()

This will return a little more information about the updates, as you will see. Besides supplying the Update Scope object, I need to supply a Boolean value that indicates whether to include downstream computers.

$wsus.GetUpdateStatus($updatescope,$False)

Image of computer output

That is pretty cool, but we want to see those updates as well. Sure enough, we can use the GetUpdates() method and supply our Update Scope object to pull this information.

$wsus.GetUpdates($updatescope) | Select Title

Image of computer output

And there you have it! You can compare the screenshots and you will see that the titles in the console match the titles from our query.

Now I will show you how to view the update approvals by using the Update Scope object and the GetUpdateApprovals() method. To make this work and to provide some useful information, I will make a couple of adjustments to the existing Update Scope. This revision is shown here.

$updatescope.FromArrivalDate = [datetime]"10/01/2011"

$updatescope.ApprovedStates = [Microsoft.UpdateServices.Administration.ApprovedStates]::LatestRevisionApproved

Now we can use this scope in the method and pull some information. To do this, I use the GetUpdateApprovals method, and pass it the Update Scope. This technique is shown in the code that follows.

$wsus.GetUpdateApprovals($updatescope)

The result from the previous command is shown in the following image.

Image of computer output

Let us clean this up a little so it is more readable and not just a bunch of GUIDs. The code that follows produces an easier-to-read output.

$wsus.GetUpdateApprovals($updatescope) |
    Select @{L='ComputerTargetGroup';E={$_.GetComputerTargetGroup().Name}},
        @{L='UpdateTitle';E={($wsus.GetUpdate([guid]$_.UpdateId.UpdateId.Guid)).Title}},GoLiveTime,AdministratorName,Deadline

The result of the previous code is shown here.

Image of computer output

The last two things I will show you in this blog are a couple of methods that require the Update Scope object and the Computer Target Scope object to work properly. They provide some nice reporting features similar to what you would see in the Administration Console.

GetSummariesPerComputerTarget()

This method gets per-computer summaries for each of the specified computers, summed across all of the specified updates.

Let’s see this in action, where I use the following code to gather this information.

$computerscope = New-Object Microsoft.UpdateServices.Administration.ComputerTargetScope

$updatescope = New-Object Microsoft.UpdateServices.Administration.UpdateScope

$wsus.GetSummariesPerComputerTarget($updatescope,$computerscope) |
    Format-Table @{L='ComputerTarget';E={($wsus.GetComputerTarget([guid]$_.ComputerTargetId)).FullDomainName}},
        @{L='NeededCount';E={($_.DownloadedCount + $_.NotInstalledCount)}},DownloadedCount,NotApplicableCount,NotInstalledCount,InstalledCount,FailedCount

The output from the previous code produces a nice table, as shown here.

Image of computer output

Let’s see how this compares to what you would see in the console. One thing you do not have available in the console is the DownloadedCount, which our previous code nicely displayed.

Image of menu

Lastly, let’s look at the GetSummariesPerUpdate() method to get per-update summaries for each of the specified updates, summed across all of the specified computers. The following code uses this method.

$updatescope.ApprovedStates = [Microsoft.UpdateServices.Administration.ApprovedStates]::NotApproved

$updatescope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::NotInstalled

$updatescope.FromArrivalDate = [datetime]"12/13/2011"   

$wsus.GetSummariesPerUpdate($updatescope,$computerscope) |
    Format-Table @{L='UpdateTitle';E={($wsus.GetUpdate([guid]$_.UpdateId)).Title}},
        @{L='NeededCount';E={($_.DownloadedCount + $_.NotInstalledCount)}},DownloadedCount,NotApplicableCount,NotInstalledCount,InstalledCount,FailedCount

Note: I changed the Update Scope to what I had at the beginning of this blog. This is so I can more accurately compare the output here with what is in the Administration Console.

The output from the previous code is shown here.

Image of computer output

Now compare the previous image with the console (shown here), and you will see that they match up as planned.

Image of menu

So there you have it. We started the week with a WSUS server installation, moved on to some basic administration, and finished by producing some reports with Windows PowerShell, and we haven't even begun to scratch the surface of what we can do. But working with WSUS doesn't end here! This weekend I will talk about version 2.0 of my WSUS module, PoshWSUS, which you can use to more easily manage and maintain a WSUS server.

~Boe

Boe, this is great stuff. Thank you for sharing with us. WSUS Week will continue tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Introduction to PoshWSUS, a Free PowerShell Module to Manage WSUS


Summary: Guest blogger, Boe Prox, discusses the development of the free Windows PowerShell module that he wrote to manage WSUS.

Microsoft Scripting Guy, Ed Wilson, is here. Today we are celebrating our 2000th Hey, Scripting Guy! blog. I think that in the spirit of community, it is appropriate that today we also have a guest blogger, Boe Prox, who is talking about a project that he developed and shared on CodePlex.

I will let him speak for himself—take it away Boe…

After a week of working with Windows PowerShell and WSUS and showing you some pretty cool things that you can do with the two, I am going to talk about a project that I have been working on that will make administering a WSUS server easier with Windows PowerShell. 

Here are the blogs from the past week if you would like to catch up:

Day 1: Introduction to WSUS and PowerShell

Day 2: Use PowerShell to Perform Basic Administrative Tasks on WSUS

Day 3: Approve or Decline WSUS Updates by Using PowerShell

Day 4: Use PowerShell to Find Missing Updates on WSUS Client Computers

Day 5: Get Windows Update Status Information by Using PowerShell

My plan today is to give an overview of what PoshWSUS is and why I built it the way I did, plus show a couple of small examples. Tomorrow I will show you more examples of using PoshWSUS to accomplish some of the same tasks that I have been showing earlier in the week in addition to a few new things.

What is PoshWSUS?

PoshWSUS is a module that I designed and built to help a system administrator manage a WSUS server easily by using Windows PowerShell. To download the module, see PoshWSUS in CodePlex.

Currently, there are 60 commands available in version 2.0, which was released on December 18, 2011. Although the majority are query-type commands, others allow you to approve or decline updates, create groups, add clients to groups, and work with synchronizations (to name a few). Here is a full list of the available commands:

Image of command output

PoshWSUS originally started as one module file (.psm1) that contained approximately 45 commands. This file was over 2000 lines long, which made it extremely difficult to work with, especially when it came time to troubleshoot one of the commands. 

When it came time to work on version 2.0 of PoshWSUS, I decided to separate each of the commands into its own file to make the module much easier to work with. The module file then simply goes through each of those files and dot-sources the functions.
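Here is a minimal sketch of such a loader, assuming the function files live in a Scripts subfolder of the module directory (the folder name is an assumption for illustration):

# PoshWSUS.psm1 - dot-source every function file in the Scripts subfolder
foreach ($file in Get-ChildItem -Path (Join-Path $PSScriptRoot 'Scripts') -Filter '*.ps1')
{
    # The dot operator runs each file in the module scope, defining its function
    . $file.FullName
}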

Why PoshWSUS?

I work with WSUS almost daily, and I wanted a way to quickly perform a query or a quick update approval against WSUS without having to jump onto the Administration Console and click, click, click my way to victory. Another motivation for writing this module was that there was no module for WSUS at all. Sure, there are various scripts available that perform various queries, approvals, and so on—but never a module that really tied everything together and covered a wide range of tasks in WSUS. Thus, PoshWSUS comes into play to fill that gap and allow people to manage their WSUS server from the command line.

So with that, I will talk a little about some of the new things that my latest version brings into play.

Working with custom types

One of the things that I was determined to change was the way that the output from any command would be displayed in the console. By default, if you run a command that would list a client or update, you would be crushed by a sea of data for each client and update that was returned unless you used Select-Object or Format-Table and listed only the properties that you wanted to see. Obviously, I couldn’t add this into my commands because I would lose the object type and also limit the amount of data that was available to a user.

Here is an example of such output from an update query:

Image of command output

Pretty wild, isn’t it? Well, thanks to custom-type formatting, I can dictate what is displayed for each object while still preserving the object type. Now, I won’t go into detail about how this works because that would be a blog (or two) in itself. But as you can see in the following example, it greatly simplifies the display of the output while still retaining the object’s type.

Image of command output

So it looks better AND retains the object type.
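For the curious, here is a minimal sketch of what one of those formatting entries looks like. The view name, the columns, and the file name are illustrative assumptions (the type name follows the Microsoft.UpdateServices.Internal.BaseApi naming that appears later in this post); a real module ships a complete .ps1xml file.

<Configuration>
  <ViewDefinitions>
    <View>
      <Name>WSUSUpdateTable</Name>
      <ViewSelectedBy>
        <TypeName>Microsoft.UpdateServices.Internal.BaseApi.Update</TypeName>
      </ViewSelectedBy>
      <TableControl>
        <TableHeaders>
          <TableColumnHeader><Label>Title</Label></TableColumnHeader>
          <TableColumnHeader><Label>IsApproved</Label></TableColumnHeader>
        </TableHeaders>
        <TableRowEntries>
          <TableRowEntry>
            <TableColumnItems>
              <TableColumnItem><PropertyName>Title</PropertyName></TableColumnItem>
              <TableColumnItem><PropertyName>IsApproved</PropertyName></TableColumnItem>
            </TableColumnItems>
          </TableRowEntry>
        </TableRowEntries>
      </TableControl>
    </View>
  </ViewDefinitions>
</Configuration>

The module then loads the file at import time:

Update-FormatData -PrependPath .\PoshWSUS.Format.ps1xml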

Even if you use Format-List, it will not show every property unless you specify them all by using the * wildcard. This is shown here.

Image of command output

Adding properties to an existing type

While most of the data that is returned by a query is useful, sometimes you get some properties that aren’t as helpful in their current state. For instance, let’s look at the following output:

Image of command output

The UpdateId is not exactly something that we can use because we have no idea which update it refers to. By adding some extra properties to this type (Microsoft.UpdateServices.Internal.BaseApi.UpdateSummary), we can translate that ID into the following:


Image of command output

As you can tell, there are a couple of properties that are null, and this is by design for this object. How the query is performed determines which of the UpdateTitle, ComputerGroup, and Computer properties will be populated. This technique is used with several types to provide clearer data from various queries.
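Here is a minimal sketch of the kind of extended type data this involves. The type name comes from the preceding paragraph; the property name, the file name, and the assumption that the module keeps its server connection in a $wsus variable (as the earlier posts in this series do) are illustrative.

<Types>
  <Type>
    <Name>Microsoft.UpdateServices.Internal.BaseApi.UpdateSummary</Name>
    <Members>
      <ScriptProperty>
        <Name>UpdateTitle</Name>
        <GetScriptBlock>($wsus.GetUpdate([guid]$this.UpdateId)).Title</GetScriptBlock>
      </ScriptProperty>
    </Members>
  </Type>
</Types>

The file loads the same way the format data does:

Update-TypeData -PrependPath .\PoshWSUS.Types.ps1xml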

Most of what I learned about working with type formatting came from Jeffery Hicks and my time at the PowerShell Deep Dive, where he presented on the subject. Here is some information about Jeff:
Blog: The Lonely Administrator
Twitter: https://twitter.com/JeffHicks
YouTube video of Deep Dive session: Mastering Format and Type Extensions

Removing plural nouns

Another thing that I changed was the names of some of the commands in the module. Originally, I had several commands that were named with a plural noun alongside a singular version, such as Get-WSUSClient and Get-WSUSClients. It is poor practice in Windows PowerShell to use a plural noun in a command name, so I made sure to merge those commands into one that uses a singular noun.
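As a minimal sketch of that merge, a single command with an optional parameter can cover both the singular and plural cases. The SearchComputerTargets and GetComputerTargets methods come from the WSUS API used earlier in this series; the parameter name is illustrative.

Function Get-WSUSClient
{
    Param ([string]$Computer)
    # With a computer name, return matching clients; without one, return them all
    If ($Computer) { $wsus.SearchComputerTargets($Computer) }
    Else { $wsus.GetComputerTargets() }
}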

Providing feedback

Feedback from the community drives any project, and this module is no different. I encourage anyone who uses this module to let me know what they like and don’t like about it. Reports of any bugs you find, or of anything that you feel would make a good feature, are always helpful to the growth of PoshWSUS. You can log issues and ideas in the CodePlex Issue Tracker.

That pretty much sums up my module, PoshWSUS, and gives you a little more information about its beginnings and some of the new things I did in version 2.0. Tomorrow I will go into how to use PoshWSUS, explore some of the commands, show you how to achieve some of the same results as the commands we have been using during the past week, and show you a few new things. See you tomorrow!

~Boe

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use the Free PoshWSUS PowerShell Module for WSUS Administrative Work


Summary: Learn how to use the free PoshWSUS Windows PowerShell module to administer your WSUS server.

Microsoft Scripting Guy, Ed Wilson, is here. We wrap up the weekend and the week with guest blogger, Boe Prox. In case you missed them, here are links for Boe’s blogs:

Day 1: Introduction to WSUS and PowerShell

Day 2: Use PowerShell to Perform Basic Administrative Tasks on WSUS

Day 3: Approve or Decline WSUS Updates by Using PowerShell

Day 4: Use PowerShell to Find Missing Updates on WSUS Client Computers

Day 5: Get Windows Update Status Information by Using PowerShell

Day 6: Introduction to PoshWSUS, a Free PowerShell Module to Manage WSUS

Take it away, Boe…

We are at the final stopping point in our week of WSUS, and this final blog is all about WSUS administration by using my PoshWSUS module. During the past week, we created a WSUS server and managed it by using Windows PowerShell and the WSUS assemblies; now we are going to accomplish the same tasks with fewer commands by using the PoshWSUS module. I will now show some examples that use the module to perform some of the exact commands that were run earlier in the week.

Initial use of PoshWSUS

To download the module, see PoshWSUS in CodePlex. Unzip the files to your Modules directory—in my case for Windows 7, it is C:\Users\Boe\Documents\WindowsPowerShell\Modules. I saved the modules to a folder named PoshWSUS. This location is shown here.

Image of menu
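If you are not sure which folders Windows PowerShell searches for modules, you can inspect the PSModulePath environment variable, as shown here.

$env:PSModulePath -split ';'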

Load the module by using the Import-Module cmdlet. This command is shown here.

Import-Module PoshWSUS

The following image shows me using the Get-Module cmdlet to view the available modules and using the Import-Module cmdlet to import the module.

Image of command output

Make your initial connection to the WSUS server. To do this, use the Connect-WSUSServer cmdlet as shown here.

Connect-WSUSServer -WsusServer DC1

The following image shows the connection to the WSUS server named DC1.

Image of command output

Now you are set to go!

Client and group management

Let’s start by looking at some basic client and group administration commands that you may find yourself using when you administer the server. The following example lists all clients that are registered with the WSUS server.

Get-WSUSClient

The output from the Get-WSUSClient cmdlet is shown here.

Image of command output

If you only want to list a specific client that is registered on the WSUS server, you can run the following command to get the data.

Get-WSUSClient -Computer boe-pc

The following image shows the result of running the Get-WSUSClient cmdlet. Note that it provides lots of good information about the client computer.

Image of command output

For some extra fun, you can even use the Group-Object cmdlet to group all of your clients by operating system. A command to do this is shown here.

Get-WSUSClient | Group OSDescription | Select Count,Name

The command and associated output are shown in the image that follows.

Image of command output

If you want to display all of the target groups that are on the WSUS server, you can use the Get-WSUSGroup command.

Get-WSUSGroup

Image of command output

If you want to add a client into a specific group, you can run this one-liner:

Get-WSUSClient boe-pc | Add-WSUSClientToGroup -Group TESTGROUP

Let’s verify that my client is actually in that group by using Get-WSUSClientGroupMembership.

Get-WSUSClientGroupMembership -Computer boe-pc

Image of command output

Looking at the returned data, it would appear that the client is in the TESTGROUP group.

Removing a WSUS client from a group is just as simple as using Remove-WSUSClientFromGroup. This command is shown here.

Remove-WSUSClientFromGroup -Computer boe-pc -Group TESTGROUP

One more check confirms that my client is no longer in TESTGROUP. The results are shown here.

Image of command output

As you can see, my client is now a member of Unassigned Computers, meaning that it is not a member of any group in WSUS other than the default All Computers group.

Creating a new group on WSUS is made simple by using New-WSUSGroup. Here, I create a new group named MyGroup.

New-WSUSGroup "MYGROUP" -PassThru

The results of creating the new MyGroup group are shown in the following image.

Image of command output

In addition to creating new groups, I can remove a WSUS group. It is just as easy to remove a group as it is to create one by using the Remove-WSUSGroup cmdlet. An example of doing this is shown here.

Get-WSUSGroup -Name MYGROUP | Remove-WSUSGroup

Update administration

Now for some more fun stuff: using PoshWSUS for update administration. I have two commands (Approve-WSUSUpdate and Deny-WSUSUpdate) that make approving and declining updates easy to do. Besides that, you can use Get-WSUSUpdate to look for updates on the WSUS server. Unfortunately, "Decline" is not an approved verb in Windows PowerShell, so I went with "Deny" instead.

Using Get-WSUSUpdate without any parameters will return every single update on the WSUS server. In this case, I want to see all of the Windows 7 updates.

$updates = Get-WSUSUpdate 'Windows 7'

$updates.count

$updates | Select -first 10

Image of command output

Approve-WSUSUpdate does take pipeline input from the Update object, so if you wanted to approve some updates that you queried, you could do the following:

$updates[1..10] | Approve-WSUSUpdate -Action Install -Group 'All Computers' -PassThru

Image of command output

In the same way, you can decline updates by piping the updates into Deny-WSUSUpdate. This command is shown here.

$updates[1..10] | Deny-WSUSUpdate

Image of command output

Here is a fun one! You can use Get-WSUSUpdate and New-WSUSUpdateScope to list all of the updates that arrived on or after the last Patch Tuesday (December 13, 2011) and are required by the clients, as shown here.

Get-WSUSUpdate -UpdateScope (New-WSUSUpdateScope -IncludedInstallationStates NotInstalled -FromArrivalDate "12/13/2011")

Image of command output

You can then pipe this output into Approve-WSUSUpdate:

Get-WSUSUpdate -UpdateScope (New-WSUSUpdateScope -IncludedInstallationStates NotInstalled -FromArrivalDate "12/13/2011") | Approve-WSUSUpdate -Action Install -Group 'All Computers' -PassThru

Image of command output

It is always a good idea to review each update to make sure that you want to approve it in your environment.

Reporting

So I have covered some administration examples that use this module, but what about some type of reporting capability? Well, I will show you some different types of reports that you can do with this module.

I can look for updates that have failed to install by using the following command:

Get-WSUSUpdatePerClient -UpdateScope (New-WSUSUpdateScope -IncludedInstallationStates Failed)

Fortunately for me (but unfortunate for this example), I do not have any updates that reported failures in the installation.

I can mimic what is shown in the following example of the Administration Console by using Get-WSUSUpdateSummaryPerClient.

Image of menu

Get-WSUSUpdateSummaryPerClient -ComputerScope (New-WSUSComputerScope) -UpdateScope (New-WSUSUpdateScope)

The output is shown here:

Image of command output

You can even generate a report of update approvals by using Get-WSUSUpdateApproval if you want to go back and see what has been approved or who approved a specific update. In this case, let’s go back and look at my update approvals for all of the updates from this Patch Tuesday.

$updatescope = New-WSUSUpdateScope -FromArrivalDate "12/13/2011"

Get-WSUSUpdateApproval -UpdateScope $updatescope

Image of command output

If you wanted to get a report of patches that are needed by clients in a specific group, you can give this command a run:

Get-WSUSUpdateSummaryForGroup -GroupName 'All Computers' -UpdateObject $updates[200..300]

Image of command output 

Other commands to use

Besides the commands that I have shown in this blog post, there are other commands that you may find useful during your administration of a WSUS server. So, without further ado, here are some command examples:

Show the current synchronization schedule by using Get-WSUSSubscription:

Get-WSUSSubscription

Image of command output

Listing all Install Approval rules on your WSUS server is also easy by using Get-WSUSInstallApprovalRule. For those of you who are not familiar with Install Approval, basically it allows you to set up automatic approvals of specific products for specific computer target groups. This is pretty nice if you know that you want to approve critical security updates for all of your clients every month.

Get-WSUSInstallApprovalRule

Image of command output

In the same way, you can configure a new Install Approval rule by using New-WSUSInstallApprovalRule and Set-WSUSInstallApprovalRule.

$group = Get-WSUSGroup -Name 'All Computers'

$class = Get-WSUSUpdateClassification | Where {$_.Title -eq "Updates"}

# $cat should already hold an update category (product) object retrieved beforehand
New-WSUSInstallApprovalRule -Name "Rule1" -Category $cat -Classification $class -Group $group -Enable -PassThru

Image of command output

You can even run the Install Approval rules by using Start-WSUSInstallApprovalRule.

Start-WSUSInstallApprovalRule -Name "Rule1"

If you want to locate the files for each update and where they reside on the server, you can run the Get-WSUSInstallableItem command.

$updates[0] | Get-WSUSInstallableItem

Image of command output

You can dig a little deeper to see how many files there are and exactly where they are located:

$updates[0] | Get-WSUSInstallableItem | Select -Expand Files

Image of command output

That wraps it up for my week with WSUS and Windows PowerShell and also talking about my PoshWSUS module. I have more updates planned, including adding more commands (such as publishing non-Microsoft updates to WSUS) and updating existing commands. Remember that if you see something that you would like to see added to the module or you find a bug in it, please log your issues and ideas in the CodePlex Issue Tracker. Thank you everyone for checking out my blogs and thanks to Ed for allowing me to take over a week of Hey, Scripting Guy! to talk WSUS and Windows PowerShell!

~Boe

Thanks, Boe, for an awesome week of Windows PowerShell goodness. Join us tomorrow for a new week on the Hey, Scripting Guy! blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Learn How to Use PowerShell to Run Exchange Commands Remotely


Summary: Learn how to use Windows PowerShell to run Exchange Server 2010 commands remotely by using implicit remoting.

Hey, Scripting Guy! Question Hey, Scripting Guy! I am having a problem with Exchange Server 2010. I have Windows PowerShell 2.0 installed on the server, but I am unable to connect and to do anything with it using Windows PowerShell remoting. I am forced to use RDP to make a remote desktop session, and then to launch the Exchange Server Management Shell to be able to do anything. I think this is rather stupid, especially when I can use Windows PowerShell remoting to establish a remote Windows PowerShell session on any other server in the domain, import whatever modules I need to use, and complete my work. I know that Exchange Server 2010 uses snap-ins, not modules; but still, it should work. I hate to rebuild my server, but I am about to do so because I am unable to fix whatever is broken. Can you help me? It will save me an entire weekend if you can.

—SW

Hey, Scripting Guy! AnswerHello SW,

Microsoft Scripting Guy, Ed Wilson, is here. Please do not FDISK your server! I say again, please do not FDISK your server. It will really be a waste of time. What you are no doubt experiencing is, in fact, the intended experience. You have no doubt discovered that the Exchange Server 2010 cmdlets are a little different—in fact, they are all functions. The great thing about this is that you can look at the content of every one of the functions to discover how they work.

One thing you need to know is that there is a difference between implicit remoting and explicit remoting. With explicit remoting, you create a remote session, enter a remote session, and you are dropped onto a Windows PowerShell console prompt on the remote computer. The Windows PowerShell prompt that you see is remote—it resides on the remote computer. Typing dir into the prompt displays the file system structure of the remote computer.

With implicit remoting, the commands from the remote session come to the local computer. Therefore, the Windows PowerShell prompt you see is local—it remains on your computer. Typing dir into the prompt displays the file system structure of the local computer.
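Here is a minimal sketch that contrasts the two styles; the server name ex1 matches the example used later in this post.

# Explicit remoting: the prompt moves to the remote computer
Enter-PSSession -ComputerName ex1

# Implicit remoting: the remote commands are imported into the local session
$session = New-PSSession -ComputerName ex1
Import-PSSession $session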

If I make a remote connection to a server running Exchange Server 2010, and I add the three management snap-ins for Exchange Server, Windows PowerShell displays no errors. However, when I attempt to run a common Exchange Server command, such as Get-ExchangeServer, an error appears. The commands and associated errors are shown in the image that follows.

Image of command output

The secret to using Windows PowerShell remoting to remotely manage a server running Exchange Server 2010 is to use implicit remoting instead of explicitly connecting to a remote Windows PowerShell session. Here are the steps required to create an implicit remote Windows PowerShell session.

  1. Use the Get-Credential cmdlet to obtain credentials for the server running Exchange Server 2010. Store the returned credential object in a variable.
  2. Use the New-PSSession cmdlet to create a new session on the server. Specify the ConnectionUri in the form of http://servername/powershell and supply the credential object from step 1 to the Credential parameter. Store the returned session object in a variable.
  3. Use the Import-PSSession cmdlet to connect to the session created in step 2.

The code that follows illustrates connection to a remote server running Exchange Server 2010 (named EX1) as the administrator from the iammred domain.

PS C:\> $cred = Get-Credential iammred\administrator

PS C:\> $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://ex1/powershell -Credential $cred

PS C:\> Import-PSSession $session

When the Import-PSSession command runs, a warning appears that states some of the imported commands use unapproved verbs. This is a normal warning for the Exchange Server commands, and can be safely ignored.

Note: The warning message is also why when one is writing Windows PowerShell modules, one should always use approved verbs. This avoids confusing users with the warning message. Display approved verbs by using the Get-Verb cmdlet.
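For example, here is a quick check for a verb that is not on the approved list; an empty result means the verb is unapproved.

Get-Verb | Where-Object { $_.Verb -eq 'Decline' }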

When connected, the Exchange Server cmdlets appear in the local Windows PowerShell console, and they work as if they were locally installed. To obtain information about servers running Exchange Server, use the Get-ExchangeServer cmdlet. This command appears here.

Get-ExchangeServer

Because there are several steps involved in making an implicit remoting session to a remote server running Exchange Server 2010, it becomes a good candidate for a function. Because the function is a bit long (with the comment-based Help), I also uploaded it to the Scripting Guys Script Repository. Here is the complete New-ExchangeSession function.

Function New-ExchangeSession
{
  <#
   .Synopsis
    This function creates an implicit remoting connection to an Exchange Server
   .Description
    This function creates an implicit remoting session to a remote Exchange
    Server. It has been tested on Exchange 2010. The Exchange commands are
    brought into the local PowerShell environment. This works in both the
    Windows PowerShell console as well as the Windows PowerShell ISE. It requires
    two parameters: the computer name and the user name with rights on the remote
    Exchange server.
   .Example
    New-ExchangeSession -computername ex1 -user iammred\administrator
    Makes an implicit remoting connection to a remote Exchange 2010 server
    named ex1 using the administrator account from the iammred domain. The user
    is prompted for the administrator password.
   .Parameter ComputerName
    The name of the remote Exchange server
   .Parameter User
    The user account with rights on the remote Exchange server. The user
    account is specified as domain\username
   .Notes
    NAME:  New-ExchangeSession
    AUTHOR: ed wilson, msft
    LASTEDIT: 01/13/2012 17:05:32
    KEYWORDS: Messaging & Communication, Microsoft Exchange 2010, Remoting
    HSG: HSG-1-23-12
   .Link
     Http://www.ScriptingGuys.com
 #Requires -Version 2.0
 #>
 Param(
  [Parameter(Mandatory=$true,Position=0)]
  [String]
  $computername,
  [Parameter(Mandatory=$true,Position=1)]
  [String]
  $user
  )
  $cred = Get-Credential -Credential $user
  $session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://$computername/powershell -Credential $cred
  Import-PSSession $session
} #end function New-ExchangeSession

To gain access to the New-ExchangeSession function, I dot-source the script that contains the New-ExchangeSession function into my current Windows PowerShell session. When I run the function, a credential dialog box appears. This dialog box is shown in the image that follows.

Image of credential dialog box

When I enter the credentials, the implicit remoting session starts, and I can use cmdlets for Exchange Server as if they were installed on the local computer. In the following command, I retrieve information about Microsoft Exchange Server mailbox databases.

PS C:\> Get-MailboxDatabase

Name                           Server          Recovery        ReplicationType
----                           ------          --------        ---------------
Mailbox Database 1301642447    EX1             False           None

One command that is not available is Get-ExCommand. I will talk about that tomorrow.

SW, that is all there is to using commands for Exchange Server 2010 from a remote computer. Join me tomorrow when I will talk about discovering imported commands for Exchange Server. I will also create a function to display the commands.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Gain Remote Access to the Get-ExCommand Exchange Command


Summary: Learn how to gain access to the Get-ExCommand Exchange command while in an implicit remote Windows PowerShell session.

Hey, Scripting Guy! Question Hey, Scripting Guy! I liked your idea about connecting remotely to Windows PowerShell on an Exchange Server. The problem is that I do not know all of the cmdlet names. When I am using RDP to connect to the Exchange server, there is a cmdlet named Get-ExCommand that I can use to find what I need. But when I use your technique of using Windows PowerShell remoting to connect to the Exchange server, for some reason the Get-ExCommand cmdlet does not work. Am I doing something wrong? Please help.

—JM

Hey, Scripting Guy! Answer Hello JM,

Microsoft Scripting Guy, Ed Wilson, is here. Well, it looks like my colleagues in Seattle are starting to dig out from the major snowstorm they received last week. Here in Charlotte, it has been sunny and cool. Of course, Seattle does not get a lot of 100 degrees Fahrenheit (37.7 degrees Celsius) days in the summer. Actually, the temperature is not what is so bad, but rather it is the humidity that is oppressive. A day that is 100 degrees Fahrenheit with 85% humidity makes a good day to spend in the pool, or to spend writing Windows PowerShell scripts whilst hugging an air conditioner. Back when I was traveling, the Scripting Wife and I usually ended up in Australia during our summer (and their winter)—it is our favorite way to escape the heat and the humidity. Thus, fall and winter in Charlotte is one of the reasons people move here—to escape the more rugged winters in the north. Anyway…

Yesterday, I wrote a useful function that makes a remote connection to a server running Exchange Server 2010 and brings all of the Exchange commands into the current session. This function uses a technique called implicit remoting.

It is unfortunate that the Get-ExCommand command is not available outside the native Exchange Server Management Shell, because the Exchange commands are not all that discoverable by using normal Windows PowerShell techniques. For example, I would expect to be able to find the commands via the Get-Command cmdlet, but as is shown here, nothing returns.

PS C:\> Get-Command -Module *exchange*

PS C:\>

The Get-ExCommand cmdlet is actually a function and not a Windows PowerShell cmdlet. In reality, it does not make much of a difference that Get-ExCommand is not a cmdlet, except that with a function, I can easily use the Get-Content cmdlet to figure out what the command actually accomplishes. The function resides on the function drive in Windows PowerShell, and therefore the command to retrieve the content of the Get-ExCommand function looks like this:

Get-Content Function:\Get-ExCommand

The command and output associated with that command when run from within the Exchange Server Management Shell are shown in the image that follows.

Image of command output

The following steps are needed to duplicate the Get-ExCommand function:

  1. Open the Windows PowerShell ISE (or some other script editor).
  2. Establish a remote session onto an Exchange Server. Use the New-ExchangeSession function from yesterday’s Hey, Scripting Guy! blog.
  3. Make an RDP connection to a remote Exchange Server and use the Get-Content cmdlet to determine the syntax for the new Get-ExCommand command.
  4. Use the Windows PowerShell ISE (or other script editor) to write a new function named Get-ExCommand that contains the commands discovered in step 3.

In the image that follows, I run the New-ExchangeSession function and make an implicit remoting session to the server named “ex1,” which is running Exchange Server 2010. This step brings the Exchange commands into the current Windows PowerShell environment and provides commands with which to work when I am creating the new Get-ExCommand function.

Image of command output

Here is a version of the Get-ExCommand function that retrieves all of the Microsoft Exchange commands.

Function Get-ExCommand
{
 Get-Command -Module $global:importresults |
 Where-Object { $_.commandtype -eq 'function' -AND $_.name -match '-'}
} #end function Get-ExCommand

I copied the portion of the function that retrieves the module name from the $global namespace. It came from the contents of the Get-ExCommand function from the server running Exchange Server 2010. One of the nice things about functions is that they allow the code to be read.

I added the Where-Object to filter out only the functions. In addition, I added the match clause to look for a '-' in the function name. This filter is needed because of the functions that set the working location to the various drive letters.

To search for Exchange cmdlets that work with the database requires the following syntax.

Get-ExCommand | where { $_.name -match 'database'}

That is not too bad, but if I need to type it on a regular basis, it rapidly becomes annoying.

In the original Get-ExCommand function, the function uses the $args automatic variable to determine the presence of an argument to the function. When an argument exists, the function uses that and attempts to use the Get-Command cmdlet to retrieve a CmdletInfo object for the command in question. This is helpful because it allows the use of wildcards to discover applicable Windows PowerShell cmdlets for specific tasks.

I decided to add a similar capability to my version of the Get-ExCommand function, but instead of using the $args variable, I created a command-line parameter named Name. To me, it makes the script easier to read. The following is the content of the Get-ExCommand function.

Function Get-ExCommand
{
 Param ([string]$name)
 If(!($name))
  {Get-Command -Module $global:importresults |
   Where-Object { $_.commandtype -eq 'function' -AND $_.name -match '-'} }
 Else
  {Get-Command -Module $global:importresults |
   Where-Object { $_.commandtype -eq 'function' -AND
   $_.name -match '-' -AND $_.name -match $name} }
} #end function Get-ExCommand

The first thing the Get-ExCommand function does is create the $name parameter. Next, the if statement checks whether the $name parameter was supplied on the command line. If it was not, the function runs the same command that the previous version used. If the $name parameter does exist, an additional clause that matches the value of the $name parameter is added to the filter.

The following code illustrates searching for all Exchange commands related to the database.

Get-ExCommand database

The image that follows illustrates using the Get-ExCommand function, and the associated output.

Image of command output

The complete Get-ExCommand function, including comment-based Help, appears in the Scripting Guys Script Repository.

JM, that is all there is to gaining access to the Get-ExCommand command in a remote Windows PowerShell session. Join me tomorrow for more cool stuff. Until then, keep on scripting.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Audit Changes Made to Exchange Server 2010


Summary: Learn how to use a new Exchange Server 2010 cmdlet to audit via Windows PowerShell changes made to the server.

Hey, Scripting Guy! Question Hey, Scripting Guy! I am not sure this is a scripting question, but I need help, and you seem to like to help people. In fact, if you cannot help me, I might be looking for a new job. Here is the deal. We are running Microsoft Exchange Server 2010 and we are running Service Pack 1. We are currently evaluating Service Pack 2; but frankly, after the trouble we had upgrading to Service Pack 1, it might be a while. The problem is that there are several Exchange administrators in our company, and recently we have been having all sorts of Exchange Server problems. I have checked the Application Logs and the Security Logs to try to see who has been making the bad changes to our server, but I do not see anything that can help me chase it down. Please don’t tell me I have to enable something, because I would love to prove it is not me who has been messing up.

—BV

Hey, Scripting Guy! AnswerHello BV,

Microsoft Scripting Guy, Ed Wilson, is here. Back when I was a consultant, I was called in to help a company determine who had been embezzling money by manipulating their accounting database. I was able to identify the changes; unfortunately, they were all made via the Administrator account. BV, if your Exchange administrators use their own user accounts to do their work, and they are not using a generic logon account, it is very possible that changes made to your Exchange Server 2010 are logged. This is because Exchange Server 2010 has a feature called Administrator Audit Logging. The good news is that new installations of Exchange Server 2010 Service Pack 1 enable this logging by default.

To make it easier to work remotely, I wrote about the New-ExchangeSession function on Monday and the Get-ExCommand function on Tuesday in the Hey, Scripting Guy! Blog. I uploaded the two functions to the Exchange Server 2010 Helper Function library in the Scripting Guys Script Repository.

In keeping with my Windows PowerShell best practices, this function library incorporates two new aliases for the two functions. The first alias is NXS for the New-ExchangeSession function, and the second alias is GCX for the Get-ExCommand function. The first thing to do is to dot-source the two functions into my current environment. Here is an example of how to do that (note that there is a space between the period and the path to the function library). This command brings both functions and both aliases into the current environment.

. E:\data\ScriptingGuys\2012\HSG_1_23_12\ExchangeHelperFunctions.ps1

The commands to import the helper function library, use the aliases to create a new implicit remoting session to the server running Exchange Server 2010, and then use the alias to retrieve the Exchange commands appears in the image that follows.

Image of command output

After establishing an implicit remoting session to the remote server, the Windows PowerShell console is ready to manage the server. One of the cool features of Exchange Server 2010 is the Administrator Audit Logging feature, which logs when a user or an administrator makes a change to the Exchange organization. This makes it possible to trace changes back to a specific user for auditing purposes. In addition, the detailed logging provides a history of changes to the organization, which is useful from a regulatory compliance perspective or as a troubleshooting tool.

By default, Microsoft Exchange Server 2010 Service Pack 1 enables audit logging on new installations. To determine the status of audit logging, use the Get-AdminAuditLogConfig command. The use of this command and the associated output from the command are shown in the following image.

Image of command output

In a large network, it might be preferable to specify a specific domain controller from which to retrieve the administrator audit logging configuration. To do this, use the DomainController parameter. On my network, I can use the host name or the fully qualified domain name (FQDN). These two commands are shown here:

Get-AdminAuditLogConfig -DomainController dc1

Get-AdminAuditLogConfig -DomainController dc1.iammred.net

Prior to Service Pack 1, when the feature was enabled, Administrator Audit Logging sent email to a specific audit-log mailbox that was configured via the Set-AdminAuditLogConfig cmdlet, and the log was examined via an email client. Beginning with Exchange Server 2010 Service Pack 1, the audit entries reside in a hidden mailbox, and the Search-AdminAuditLog cmdlet retrieves the entries. The mailbox appears in the Users container in the Active Directory Users and Computers tool, and it is possible to obtain statistics about this mailbox by using the Get-MailboxStatistics cmdlet. This command is shown here:

Get-MailboxStatistics "SystemMailbox{e0dc1c29-89c3-4034-b678-e6c29d823ed9}"

Exchange Server 2010 Service Pack 1 has a cmdlet called Search-AdminAuditLog. When run without any parameters, the Search-AdminAuditLog cmdlet returns all records. By default, the retention period is 90 days (on a fresh Exchange Server 2010 Service Pack 1 installation). Configure the retention period by using the Set-AdminAuditLogConfig cmdlet.

When you make a change to the administrator audit logging configuration, keep in mind that the change relies on Active Directory replication, and therefore it could take up to an hour to replicate through the domain. Also keep in mind that changes to auditing apply to the entire Exchange organization—there is no granularity. The following command sets the retention period to 120 days.

Set-AdminAuditLogConfig -AdminAuditLogAgeLimit 120

To retrieve all of the admin audit logs, use the Search-AdminAuditLog cmdlet without any parameters, as shown here:

Search-AdminAuditLog

The command to retrieve all of the admin audit logs and the output that is associated with that command is shown in the image that follows.

Image of command output

It is certainly possible to pipe the results from the Search-AdminAuditLog cmdlet to the Where-Object cmdlet, but it is better to use the parameters when possible. For example, to see only changes from the administrator user account, use the UserIds parameter as shown here:

Search-AdminAuditLog -UserIds administrator
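For comparison, here is a sketch of the less efficient Where-Object approach; it assumes the Caller property that appears in the audit entries discussed later in this post.

# Retrieves every record first, and then filters on the client side
Search-AdminAuditLog | Where-Object { $_.Caller -match 'administrator' }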

To see audit log entries that occur prior to a specific date, use the EndDate parameter. The following commands retrieve audit entries from events created by the administrator user account that occurred prior to January 18, 2012.

Search-AdminAuditLog -UserIds administrator -EndDate 1/18/12

Search-AdminAuditLog -UserIds administrator -EndDate "january 18, 2012"

To review only the audit entries that are generated by a specific cmdlet, use the Cmdlets parameter. The following example retrieves only the audit entries that are generated by the Enable-Mailbox cmdlet.

Search-AdminAuditLog -Cmdlets Enable-Mailbox

The Cmdlets parameter accepts an array of cmdlet names. To find audit events that are generated by either the Enable-Mailbox or the Set-Mailbox cmdlet, use the command shown here:

Search-AdminAuditLog -Cmdlets Enable-Mailbox, Set-Mailbox

One really powerful feature of the admin auditing framework is the New-AdminAuditLogSearch cmdlet. In addition to searching the admin audit logs, this cmdlet emails a report when the search is completed. The email includes an XML attachment that contains the results from the search. The StartDate and EndDate parameters are mandatory, and they limit the size of the returned report. Reports are limited to 10 megabytes in size, and they can take up to 15 minutes to arrive in the Inbox. The following command is a single logical line command (no line continuation characters) that creates a new report of all Enable-Mailbox commands used between 1/1/2012 and 1/18/2012. The command emails the report to edwilson@iammred.net.

New-AdminAuditLogSearch -cmdlets enable-Mailbox -StatusMailRecipients edwilson@iammred.net -StartDate 1/1/2012 -EndDate 1/18/2012

The command and the output associated with the command are shown in the image that follows.

Image of command output

The image that follows is the email with the search results from the previous query.

Image of email message

The XML attachment is shown in the image that follows.

Image of XML script

Refer to the Hey, Scripting Guy! Is There an Easier Way to Work with XML Files blog for information about using Windows PowerShell to work with XML files.

BV, that is all there is to using the administrator audit log cmdlets. Join me tomorrow for more Windows PowerShell cool stuff. 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Parse XML Exchange Audit Results


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to parse XML-formatted Microsoft Exchange Server 2010 audit reports.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have a problem. The search results that return from the New-AdminAuditLogSearch cmdlet are pretty useless. Looking at all that XML in Internet Explorer is not very fun, and the Import-CliXML Windows PowerShell cmdlet is unable to make heads or tails of the XML. I can open it in Microsoft Excel 2010, but the problem is that it seems to create 30 or 40 rows for each record that is returned by the search. Can you help me? Surely, there is an easier way to do this.

—MJ

Hey, Scripting Guy! AnswerHello MJ,

Microsoft Scripting Guy, Ed Wilson, is here. Before I jump into answering your question, I want to put in a plug for International PowerShell User Group Day, which is March 19, 2012. The goal of the International PowerShell User Group Day is to have one global Windows PowerShell user group meeting. This will be a live broadcast from the Arizona PowerShell User Group in Phoenix, Arizona, and the program will include Don Jones and me. (Other speakers are still being lined up.) If you are a Windows PowerShell user group leader, you should consider moving your March meeting to March 19, so your group can dial in and participate in the meeting. I will be speaking from the Charlotte Windows PowerShell User Group meeting in Charlotte, North Carolina, and it will be a lot of fun. If you need to see if there is a Windows PowerShell user group in your area, see the PowerShell Community Groups page.  

MJ, in yesterday’s blog, Use PowerShell to Audit Changes Made to Exchange Server 2010, I talked about using the New-AdminAuditLogSearch cmdlet to run an audit and to email a report that contains the results of the audit query.

Note   I have written several blogs about working with XML from inside Windows PowerShell, and I have also had several great guest bloggers who have also addressed the topic of XML and IT Pros. From a beginner perspective, my blog called Is There an Easier Way to Work with XML Files? is a great place to start.

The XML file that the New-AdminAuditLogSearch cmdlet generates is a standard formatted XML file, and it appears in XML Notepad as shown in the image that follows.

Image of XML file

The previous image shows the properties of the first event as follows:

Caller

Cmdlet

Error

ObjectModified

OriginatingServer

RunDate

Succeeded

The following two properties contain not simple strings, but other objects. These two properties show up as additional XmlElements:

CmdletParameters

ModifiedProperties

One thing that is often confusing is that the Import-CliXML cmdlet does not import just any old XML file—it imports only the specially formatted XML that the Export-CliXML cmdlet generates. This is why your efforts to import via Import-CliXML did not work.

The easiest way to read the contents of the SearchResults.xml file is to use the Get-Content cmdlet to read the contents of the file, and then to cast the type to a System.Xml.XmlDocument type by using the [xml] type accelerator. This is much easier to accomplish than it sounds.

In the following example, the SearchResult.xml file (generated in yesterday’s blog and saved from Outlook) resides in the C:\fso folder. I then use the [xml] type accelerator to convert the text derived via the Get-Content cmdlet into an XmlDocument.

[xml]$xml = Get-Content C:\fso\SearchResult.xml

When I view the contents of the $xml variable, the SearchResults XmlElement appears as illustrated here.

PS C:\> $xml

xml                                        SearchResults
---                                        -------------
version="1.0" encoding="utf-8"             SearchResults

To view the objects that are stored in the SearchResults XmlElement, use dotted notation to access the SearchResults property. This technique is shown here.

PS C:\> $xml.SearchResults

Event
-----
{Event, Event, Event}

In this example, the SearchResults XmlElement contains three objects, each named Event. To view the objects, use dotted notation to access the Event property as shown here.

$xml.SearchResults.Event

As shown in the image that follows, the Event property contains the audit information that an Exchange administrator seeks.

Image of command output

I can now use standard Windows PowerShell techniques to analyze the data. For example, if I am only interested in the caller and cmdlet that were run during the period of the report, I can use the Select-Object cmdlet as shown here.

PS C:\> $xml.SearchResults.Event | select caller, cmdlet

Caller                                     Cmdlet
------                                     ------
iammred.net/Users/Administrator            Enable-Mailbox
iammred.net/Users/Administrator            Enable-Mailbox
iammred.net/Users/Administrator            Enable-Mailbox

I can output to a table by using the Format-Table cmdlet. The following command selects the RunDate, Caller, and Cmdlet properties, and outputs an automatically sized table.

PS C:\> $xml.SearchResults.Event | Format-Table rundate, caller, cmdlet -AutoSize

RunDate                   Caller                          Cmdlet
-------                   ------                          ------
2012-01-17T18:04:16-05:00 iammred.net/Users/Administrator Enable-Mailbox
2012-01-17T18:04:09-05:00 iammred.net/Users/Administrator Enable-Mailbox
2012-01-17T18:03:53-05:00 iammred.net/Users/Administrator Enable-Mailbox

The results stored in the $xml variable are addressable via array index notation. To view the RunDate of the first event, use the [0] notation to retrieve the first element. This technique is shown here.

PS C:\> $xml.SearchResults.Event[0].rundate

2012-01-17T18:04:16-05:00
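The same dotted notation drills into the nested XmlElements mentioned earlier. This is a sketch; the Parameter element with Name and Value children is an assumption based on the XML Notepad view.

# Expand the nested CmdletParameters of the first event (layout assumed)
$xml.SearchResults.Event[0].CmdletParameters.Parameter | Select-Object Name, Value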

One really cool way to parse the data is to select the appropriate properties and pipe the results to the Out-GridView cmdlet. It is necessary to use the Select-Object cmdlet to choose the properties because Out-GridView does not accept a complex object; a direct pipeline fails. This technique is shown here.

$xml.SearchResults.Event | select caller, rundate, cmdlet | Out-GridView

The resulting Grid is shown in the image that follows.

Image of search results

MJ, that is all there is to using Windows PowerShell to parse the XML search results from Exchange Server. Exchange Server Week will continue tomorrow when Microsoft PFE, Norman Drews, talks about using Windows PowerShell to fix annoying NDR mails.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Fix Those Annoying NDR Emails


Summary: Microsoft PFE, Norman Drews, shows how to use the Active Directory cmdlets and the Exchange cmdlets to clean up distribution groups.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome our guest blogger, Norman Drews. Norman is a Microsoft PFE, and I had the privilege to meet with him when I was speaking at Geek Ready last year in San Diego. While I was out there, I also took the opportunity to encourage him to write a few guest blogs for us. Here is the first one. Now, let’s see what he has to say about himself…

Photo of Norman Drews

At Microsoft, I am a developer premier field engineer. I am part of a Microsoft group that supports our premier customers with issues in their environment both onsite and remotely. I handle application profiling, user-mode debugging, application compatibility, and currently the IT environment by way of Windows PowerShell. I live in the Houston area, and I have a beautiful wife and two kids. I speak a few languages (English, German, and Spanish) and enjoy being on the computer, watching television, and spending time with the family. The usual stuff. I started a blog after customers said that I should post my scripts online. It is a little empty right now, but I plan to add useful scripts over time.
Blog: $AutomateThisAndThat = $true with Windows PowerShell

Recently, I have been doing many Windows PowerShell workshops to help spread the “word.” I am often surprised that there are so many things that could be automated quickly and easily with Windows PowerShell, but people don’t do it. Perhaps they are just not aware of how easy it can be.

I was teaching a Windows PowerShell class a couple months back, and we had just finished a day of fundamentals. The problem with fundamentals is that they are not applied. Some of the students started saying how they had over 50,000 Exchange mailboxes and 10,000 distribution groups that they had to clean up. They were getting tons of non-delivery report (NDR) emails because it was not uncommon to have several mailboxes disconnected or removed on a daily basis. Their current process was to open each user account and manually remove it from active distribution groups. Sure, they could have been using dynamic distribution groups, but that was not the case here. So, I said, “Let us automate the process!”

After a few moments, we came up with a couple approaches:

  • Proactive: Enumerate all distribution groups and check if the mailbox is still valid (based on criteria)
  • Reactive: Wait for the NDR and then remove the user from the distribution groups

I created my test group, MyTestGroup, and I added a few users, including John Doe, who has ProhibitSendReceiveQuota set to 0.

The following image shows MyTestGroup:

 Image of command output

Here is the mailbox information for John Doe:

Image of command output

Active Directory and Exchange cmdlets to use

Following are the Active Directory cmdlets and the Exchange cmdlets that we will use in the scripts.

Get-ADUser

Get-ADGroup

Get-ADGroupMember

Remove-ADGroupMember

Get-Mailbox

Get-MailboxStatistics

For more information about using the Active Directory cmdlets, refer to these Hey, Scripting Guy! blogs.

For more information about using the Exchange commands, see the following blogs in Hey, Scripting Guy!

Proactive approach: Enumerate all distribution groups

Here is the basic idea:

  1. Get users from all distribution groups, even the nested ones (-Recursive).
  2. Keep track of processed users so we don’t work too hard.
  3. Check if the users’ mailboxes are disconnected, nonexistent, or have ProhibitSendReceiveQuota set to 0.
  4. Loop through their mail-enabled group memberships and remove them.

You probably would not want to run the following script every hour on-the-hour, but you could set it up as a scheduled task to run daily or weekly. For more information about running Windows PowerShell scripts via a scheduled task, see these Hey, Scripting Guy! blogs.

param
(
    $Path = "ou=myTestOU,dc=exgcore,dc=lab"
)

# Import the Active Directory module
Import-Module -Name ActiveDirectory

# Set up an array to track processed users
$UsersAlreadyChecked = @()

# Loop through all groups at a specific location
foreach($Group in (Get-ADGroup -Filter * -SearchBase $Path -SearchScope onelevel))
{
    # Retrieve all group members, including those in nested groups
    $GroupMembers = Get-ADGroupMember -Identity $Group.DistinguishedName -Recursive

    foreach($GroupMember in $GroupMembers)
    {
        # Store the DistinguishedName
        $MemberDN = $GroupMember.DistinguishedName

        # Make sure we haven't already processed this user
        if($UsersAlreadyChecked -notcontains $MemberDN)
        {
            # Grab mailbox information (suppress the error when a mailbox no longer exists)
            $MailBox = Get-Mailbox -Identity $MemberDN -ErrorAction SilentlyContinue
            $MailBoxStats = Get-MailboxStatistics -Identity $MemberDN -ErrorAction SilentlyContinue

            # Set removal conditions
            if(($MailBox -eq $null) -or (($MailBoxStats.DisconnectDate) -or ($MailBox.ProhibitSendReceiveQuota -eq 0)))
            {
                # Get group memberships for the current user
                $User = Get-ADUser -Properties memberof -Identity $MemberDN
                $UserDN = $User.DistinguishedName

                # Loop through those groups and remove the user from mail-enabled ones
                foreach($ADGroupDN in $User.memberof)
                {
                    # Assume mail-enabled since there is a populated mail attribute
                    $Group = Get-ADGroup -Properties mail -Identity $ADGroupDN
                    if($Group.mail -like "*")
                    {
                        # Remove the user from the group defined in $ADGroupDN
                        Remove-ADGroupMember -Identity $ADGroupDN -Members $UserDN
                    }
                }

                # Keep track of the user so we don't check him again
                $UsersAlreadyChecked += $UserDN
            }
        }
    }
}

The following image shows the current group members. I run the previous script, saved as EnumerateGroupsAndRemove.ps1, and then show the members once more. We should see that John Doe has been removed. (We can ignore the unrelated warning message; it is for a different user.)

Image of command output

Note   Remove-ADGroupMember will prompt you for confirmation, so if you really want to remove the user and suppress the prompt, add -Confirm:$false to the end of the statement.
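With that change, the removal line in the script becomes the following.

Remove-ADGroupMember -Identity $ADGroupDN -Members $UserDN -Confirm:$false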

Reactive approach: Wait for the NDR and act

The idea here is similar, except we are responding to an NDR and then cleaning up that user’s memberships. We do the following:

  1. Call Get-ADUser and retrieve the user’s group memberships via the MemberOf attribute.
  2. Enumerate through those groups and check if they are mail-enabled.
  3. Remove the user from the mail-enabled group.

 

param
(
    $UserDN = "cn=John Doe,ou=myTestOU,dc=exgcore,dc=lab"
)

# Import the Active Directory module
Import-Module -Name ActiveDirectory

# Get current group memberships for the user
$User = Get-ADUser -Properties memberof -Identity $UserDN

# Loop through those groups, specifically those that are mail-enabled
foreach($ADGroupDN in $User.memberof)
{
    # Assume mail-enabled since there is a populated mail attribute
    $Group = Get-ADGroup -Properties mail -Identity $ADGroupDN
    if($Group.mail -like "*")
    {
        # Remove the user from the group defined in $ADGroupDN
        Remove-ADGroupMember -Identity $ADGroupDN -Members $User.DistinguishedName
    }
}

Now I am going to show the current group members. I run the previous script, saved as CleanupUserDistributionGroups.ps1, and show the members once more. We should see that John Doe is removed.

 Image of command output

Voila!

It currently processes a single user, but with a little modification, the script can be written to accept multiple users, as shown in the sketch that follows.
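Here is a sketch of that modification: type the parameter as a string array and wrap the body in an outer loop.

param
(
    [string[]]$UserDN = @("cn=John Doe,ou=myTestOU,dc=exgcore,dc=lab")
)

# Import the Active Directory module
Import-Module -Name ActiveDirectory

foreach($CurrentDN in $UserDN)
{
    # Get current group memberships for this user
    $User = Get-ADUser -Properties memberof -Identity $CurrentDN

    foreach($ADGroupDN in $User.memberof)
    {
        # Assume mail-enabled since there is a populated mail attribute
        $Group = Get-ADGroup -Properties mail -Identity $ADGroupDN
        if($Group.mail -like "*")
        {
            Remove-ADGroupMember -Identity $ADGroupDN -Members $User.DistinguishedName
        }
    }
}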

As you can see, it is not really all that difficult. We wrote the scripts during a classroom break—about 20 minutes of FAST typing and TAB completion, and it saved the customer tons of tedious future work. I did not have an Exchange Server environment set up, but the students were Exchange administrators, and they set up a test environment to confirm that it worked for them.

In the previous scripts, I used the DistinguishedName Active Directory attribute, but we can basically use anything that uniquely identifies the Active Directory object. In addition, I assumed that the Exchange snap-in and the Active Directory module (download RSAT) are loaded. If not, you need to load them via Add-PSSnapin and Import-Module, respectively.

Have fun, comment your code, and test it before production deployment!

~Norman

Thank you, Norman. This is an excellent real-world type of scenario.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Why Should I Learn PowerShell? Real World Example Saves the Day!


Summary: Guest blogger, Tim Bolton, shares a real world story that brings home the need to learn Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here, and it’s time for the Weekend Scripter. Today we have a guest blogger, Tim Bolton.

Photo of Tim Bolton

Tim Bolton has been involved in the IT pro community for over 18 years. He holds the Microsoft certifications MCITP: Enterprise Administrator on Windows Server 2008 and MCTS: Microsoft Exchange Server 2010, Configuration. Tim is currently working as a Microsoft consultant and systems engineer at Software Solutions. He and his family have recently relocated to the Dallas/Fort Worth area.

Blog: Tim Bolton – MCITP – MCTS, Sharing information about IT and things I have broken, fixed, and seen 
Twitter: @jsclmedave
LinkedIn: Tim Bolton

Take it away Tim…

Working within a large corporate environment can produce some strange IT issues. Often these issues demand immediate attention and answers, usually with a manager hovering over you.

Recently, while I was working in one of those large corporate environments—one with a large network that had just inherited over a thousand servers—a security issue was raised. Each of the production servers is required to have RSA installed and running to ensure secure logons.

Because this was one of those scenarios where the servers had only recently been inherited, it proved extremely difficult to provide an accurate answer to the question, “Can you confirm that the RSA service is running on all of the servers within our scope?”

SCCM was not an option at this time, and management wanted an answer “now,” so there was no time to open a service ticket for assistance. Logging on to over a thousand servers to verify whether the RSA service was running was obviously not an option, and other third-party applications were not at our disposal.

Windows PowerShell to the rescue!

With guidance from Windows PowerShell MVP, Claus Nielsen, who happened to be online and available, I was able to provide the needed answer in a timely manner. I already had a text file (MyServers.txt) that listed all of the servers in our scope, so that part was already accomplished.

The service that the RSA application runs as is named OASVC_Local. I simply needed to check for that service on each server in the text file.

Using the appropriate credentials, I pulled the servers from the text file by using Get-Content, and I placed them into a variable that I called $Servers. Here is the command I used.

$Servers = Get-Content "C:\Temp\MyServers.txt"

Now, using the Foreach statement, I started checking for the service on the remote servers. Foreach loops through each server in the $Servers variable; on each pass, the $Server variable holds one item from that list. By using $Server (the computer name in this case) and Get-WMIObject to connect to each remote server, the script queries for the OASVC_Local service and populates the $Service variable with a service object that contains the properties of that service. Here is the command I arrived at.

Foreach ($Server in $Servers)

{$Service = Get-WMIobject win32_service -computername $Server -Filter "Name = 'OASVC_Local'"

I also want to determine whether the service is running. I check by using an If statement: if the state equals “running,” the script writes "RSA is Running for $Server" to a file called RSA.txt and also writes it to the console screen. I wanted the results to appear on the console for quick viewing, in addition to a text file that I could search for the keyword “NOT” to identify servers that needed attention. (Note how the NOT-running branch uses -BackgroundColor red.) Here is that portion of the script.

If ($Service.state -eq "running")

 {

Write-Output "RSA is Running for $Server" | out-file -append C:\Temp\RSA.txt

Write-Host "RSA is Running for $Server”

} else

 {

Write-Output "RSA is NOT Running for $Server" | out-file -append C:\Temp\RSA.txt

Write-Host "RSA is NOT Running for $Server" -BackgroundColor red

}

}

The following image shows the commands that I typed and the output that I received in the Windows PowerShell console.

Image of command output

Here is the script in its entirety. Simple, quick—and it provided me with the information that I needed in seconds. I actually found fewer than 10 servers out of a thousand that needed to be checked as to why the RSA service was not running. This was a manageable task, which was corrected before the lunch hour.

$Servers = Get-Content "C:\Temp\MyServers.txt"

Foreach ($Server in $Servers)

{$Service = Get-WMIobject win32_service -computername $Server -Filter "Name = 'OASVC_Local'"

If ($Service.state -eq "running")

 {

Write-Output "RSA is Running for $Server" | out-file -append C:\Temp\RSA.txt

Write-Host "RSA is Running for $Server"

} else

 {

Write-Output "RSA is NOT Running for $Server" | out-file -append C:\Temp\RSA.txt

Write-Host "RSA is NOT Running for $Server" -BackgroundColor red

}

}
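
As a quick way to pull the servers that need attention out of the log afterward, a one-liner along these lines works (the path matches the script above):

Select-String -Path C:\Temp\RSA.txt -Pattern "NOT"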

Without guidance and explanation from Windows PowerShell MVP, Claus Nielsen, this would have taken “me” most of the day to put together—time that I did not have. This is yet another “real world” reason, echoed throughout the IT community, to learn Windows PowerShell. How many of us can reach out to a Windows PowerShell MVP who “may” be available to assist with management’s emergencies? I was lucky today, but what will my managers ask tomorrow?

If you are an IT administrator, I would highly suggest that you become familiar with Windows PowerShell. It may save the day and make you out to be the “go to” person when an issue arises. This may also come into play when reviews are due. Harness the power of Windows PowerShell to show your company why you and your skills stand out above the others.

~Tim

Thank you, Tim. This is a great real world example of using Windows PowerShell to save the day! It is also a tribute to the amazing Windows PowerShell community.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Multiply PowerShell Strings to Customize Text Underlines


Summary: Create a Windows PowerShell function that accepts pipelined input and creates a variable length underline that uses various characters.

Microsoft Scripting Guy, Ed Wilson, is here. One of the things I really enjoy doing is reading the comments that various people post on the Hey, Scripting Guy! Blog. Although I do not respond to every comment, I do read them all. I would like to respond to each one, but I have a tendency to get bogged down at times, which precludes me from doing so.

Anyway, a little over a month ago, I wrote a pretty cool blog: Use PowerShell and ASCII to Create Folders with Letters. This blog generated a decent number of comments. A comment by JV mentioned that you could multiply letters. This is a cool trick that I have used for years. To multiply strings, you use the multiplication operator shown in the code that follows.

"a" * 5

It is also possible to multiply longer strings. This technique is shown here.

"This is a string" * 2

One of the really cool things that I use the string multiplication trick to do is to create an underline that is exactly the same length as the string it highlights. To do this, I use the Length property of the string and supply that to the multiplication operator along with the desired line separator to use. In the code that follows, I assign a string to the variable $a. Next, I use the Length property, which is a property that always exists on System.String objects, to determine the length of the string. I use the length of the string to determine how many times I want to multiply the underscore character (“_”). Next I display the string, and finally, I display the newly created underline. The code is shown here.

$a = "this is a string"

$b = "_" * $a.length

$a

$b

As a further test of this technique, I create a longer string, calculate the length of the new string, create a new underline character, and once again display the output. This is shown in the image that follows.

Image of command output

When you know that you can multiply strings and use the technique to create a custom-sized underline, the next step is to turn it into a simple function. The complete New-Underline function appears here.

Function New-Underline

{

 [CmdletBinding()]

param(

      [Parameter(Mandatory = $true,Position = 0,valueFromPipeline=$true)]

      [string]

      $stringIN,

      [string]

      $char = "_"

 )

  $underLine= $char * $stringIn.length

  $stringIn

  $underLine

} #end function new-underline

The first thing I do is tell the function to use the CmdletBinding attribute; this tells the function to behave as if it were a cmdlet when processing parameters, and excess arguments passed to the function that do not have defined parameters generate an error. Next, to create the input parameters, I use the Param keyword. I then specify parameter attributes to make the first parameter mandatory, identify it as position 0, and accept values passed along the pipeline. These first few lines of code are shown here.

Function New-Underline

{

 [CmdletBinding()]

param(

      [Parameter(Mandatory = $true,Position = 0,valueFromPipeline=$true)]

Next, the $stringIn variable is specified to be a string, as is the $char variable. The $stringIn variable holds the input that is passed to the function (the input to be underlined), and the $char variable holds the character to use for the underlining. The $char variable is assigned a default value of “_”. This portion of the function is shown here.

      [string]

      $stringIN,

      [string]

      $char = "_"

 )

The main part of the function uses the string multiplication technique mentioned earlier. The code retrieves the value to use for the underline from the $char variable and multiplies it by the length of the string. The $underline variable stores the newly created underline. Next the values contained in the $stringIn and the $underLine variables return from the function. This portion of the function is shown here.

  $underLine= $char * $stringIn.length

  $stringIn

  $underLine

} #end function new-underline

I obtain the full path to the script that contains the New-Underline function by using the Windows PowerShell ISE object model. Here is the code I used to easily retrieve the path.

$psise.CurrentFile.FullPath

I copy and paste that into the Windows PowerShell console, and then I use dot-sourcing to bring the function into the current Windows PowerShell environment. Here is the code I used to do that (the period is the first character on the line, followed by a space, then the path to the script).

. E:\data\ScriptingGuys\2012\HSG_1_23_12\new-underline.ps1

I then test the function to ensure that it works properly. The image that follows shows that the function works great.
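
For example, commands along these lines exercise both the direct parameter and the pipeline (the test strings are mine):

New-Underline -stringIN "this is a string"
New-Underline -stringIN "this is a string" -char "="
"input from the pipeline" | New-Underline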

Image of command output

Well, that is about all for now. I hope you enjoy the rest of your weekend. Keep on scripting!

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Scripting Guys Announce the 2012 PowerShell Scripting Games


Summary: The Microsoft Scripting Guys formally announce the 2012 Scripting Games featuring Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, here. It must be spring—at least in the Charlotte, North Carolina area in the United States. Last week, I had the 2012 Scripting Games kick off meeting with management, and I am really looking forward to this year’s games. Dr. Scripto is finally thawing out from the snowy winter weather, and he is ready for spring and the games. (Thanks to my editor, Dia Reeves in Redmond Washington, for the picture.)

Photo of Scripting Guy

In fact, I have been quietly planning for several months now. There will be two categories: Beginner and Advanced. Just like last year, the beginner’s division is really for beginners. If you have been wanting to learn Windows PowerShell, but you did not know how to go about it, you definitely need to sign up for the 2012 Scripting Games to give yourself a bit of added incentive. If you want to take your skills to the next level, you will want to participate in the advanced category.

In a major departure from previous rules, if you want to compete by using the beta version of Windows PowerShell 3.0, you are welcome to use it. You will be limited to the most recent version that is publicly available, and you must specify that when you submit your scripts.

I will be putting the finishing touches on the Study Guide for the 2012 Scripting Games to help you study and prepare for the games. The Study Guide will be available on February 5, 2012. This guide will be useful for both beginning and advanced scripters. In the meantime, you should check out the study guide for the 2011 Scripting Games and the events from the 2011 Scripting Games. You might also want to review the study guide for the 2010 Scripting Games, in addition to the events from the 2010 Scripting Games.

Even if you are not planning to enter the 2012 Scripting Games (and I cannot fathom why because they are fun, free, and functional) the materials from the 2010 and 2011 Scripting Games combine to create a great hands-on lab for learning Windows PowerShell scripting.

The 2012 Scripting Games will officially launch on Monday April 2, 2012. Just like last year, things will be a bit different from a Windows PowerShell perspective, and I intend to put scriptwriters of all levels of accomplishment through the paces of ten real-world-based scenarios.

The second thing to mention is that each event is live for a week before the answers become available for viewing. This means that Event 1 goes live on Monday, April 2, 2012, and you will be able to submit your answer on that same date. However, no one will be able to view the answers that are submitted to PoshCode for Event 1 prior to Monday, April 9, 2012. On Monday, April 9, 2012, submissions for Event 1 cease. This adds to the mystery of the event and heightens the suspense.

There are probably a dozen tweaks and improvements to the 2012 Scripting Games, but the essential elements are unchanged. We are continuing our partnership with PoshCode this year because I think it is cool to collaborate with Microsoft MVPs, and I am all about community participation. Besides all that, the PoshCode people are fun to work with. 

What else do I need to say? Oh yeah, how about prizes? There will be awesome prizes this year…stay tuned for that announcement. How about celebrity judges, expert commentators, and other stuff? Check, check, and check!

One last thing…

On Twitter, I will be using the tag #2012SG, so it will be easy to filter out tweets related to the games. Be sure to check out the Scripting Guys Facebook page too.

I hope you are as excited about the 2012 Scripting Games as I am—they will be awesome!

Oh wait!

The Scripting Wife just told me she will enter the 2012 Scripting Games again this year (of course, because she is the Scripting Wife, she is not eligible for any prizes, and her standing in the games does not affect the standing of anyone else). Because she is participating in the games, you should expect to see some more Scripting Wife blogs popping up.

I would love you to follow me on Twitter or Facebook. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Beat the Auditors, Be One Step Ahead with PowerShell


Summary: Microsoft PFE, Georges Maheu, opens his security assessment toolbox to discuss a Windows PowerShell script he uses to look at Windows services.

Microsoft Scripting Guy, Ed Wilson, is here. Our guest blogger today is Georges Maheu. Georges is one of my best friends, and I had a chance to work with him again during my recent Canadian tour. While I was there, I asked him about writing a guest series, and we are fortunate that he did (you will love his four blogs). Here is a little byte about Georges.

Photo of Georges Maheu

Georges Maheu, CISSP, is a premier field engineer (PFE) and security technology lead for Microsoft Canada. As a senior PFE, he focuses on delegation of authority (how many domain admins do you really have?), on server hardening, and on using his scripts to perform security assessments. Georges has a passion for Windows PowerShell and WMI, and he uses them on a regular basis to gather information from Active Directory and computers. Georges also delivers popular scripting workshops to Microsoft Services Premier Support customers. Georges is a regular contributor to the Microsoft PFE blog, OpsVault.

Note: All the scripts and associated files for this blog series can be found in the Script Repository. Remember to unblock and open the zip file.

Take it away, Georges…

When premier customers call me, it is either because they have been compromised or because they are being audited. In either case, the first step is always the same: make an inventory. To assess risk, you need to understand what you are managing. Many customers have fancy gizmos to detect intrusion, but they often fail to look at the most simple things.

Hey, if the barn doors are wide open, why look for mouse holes?

Image of barn

So, what does all of this have to do with auditors? Well, auditors (read: accountants) have investigated hundreds of attacks and concluded that several of them were carried out by using generic accounts as penetration vectors. Therefore, being accountants (read: very organized), some auditors are now assessing generic accounts as part of their audits.

Here is a common scenario: An IT Pro created a generic account in 1999 to perform backups. Back then, the backup software required this service account to be a domain admin account. The password policy was six characters without complexity. Today, the account is still active; the password has never been changed.

Auditors come along and provide you with a list of accounts with “no password required” and a list of accounts with obsolete passwords. Your manager tells you to investigate where these accounts are being used and wants you to create a remediation plan to change the passwords on these accounts.

Generic accounts can be used by people (guest and administrator are two examples), used by applications, or used to run services. My security tool box contains several small scripts, an approach that I prefer to a Swiss army tool that tries to do everything.

Today, we will look at one of my tool box scripts. This script will make an inventory of all services running on your computers and will identify which ones are using non-standard service accounts. The inventory is presented in an Excel spreadsheet (accountants and managers love spreadsheets, so this is a bonus).

When you write scripts, always consider two factors: complexity and capacity (or performance). This first version of the script is linear and simple, but slow; it will do the work for a small to mid-size environment. Depending on your environment, it may take up to 15 minutes per server.

The first step is to read the list of targeted computers. You could get this list directly from Active Directory, but most people prefer to have control over the computers that are being queried. In any case, a simple Windows PowerShell command can be used to create such a list:

([adsisearcher]"objectCategory=computer").findall() |

   foreach-object {([adsi]$_.path).cn} |

   out-file -Encoding ascii -FilePath computerList.txt

Check the results with:

get-content computerList.txt

Now that a list of computers has been created, let’s open the tool box and look at portions of this first version of the script. Because you cannot always know where the script will be run from, the path is stored programmatically in the $scriptPath variable instead of being hard-coded. Because performance is important, time is also tracked.

clear-host

$startTime = Get-Date

$scriptPath = Split-Path -parent $myInvocation.myCommand.definition

Creating an Excel spreadsheet is done with Office automation. The DisplayAlerts property is set to $false to avoid the file-overwriting prompt when the file is saved at the end of the script. If the Visible property is set to $true, make sure not to click inside the Excel spreadsheet while the script is running, because doing so changes the focus and generates a Windows PowerShell error.

$excel = New-Object -comObject excel.application

$excel.visible = $true # or = $false

$excel.displayAlerts = $false

$workBook = $excel.workbooks.add()

Let’s move on by initializing our environment. The computer list is stored in an array automatically with the Get-Content cmdlet, as shown here:

$computers = Get-Content "$scriptPath\Computers.txt"

$respondingComputers = 0 #Keep count of responding computers.

The properties returned by a WMI query are not necessarily in the desired order. To simplify the layout, variables that will be used to define the columns for the data in Excel are initialized here. Notice the service Name and StartName are in the first two columns. The presentation order can be rearranged based on specific requirements.

$mainHeaderRow = 3

$firstDataRow = 4

 

$columnName = 01

$columnStartName = 02

$columnDisplayName = 03

#removed some properties for brevity.

$columnSystemName = 24

$columnTagId = 25

$columnTotalSessions = 26

$columnWaitHint = 27

Names are then given to individual Excel spreadsheets and headers are added. The first sheet is used to keep statistics; the second one is really the core component of this report where all the exceptions will be recorded. These will be the accounts that the auditors will want you to manage.

$workBook.workSheets.item(1).name = "Info"  #Sheets index start at 1.

$workBook.workSheets.item(2).name = "Exceptions"

#Delete the last sheet as there will be an extra one.   

$workBook.workSheets.item(3).delete()

 

$infoSheet = $workBook.workSheets.item("Info")

$infoSheet.cells.item(1,1).value2 = "Nb of computers:"

$infoSheet.cells.item(1,2).value2 = $($computers).count

$infoSheet.cells.item(1,2).horizontalAlignment = -4131 #$xlLeft

 

$exceptionsSheet = $workBook.workSheets.item("Exceptions")

$exceptionsSheet.cells.item($mainHeaderRow,1) = "SystemName"

$exceptionsSheet.cells.item($mainHeaderRow,2) = "DisplayName"

$exceptionsSheet.cells.item($mainHeaderRow,3) = "StartName"

$exceptionsSheet.cells.item($mainHeaderRow,1).entireRow.font.bold = $true

A nice trick-of-the-trade is to write a message that might get overwritten. In this case, if data is found, this line gets overwritten by the data.

#The next line will be overwritten if exceptions are found.

$exceptionsSheet.cells.item($firstDataRow,1) = "No exceptions found"

The script can now start to process each computer. A new Excel tab is created for each computer and renamed with the computer name, a header is written, WMI is used to get the data, and the data is written to the Excel spreadsheet. Notice that the current spreadsheet is selected with $computerSheet.select(). This is not required, but it will show some activity on the screen. Again, do not click in the spreadsheet because this will change the selected cell and generate an error.

forEach ($computerName in $computers)

    {

    $computerName = $computerName.trim()

    $workBook.workSheets.add() | Out-Null

    "Creating sheet for $computerName"

    $workBook.workSheets.item(1).name = $computerName

    $computerSheet = $workBook.workSheets.item($computerName)

    $computerSheet.select() #Show some activity on screen.

 

    $error.Clear()

    $services = Get-WmiObject win32_service -ComputerName $computerName

The script performs very basic error handling. If no errors are found by the WMI query, data is written to the spreadsheet; otherwise, the error is logged in a text file. The order in which the data is written is irrelevant because variables are used to indicate in which column each field is being written.

    if (($error.count -eq 0))

        {

        Write-Host "Computer $computerName is responding." -ForegroundColor Green

        $row = $firstDataRow

        #Write headers

        $computerSheet.cells.item($mainHeaderRow,$columnCaption) = "Caption"

        #removed some properties for brevity.

        $computerSheet.cells.item($mainHeaderRow,$columnWaitHint) = "WaitHint"

       

        $computerSheet.cells.item($mainHeaderRow,1).entireRow.font.bold = $true

       

        forEach ($service in $services)

            {

            $service.displayName

            $computerSheet.cells.item($row,$columnAcceptStop) = $service.AcceptStop

            $computerSheet.cells.item($row,$columnCaption) = $service.Caption

            $computerSheet.cells.item($row,$columnCheckPoint) = $service.CheckPoint

            #removed some properties for brevity.

            $computerSheet.cells.item($row,$columnWaitHint) = $service.WaitHint

          

            $row++

The following section is where the crucial action occurs. The non-standard service accounts are identified here. This is done with an if statement and by writing the exceptions to the exception sheet.

################################################

            # EXCEPTION SECTION

            # To be customized based on your criteria

            ################################################

            if (       $service.startName -notmatch "LocalService" `

                -and $service.startName -notmatch "Local Service" `

                -and $service.startName -notmatch "NetworkService" `

                -and $service.startName -notmatch "Network Service" `

                -and $service.startName -notmatch "LocalSystem" `

                -and $service.startName -notmatch "Local System")

                {

                $exceptionsSheet.cells.item($exceptionRow,1) = $service.systemName

                $exceptionsSheet.cells.item($exceptionRow,2) = $service.displayName

                $exceptionsSheet.cells.item($exceptionRow,3) = $service.startName

                $exceptionRow++

                } #if ($service.startName

            } #forEach ($service in $services)

Then, the current spreadsheet is formatted and the responding computer counter is incremented.      

        $computerSheet.usedRange.entireColumn.autoFit() | Out-Null

        $respondingComputers++

        }

If the remote computer does not respond, a comment is added to the spreadsheet and additional information is provided in a log file.

    else #if (($error.count -eq 0))

        {

        $computerSheet.cells.item($firstDataRow,1) = "Computer $computerName did not respond to WMI query."

        $computerSheet.cells.item($firstDataRow+1,1) = "See $($scriptPath)\Unresponsive computers for additional information"

 

        $error.Clear()

        Test-Connection -ComputerName $computerName -Verbose

        Add-Content -Path "$($scriptPath)\Unresponsive computers" -Encoding Ascii `

                             -Value "$computerName did not respond to win32_pingStatus"

        Add-Content -Path "$($scriptPath)\Unresponsive computers" -Encoding Ascii `

                             -Value $error[0]

        Add-Content -Path "$($scriptPath)\Unresponsive computers" -Encoding Ascii `

                             -Value "----------------------------------------------------"

        Write-Host "Computer $computerName is not responding. Moving to next computer in the list." `

                          -ForegroundColor red

        } #if (($error.count -eq 0))

    } #forEach computer

At this point, the script is almost done. The information sheet needs to be updated with our stats, and a few things need to be formatted and cleaned up before exiting.

$exceptionsSheet.usedRange.entireColumn.autoFit() | Out-Null

 

$infoSheet.cells.item(2,1).value2 = "Nb of responding computers:"

$infoSheet.cells.item(2,2).value2 = $respondingComputers

$infoSheet.cells.item(2,2).horizontalAlignment = -4131 #$xlLeft

$infoSheet.usedRange.entireColumn.autoFit() | Out-Null

 

$workBook.saveAs("$($scriptPath)\services.xlsx")

$workBook.close() | Out-Null

 

$excel.quit()

Com objects require special attention; otherwise, they remain in memory and do not release the calling program. Some coders keep track of all the objects that they create, and release them one by one. I prefer to use a generic approach such as:

#Remove all com related variables

Get-Variable -Scope script `

    | Where-Object {$_.Value.pstypenames -contains 'System.__ComObject'} `

    | Remove-Variable -Verbose

[GC]::Collect() #.net garbage collection

[GC]::WaitForPendingFinalizers() #more .net garbage collection

Now that the script is done, let us see how long it takes to run. I always do benchmarks with a limited number of computers, and then estimate how long it will take to gather data from all the targeted computers.

$endTime = get-date

 

"" #blank line

Write-Host "-------------------------------------------------" -ForegroundColor Green

Write-Host "Script started at:   $startTime" -ForegroundColor Green

Write-Host "Script completed at: $endTime" -ForegroundColor Green

Write-Host "Script took $($endTime - $startTime)" -ForegroundColor Green

Write-Host "-------------------------------------------------" -ForegroundColor Green

"" #blank line

A test run in my home lab gathered data from 50 computers in 90 minutes. That is less than two minutes per computer. Not bad, considering we have a nice Excel spreadsheet as a final report.

Image of command output

Having an inventory is the first step in closing one of the barn doors.

Image of barn

The next screen capture shows all the services that are using nonstandard service accounts.

Image of spreadsheet

This information helps to evaluate the impact of changing the password on the SRVAccount and BobTheGreat before the auditors start asking. This is also a good opportunity to determine if that xyz service really needs to run as Administrator.

I ran this script in some customer environments where it took more than 15 minutes per computer. This could be a serious limiting factor. Tomorrow, we will explore how the performance of this script can be dramatically improved from 90 minutes to less than three minutes.

Remember, all the scripts and files for this and for the next three blogs in the series can be found in the Script Repository. Simply open the zip file.

~Georges

Thank you, Georges, for the first installment of an awesome week of Windows PowerShell goodness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Speed Up Excel Automation with PowerShell


Summary: Microsoft PFE, Georges Maheu, optimizes the Windows PowerShell script he presented yesterday.

Microsoft Scripting Guy, Ed Wilson, is here. Our guest blogger today is Georges Maheu. Georges presented a script yesterday to gather Windows services information in an Excel spreadsheet. Although the script did an adequate job for a small number of computers, it does not scale well for hundreds of computers.

Note: All of the files from today, in addition to files for the entire week are in a zip file in the Script Repository. Georges’ team blog can be found at PFE Blog OpsVault.

Take it away Georges...

Today, we look at scaling up yesterday’s script. As seen, the script makes an inventory of all services running on your servers and identifies those that are using non-standard accounts. The inventory is presented in an Excel spreadsheet.

The first version of the script was linear and simple but somewhat slow; it works well for a small to mid-size environment, but it does not scale well for larger environments.

When doing security assessments, most of the time, only a subset of computers is considered. The objective of an assessment is to locate a few examples of items to correct, not to make a complete inventory. However, when doing remediation planning, a complete inventory is required to evaluate the potential impact of applying modifications to the environment. The script presented yesterday is fine for doing assessments or complete inventory of smaller environments. To make a complete inventory of a larger environment, the script performance needs to be optimized.

Unfortunately, I am not aware of profiler tools for Windows PowerShell. A profiler is a tool that enables you to compute the time spent in each line of code when running a script. There are other ways of estimating where most of your script spends its time. One of them is to create your own timers and track time at various locations.
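
For example, in the same spirit as the $startTime and $endTime pair used at the end of the script, a minimal sketch:

$blockStart = Get-Date
# ...the portion of the script being timed...
Write-Host "This block took $((Get-Date) - $blockStart)"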

Image of command output

Another option is to use the Measure-Command cmdlet.
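
For example, to time the WMI query by itself:

Measure-Command {$services = Get-WmiObject win32_service -ComputerName $computerName}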

Image of command output

After you instrument the original script, the following statistics are generated:

WMI query: ~ 0.25 seconds for responding local computer

WMI query: ~ 1.00 seconds for responding remote computer

WMI query: ~ 20.00 seconds for non-responding computer

Ping dead computer:    ~2.5 seconds

Writing sheet headers: ~ 1.5 seconds

Writing sheet data:      ~ 120 seconds

The two main pain points are making a WMI query against a non-responding computer, which takes about 20 seconds, and writing the data to Excel. If the target computers are servers, non-responders should be few and far between. However, if they are workstations, investing more time may be needed to address the non-responding computer issues.

In any case, the big gain is obviously with writing data to Excel. After doing some research, the fastest way to write to Excel seems to be to write a range of data rather than writing it cell by cell. Several techniques could be used to achieve this data transfer. Let’s explore two of them…

First method: CSV files

The result of the WMI query can be exported into a CSV file with the Export-CSV cmdlet, and then read directly from Excel with the $excel.workbooks.open() method. Although this looks simple, there is some gymnastics and plumbing involved to get this working. We will look into this further later this week.
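
A minimal sketch of the idea (the file name is mine, and the per-computer gymnastics are left out):

# Export the WMI query result to a CSV file...
$services | Export-Csv -Path "$scriptPath\services.csv" -NoTypeInformation
# ...and let Excel parse it directly
$csvBook = $excel.workbooks.open("$scriptPath\services.csv")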

Second method: Clipboard

The result of the WMI query can be copied to the clipboard and then pasted into each spreadsheet. This method also avoids having to write the spreadsheet header.

The first step is to load the clipboard class:

Add-Type -Assembly PresentationCore

Warning: The PresentationCore assembly is pretty fussy about its housing. It needs to know in which state its apartment is!

if ($host.Runspace.ApartmentState -notlike "STA")

    {

    powershell -STA -file $myInvocation.myCommand.definition

    }

Then the original code is modified with the following lines:

$services = Get-WmiObject win32_service `

                 -ComputerName $computerName

 

if (($error.count -eq 0))

    {

    $data = ($services `

        | Select-Object  $properties `

        | ConvertTo-Csv -Delimiter "`t" -NoTypeInformation) `

        -join "`r`n"

               

    [Windows.Clipboard]::setText($data)

 

    #Const xlPasteAll = -4104

    $computerSheet.range('a1').pasteSpecial(-4104)

[Windows.Clipboard]::setText("") #clear the buffer

    $computerSheet.usedRange.entireColumn.autoFit() `

         | Out-Null

. . .

The WMI query is the same as before. The new element is the $data transformation. This step is required to allow pasting the services data to an appropriate format for Excel. If $services is pasted directly as follows:

[Windows.Clipboard]::setText($services)

Only partial data is copied, and it is not parsed into columns. Excel understands the CSV format; therefore, the $services data is converted into the CSV format with the ConvertTo-CSV cmdlet. The CSV data is then converted into one big string with the -join operator. Et voilà, the script is done. Well, almost…

The static method SetText from the Windows.Clipboard class is used to copy data to the clipboard and then the office automation PasteSpecial method is used to paste it into Excel.

A neat trick that my good friend, Ed Wilson, showed me some time ago is how to find static methods with the Get-Member cmdlet. If [Windows.Clipboard] is sent to get-member, only a subset of the methods available is obtained. But if the Static parameter is used; the following additional methods will be displayed:
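
For reference, the command is simply:

[Windows.Clipboard] | Get-Member -Static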

Image of command output

The Select-Object cmdlet is used to change the order in which the columns are output. If you remember, in the original script, variables like $columnCaption were used to determine column position.

$computerSheet.cells.item($row,$columnCaption) = $service.Caption

In this script, an array combined with the Select-Object cmdlet is used. Therefore, another modification has to be made:

$properties = `

                "Name",`

                "StartName",`

                #removed some properties for brevity

                "WaitHint"

Using an array this way avoids having to write the headers, as was needed in the previous version of the script. To change the column order, simply change the order of the properties in the array.

As you may recall, a test run in my home lab gathered data from 50 computers in 90 minutes with the original script. This new and improved version does the same 50 computers in less than three minutes, and it is 43 lines shorter!!

Image of command output

But wait, there is more…

The next challenge will be to scale this script from 50 servers to thousands of servers. Tomorrow, we will explore how we can again dramatically improve the performance of this script from three minutes down to 45 seconds by leveraging some Windows PowerShell 2.0 features.

~ Georges 

Once again, thank you, Georges, for sharing with us today. The zip file you will find in the Script Repository has all the files and scripts from Georges this week. Please join us tomorrow for Part 3 in the series.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerShell Saturday Sign-Up Announced


It is official. I will be speaking at the Columbus, Ohio PowerShell Saturday community event. This is an all day event, and I will be presenting a Beginner track all day. There will also be an Advanced track with such Windows PowerShell luminaries and Microsoft PFE’s like Ashley McGlone and Brian Jackett. The event is sponsored by the Central Ohio PowerShell Users Group, and we have worked closely with the group president, Wes Stahler.

The event will be held at the Microsoft office in Columbus, Ohio, and the seating is limited to the first 100 people who sign up. Details are being added continuously, so check the PowerShell Saturday event and sign-up page regularly.

Speed Up Excel Automation with PowerShell Jobs


Summary: Microsoft PFE, Georges Maheu, further optimizes the Windows PowerShell script that he presented in his previous two blogs.

Microsoft Scripting Guy, Ed Wilson, is here. Our guest blogger today is Georges Maheu. Georges presented a script two days ago to gather Windows services information in an Excel spreadsheet: Beat the Auditors, Be One Step Ahead with PowerShell

Although that script did an adequate job for a small number of servers, it did not scale well for hundreds of servers. Yesterday, he optimized the script to reduce the runtime from 90 minutes to less than three minutes: Speed Up Excel Automation with PowerShell

Today, Georges wants to do even better. 

Note: All of the files from today, in addition to files for the entire week are in a zip file in the Script Repository. You can read more from Georges on the PFE Blog: OpsVault.

Take it away Georges...

Today, we will see if yesterday’s script can be optimized further. As already seen, the script makes an inventory of all services running on your servers, and it identifies services that are using non-standard accounts. The inventory is presented in an Excel spreadsheet.

The first version of the script was linear and simple but somewhat slow. Here is a screen capture of the Windows Task Manager while the script is running. The script took 90 minutes to perform a 50-computer inventory. CPU usage on the test computer remained in the 20% range during this test.

Image of performance data

Yesterday, the performance of this script was improved by focusing on a few bottlenecks. Runtime has been reduced dramatically—down to two minutes and 31 seconds for the same 50 computer inventory. Not bad!

Image of command output

CPU utilization varied, but it was mostly around 15%.

Image of performance data

Now the question is, “How can we improve this further? Is it even possible?”

Using the same technique as yesterday, let’s start with the Measure-Command cmdlet:

Measure-Command {$services = Get-WmiObject win32_service `

                               -ComputerName $computerName}

This script provided the following information:

WMI query: < 0.25 seconds for responding local computer

WMI query: < 1.00 seconds for responding remote computer

WMI query: < 20.00 seconds for non-responding computer

Ping dead computer: < 2.5 seconds

In the test scenario, I used my desktop computer (no, I do not have 50 computers in my lab). The projection is 47*0.25 + 3*20 = 72 seconds. Knowing that the script actually takes 148 seconds, the overhead is about 50%. Because the time of the WMI query itself cannot be improved, the only option is to run the queries in parallel. The total amount of data will be the same, so the gain will have to come from overlapping the latency of the WMI queries.

Windows PowerShell 2.0 has functionality to support concurrency.

First method: Get-WMIObject –ComputerName

WMI can be used directly. The next screen capture shows the time it takes WMI to query 50 computers in a single statement:

Image of command output

The performance is great, but the data would need to be parsed based on the systemName field, and the script structure would have to be modified considerably.

Second method: Invoke-Command

invoke-command -ScriptBlock `

                 {

                 param($computerName);

                 Get-WmiObject `

                     -class win32_service `

                     -ComputerName $computerName

                 } `

              -asJob `

              -JobName "$i - $computerName" `

              -ThrottleLimit 3 `

              -ArgumentList $computerName `

              -ComputerName LocalHost 

Invoke-Command looks promising. It has a ThrottleLimit parameter, but that parameter limits the number of network connections, not the number of jobs running on a computer, as demonstrated in the following screen capture.

Image of command output

Invoke-Command would be nice if the queries were run on remote computers, but this is not the case. However, spreading the load across multiple computers would certainly be something to consider for the next round of optimization.

Invoke-Command needs the shell to be run as administrator. Permissions could be delegated to run Invoke-Command as a regular user, but this would increase the complexity.

Third method: Start-Job

Start-Job -ScriptBlock `

            {

            param($computerName);

            Get-WmiObject `

                -class win32_service `

                -ComputerName $computerName

            } `

          -Name "$i - $computerName" `

          -ArgumentList $computerName

Start-Job creates a background process for each WMI query. The result is great. Our script can now do 50 computers in about 42 seconds.

Image of command output

However, the script had to be modified—the first part now starts all the jobs, and the second part processes the results. The main differences are the following lines:

foreach ($computerName in $computers)

    {

    $workBook.workSheets.add() | Out-Null

    $workBook.workSheets.item(1).name = "$computerName"

    Start-Job -ScriptBlock `

                {

                param($computerName);

                Get-WmiObject `

                    -class win32_service `

                    -ComputerName $computerName

                } `

              -Name "$computerName" `

              -ArgumentList $computerName

    } #forEach computer

After all the jobs are generated, the following code retrieves the information from the job queue.

while (@(Get-Job -State Completed).count -gt 0)

    {

    "============"

    $currentJobName = (Get-Job -State Completed |

                                   Select-Object -First 1).name

    $services = Receive-Job -name $currentJobName

    Remove-Job -Name $currentJobName

#same code to paste in Excel…

When the jobs are running, they can be in one of several states: Running, Completed, or Failed. (There are other states, but these are the ones of interest for this script.) To process a job, its data needs to be retrieved when the job is completed. This is done with the Receive-Job cmdlet. Note that the data returned is serialized—this means that the script is not receiving the real objects as in the previous scripts, but rather an XML representation. In this scenario, that is irrelevant because the data will be converted to an Excel spreadsheet anyway. When the data has been retrieved, the job is removed to free up resources.
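
A quick way to see this for yourself (the exact type name is an illustration, not a guarantee):

# Objects that come back from Receive-Job are deserialized property bags,
# not live WMI objects
$services[0].pstypenames[0] # e.g. Deserialized.System.Management.ManagementObject...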

WOW, we went from 90 minutes, down to three minutes, down to 42 seconds to document 50 computers. The end result, the Excel report (for the auditors), is the same for all three versions. This is great performance but can it scale up? The test desktop computer can handle 50 computers. Can it do 100, 200, 500, or even 1000 computers?

Well, let’s try!

Here are the results for 100 computers:

Image of performance data

Image of command output

The computer list includes some nonexistent computers to test nonresponding computers.

Here are the script results:

97     seconds (01 minute  37 seconds) for 100 computers.

380   seconds (06 minutes 20 seconds) for 250 computers.

1228 seconds (20 minutes 28 seconds) for 500 computers.

3450 seconds (57 minutes 30 seconds) for 1000 computers.

 Image of performance data

Scaling up requires additional care. For example, the computer may run out of resources, and the script performance could seriously degrade. Worse, some data might be lost. Fortunately, in this case, resolving this issue has an interesting side effect: it actually increases the performance!

To avoid running out of resources, the script is tweaked based on a crude implementation of the producer–consumer design pattern. In other words, it limits the number of concurrent jobs based on the speed at which the data is generated versus the speed at which it can be imported into Excel. This improvement brought the script down to these numbers:

94   seconds (01 minute  34 seconds) for 100 computers.

215 seconds (03 minutes 35 seconds) for 250 computers.

450 seconds (07 minutes 30 seconds) for 500 computers.

904 seconds (15 minutes 04 seconds) for 1000 computers.

The last modification to the script was to move the code that retrieves the data from each job and pastes it into Excel into a function (the consumer), and to call this function from the loop that generates the jobs (the producer).

foreach ($computerName in $computers)

    {

    $computerName = $computerName.trim()

    $workBook.workSheets.add() | Out-Null

    $workBook.workSheets.item(1).name = "$computerName"        

    "Creating sheet for $computerName"

    Start-Job -ScriptBlock `

                {

                param($computerName);

                Get-WmiObject `

                    -class win32_service `

                    -ComputerName $computerName

                } `

              -Name "$computerName" `

              -ArgumentList $computerName

    if (@(get-job).count -gt 5) {Get-CompletedJobs}

    } #forEach computer

The key line is:

     if (@(get-job).count -gt 5) {Get-CompletedJobs}

This line pauses the creation of new jobs while completed jobs are removed from the job queue. You may want to experiment with the threshold (greater than 5 here), depending on the time it takes to complete the WMI query versus the time it takes to paste the results into Excel. Starting to process the data as it is being generated overlaps with the overhead mentioned previously and produces the performance gain.
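
The actual Get-CompletedJobs function is in the zip file; here is a minimal sketch of its shape (my reconstruction, not the author's exact code):

function Get-CompletedJobs
{
    # Drain every job that has finished: receive its data, paste it, remove it
    foreach ($job in @(Get-Job -State Completed))
    {
        $services = Receive-Job -Job $job
        # ...same code as before to paste $services into the sheet named $job.name...
        Remove-Job -Job $job
    }
}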

Image of command output

Optimizing is always a question of balance. How much time do you want to invest to further optimize a script? There comes a point where the gain is not worth the pain.

Having optimized the performance, the next challenge will be to scale this script from one thousand to several thousand computers. As we will see tomorrow, we will need to take a new approach to address this.

BTW, I shaved one additional minute off the 1000 computer data by changing this line:

$excel.visible = $true

To this:

$excel.visible = $false

~ Georges

Thank, Georges. The zip file that you will find in the Script Repository has all the files and scripts from Georges this week. Please join us tomorrow for the conclusion of the series.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 
