
Use PowerShell to Create Deployment Shares for MDT


Summary: Learn how to use Windows PowerShell to create deployment shares for Microsoft Deployment Toolkit 2010 Update 1.

Microsoft Scripting Guy, Ed Wilson, is here. Windows PowerShell MVP, Sean Kearney, is back with us this week to talk about the Microsoft Deployment Toolkit (MDT).

Like many new technologies from Microsoft, Windows PowerShell is a key core management piece. There are very few modern systems from Microsoft that are not enabled with Windows PowerShell. For that matter, there are very few legacy-based systems that cannot be extended and improved with Windows PowerShell.

So today, we’ll look at MDT.

For those of you who have been living on the Planet Zackarus VII in the Platarchian Sector, I’m talking about Microsoft Deployment Toolkit 2010 Update 1. MDT is sitting right up there with WinRE and Windows PowerShell as my favorite free tools from Microsoft (and if I listed all of the available free tools from Microsoft, that would be a massive blog post unto itself).

MDT can take any of the Microsoft operating systems—from Windows XP SP3 all the way to Windows Server 2008 R2 Datacenter edition—and automate its installation, including roles, features, settings, and application installations. By default, MDT without any extra effort will create what is called a Lite Touch Installation (LTI), and with some additional configuration, you can create a Zero Touch Installation (ZTI).

However, today we are going to learn how you can leverage Windows PowerShell with MDT. We are going to presume that you have at least installed MDT on a computer and you are staring at it, wondering where to start.

Stage one: Create a Deployment Share

The first thing you want to do is create a Deployment Share, which is exactly what you think it is: a share on the computer that will contain what you need to deploy systems. Within the MDT console, go to Deployment Shares.

Image of folder

Click New Deployment Share to create a share and follow those lovely step-by-step instructions—you know, the obvious stuff that bothers us all. Should I capture an image? Ask the user to set a local administrator password? Ask the user for a product key?

Let us skip the debate over the follies and foolishness of giving users product keys, setting local administrative passwords, and perhaps letting them cook dinner. Today we are using the Windows PowerShell cmdlets in MDT 2010. Therefore, when you have completed the previous task in MDT 2010, you will see a traditional View Script button.

Image of button

A typical sample script from MDT is just a basic two-liner. One line adds the snap-in to allow use of the MDT cmdlets, and the next line is your actual cmdlet that does all the real work.

Add-PSSnapIn Microsoft.BDD.PSSnapIn
new-PSDrive -Name "DS002" -PSProvider "MDTProvider" -Root "C:\MyDeploymentShare" -Description "My Deployment Share" -NetworkPath \\MyComputer\MyDeploymentShare -Verbose | add-MDTPersistentDrive -Verbose

Now, all of the cmdlets that I have used in MDT work fine except for (ironically) the first one I encountered. For whatever reason, it misses some steps when you run it—primarily, creating the new folder, enabling the share, and setting the variables in CustomSettings.ini.

I will state this again…MDT is free. Somebody made an honest mistake in a free product, so rather than be an irritant and yell and whine, I thought I would do something productive and build on what I have. So we’ll build a script to meet that need.

Our first task: Make a folder.

NEW-ITEM -type Directory -path C:\FolderNew

Then of course, share that folder. A standard MDT share allows Full control to everyone, but that is not needed. For deployment to work, you only need ReadOnly access. Therefore, we will share our new folder the “old school” way:

([wmiclass]"win32_share").Create("C:\FolderNew","ShareNew",0)

Now, if we edit the sample script in Windows PowerShell to match this and execute it, we have the following:

Add-PSSnapIn Microsoft.BDD.PSSnapIn
NEW-ITEM -type Directory -path C:\FolderNew
([wmiclass]"win32_share").Create("C:\FolderNew","ShareNew",0)
new-PSDrive -Name "DS003" -PSProvider "MDTProvider" -Root "C:\FolderNew" -Description "New Share" -NetworkPath \\MyComputer\ShareNew -Verbose | add-MDTPersistentDrive -Verbose

It would now echo (for the most part) the creation of a Deployment Share. To make this actually useful, we can use Windows PowerShell variables instead so that we can turn this into a useful script later on. To get really creative, let's have Windows PowerShell tell us the computer name and populate the UNC pathname properly. This is one of those times we are going to steal from Console land with the use of $ENV because it already has a variable with the name of the computer.
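For example (a quick illustration, not part of the final script), the ENV: drive already holds what we need:

# The ENV: drive exposes environment variables as Windows PowerShell variables
GET-CHILDITEM ENV:        # lists all environment variables
$ENV:ComputerName         # returns the name of the local computer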

Also in MDT, you’ll see a reference to a name like DS002. This is the unique name for each Deployment Share in MDT. I could do something really cool like figure out the last one in sequence, but I found that it can be any random name. Therefore, we can GET a RANDOM number and use that with DS in the name…provided it has not been used before.

$DSNAME="DS"+((GET-RANDOM 999999999).tostring().trim())

If we run the cmdlet Get-MDTPersistentDrive, we can get a list of all the Deployment folders that are currently attached to MDT and their names and locations.

$ListofShares=(GET-MDTPersistentDrive)

I can then use Select-String to compare their output and continue randomly building Deployment Share reference names until I find one that is not taken.

Do {
$DSNAME="DS"+((GET-RANDOM 999999999).tostring().trim())
} Until ( ! ( $ListOfShares | Select-string -Pattern $DSNAME))

Therefore, our modified script using variables will look like this:

Add-PSSnapIn Microsoft.BDD.PSSnapIn

$Folder='C:\FolderNew'
$Share='ShareNew'
$Description='New Share'
$ComputerName=$ENV:Computername
$UNCShare="\\$Computername\$Share"
$ListofShares=(GET-MDTPersistentDrive)

Do {
$DSNAME="DS"+((GET-RANDOM 999999999).tostring().trim())
} Until ( ! ( $ListOfShares | Select-string -Pattern $DSNAME))

NEW-ITEM –type Directory –path $Folder
([wmiclass]”win32_share”).Create($Folder,$Share,0)
new-PSDrive -Name $DSNAME -PSProvider "MDTProvider" -Root $Folder -Description $Description -NetworkPath $UNCShare -Verbose | add-MDTPersistentDrive -Verbose

Now I said “for the most part” because there are some values that are created for the automation of the operating system within CustomSettings.ini that are prompted for, but not echoed, by the sample script. I am, of course, referring to the three prompts: Prompt User for Administrator Password, Prompt User for Product Key, and Capture Image.

Nevertheless, for now we have the rudimentary structure of a simple script to automate the ability to build a New Deployment Share in MDT without touching the…*shudder*…GUI.

Tomorrow we will discuss a simple idea for editing CustomSettings.ini without using Notepad (or Edlin for you tough guys), and then we will build all of this into a new cmdlet.

Cheers and remember, the Power of Shell is in YOU.
~Sean, the Energized Tech

Thank you, Sean.

Be sure to come back tomorrow for Part 2.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Use PowerShell to Work with the MDT CustomSettings.ini File


Summary: Microsoft PowerShell MVP, Sean Kearney, shows how to use Windows PowerShell to work with the MDT CustomSettings.ini file.

Microsoft Scripting Guy, Ed Wilson, is here. This week, Windows PowerShell MVP, Sean Kearney, is our guest blogger, and he is writing about Microsoft Deployment Toolkit 2010 Update 1 (MDT).

All throughout history, there have been great teams: Abbott and Costello, Sears and Roebuck, hotdogs and ketchup. Today, we continue with another great team: MDT and Windows PowerShell.

Yesterday, we showed you how to improve the stock cmdlet for making a new Deployment Share in MDT. We found that it was like some of the greatest symphonies—a little unfinished, but we polished it up. But I mentioned that there was another feature that is not echoed in our cmdlet or script. When you create a new Deployment Share, MDT asks you three questions…three questions that cause the greatest of philosophers to debate into the night the answer to life, the universe, and...everything.

(Pssst, it’s 42, and it works in Base 13.)

Now that I have angered several philosophers, and possibly put them out of work, let us step back to MDT…

MDT prompts us with the following key questions about the basic behavior of this share:

  • Ask if an image should be captured (selected by default)
  • Ask user to set the local Administrator Password (not selected by default)
  • Ask user for a product key (not selected by default)

When you are finished with the standard wizard, these are converted to three settings in a file called CustomSettings.ini, which is located under the Control subfolder in your newly created Deployment Share. By default this file will look like this (depending on the options you have selected):

[Settings]
Priority=Default
Properties=MyCustomProperty

[Default]
OSInstall=Y
SkipAppsOnUpgrade=YES
SkipCapture=YES
SkipAdminPassword=YES
SkipProductKey=YES

We will be dealing with the last three lines and their settings. Fortunately, each variable name matches exactly what it does. The answer is obvious too. In the GUI, all you have to know is that if you selected an option (that is, the wizard should ask), the Skip setting for it becomes No; if you didn't select it, the setting stays Yes.

That sounds like a very “Boolean” answer to me, so we could actually add some simple parameters to ask for a $True/$False as part of the script. When we build it, we’ll put in three Boolean parameters like this:

[Parameter (Mandatory=$false)]
[Boolean] $PromptPassword,
[Parameter (Mandatory=$false)]
[Boolean] $PromptKey,
[Parameter (Mandatory=$false)]
[Boolean] $CaptureImage

The next task, of course, is…well…how do we edit it? Because CustomSettings.ini is a text file, we can get away with building a simple "search and destroy." This could probably be accomplished far better with regular expressions (Oh, Dr. RegEx! Wherefore art thou!), but we'll try something simpler. Let us get the data, store it away in a Windows PowerShell variable, and pass that through Select-String.

Why, you wonder? We may have to edit three lines or no lines. We need the data in memory so we can work with it as many times as we need it. So because we’re going to build on our original script, we’ll use the following script:

$CustomSettingsINI=(GET-CONTENT "$Folder\Control\CustomSettings.ini")

If you did not pick up on it, that's a little trick that you can do with variables in Windows PowerShell: if you are assigning a value that is within double quotation marks and it contains a Windows PowerShell variable (for example, $Folder), Windows PowerShell will expand the variable to its actual value when assigning it.
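For example (a quick illustration of the expansion rules, separate from the script):

$Folder='C:\FolderNew'
"$Folder\Control\CustomSettings.ini"      # expands to C:\FolderNew\Control\CustomSettings.ini
'$Folder\Control\CustomSettings.ini'      # single quotation marks do not expand; this stays literal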

Now we can use Select-String on one of the lines to find its location in the file as shown here:

$CustomSettingsINI | SELECT-STRING -pattern "SkipProductKey"

If you have played with Select-String, you know that one of the properties returned with that object is LineNumber, which is the row where it found your data within an array. So I can easily access the value of the variable $CustomSettingsINI by referencing the results of the first search.

$CustomSettingsINI[($CustomSettingsINI | SELECT-STRING -pattern "SkipProductKey").LineNumber]

But this would be off by one, because an array always starts counting at 0, whereas LineNumber starts at 1. If Select-String finds your data, you must bump back the count by 1.

$CustomSettingsINI[(($CustomSettingsINI | SELECT-STRING -pattern "SkipProductKey").LineNumber-1)]

We’ve found the row with the value. How do we edit it? The choice is up to you. You can get really fancy or just do something dead simple. (Remember, you can always go back to your script and improve how you did it later.)

I am going to get a little bit fancy. Because the parameter is going to be a Boolean $True/$False only, we can build a tiny array of “Yes” and “No” for the values, and have it flip that value simply dependent on the Boolean value. To convert a Boolean $True/$False to a value like one or zero we simply do this:

[int]$SomeBooleanValue

Now for my array, I was not kidding…it is a tiny two-member array with the values of “Yes” or “No” to edit into the value depending on our parameter:

$YESNO=("NO","YES")

You will notice the order. In our case, if the box is selected ($True), we want the Skip value to end up as "No" (the wizard should not skip that page). So we flip the Boolean before converting it: a $True prompt becomes 0 ("NO"), and a $False prompt becomes 1 ("YES"). I do that with this little bit of trickery:

$CustomSettingsANSWER=$YESNO[([int](-not $PromptKey))]

Now the rest of the answer is up to you. I will be honest. I cheated. I sat down and wrote an "If" statement for the three Boolean tests. That feels like a cheat to me because I am repeating a procedure. But then again, as I have said before, this may well get me my answer in the short term. I can always go back and improve on the script with the time that I have saved by not sitting in the GUI all day.

So let us pull the pieces together and turn all of this into a new advanced function called NEW-MDTDeploymentShare.

function global:NEW-MDTDeploymentShare()

{

[CmdletBinding()]
param(
[Parameter (Mandatory=$true)]
[String] $Folder,
[Parameter (Mandatory=$true)]
[String] $Description,
[Parameter (Mandatory=$true)]
[String] $Share,
[Parameter (Mandatory=$false)]
[Boolean] $PromptPassword,
[Parameter (Mandatory=$false)]
[Boolean] $PromptKey,
[Parameter (Mandatory=$false)]
[Boolean] $CaptureImage
)

Process
{
Add-PSSnapIn Microsoft.BDD.PSSnapIn

$ListOfShares=GET-MDTPersistentDrive
Do {
$DSNAME="DS"+((GET-RANDOM 999999999).tostring().trim())
} Until ( ! ( $ListOfShares | Select-string -Pattern $DSNAME))

# Get NETBIOS name of computer
$ComputerName=$ENV:Computername

# Create Folder for Deployment Share
new-item -type Directory -path $Folder

# Create Network Share for Deployment Share
$UNC="\\$Computername\$Share"
([wmiclass]"Win32_share").Create($Folder,$Share,0)

# Create Deployment Point within MDT
new-PSDrive -Name $DSNAME -PSProvider "MDTProvider" -Root $Folder -Description $Description -NetworkPath $UNC -Verbose | add-MDTPersistentDrive -Verbose

# Based upon supplied parameters, customize the CustomSettings.ini
# contained within the specific Deployment point
$YESNO=("NO","YES")
$CustomINI=(GET-CONTENT "$Folder\Control\CustomSettings.ini")
IF ($PromptPassword)
{
# Prompting for the password means the wizard should not skip that page
$AnswerValue=$YESNO[([int](-not $PromptPassword))]
$CustomINI[(($CustomINI | SELECT-STRING -Pattern "SkipAdminPassword").LineNumber)-1]="SkipAdminPassword=$AnswerValue"
}
IF ($PromptKey)
{
# Prompting for a product key means SkipProductKey should be NO
$AnswerValue=$YESNO[([int](-not $PromptKey))]
$CustomINI[(($CustomINI | SELECT-STRING -Pattern "SkipProductKey").LineNumber)-1]="SkipProductKey=$AnswerValue"
}
IF ($CaptureImage)
{
# Capturing an image means SkipCapture should be NO
$AnswerValue=$YESNO[([int](-not $CaptureImage))]
$CustomINI[(($CustomINI | SELECT-STRING -Pattern "SkipCapture").LineNumber)-1]="SkipCapture=$AnswerValue"
}
$CustomINI | SET-CONTENT -path "$Folder\Control\CustomSettings.ini"
}
}

There are obviously many ways that we can build on and improve on this—error checking, adding some Help, perhaps even rewriting how I prompted for the settings in CustomSettings.ini.

But what this does give us is a way to fully automate the creation of Deployment Shares. It shows us that we can also improve existing tools that are provided to us if they do not meet our needs.

So with this added to my profile or a module, I can now do something as simple as this:

NEW-MDTDeploymentShare -Folder 'C:\MyShare' -Description 'Another MDT Share' -Share 'ThisShare'

MDT will do all the work, or I can have a CSV file with configurations for a lab environment and simply execute the script to rebuild it as I need.
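As a rough sketch of that idea (the file name and column headers here are hypothetical), rebuilding a lab from a CSV file could be as simple as this:

# LabShares.csv is assumed to have Folder, Description, and Share columns
$Labs=IMPORT-CSV C:\Powershell\LabShares.csv
Foreach ($Lab in $Labs) {
NEW-MDTDeploymentShare -Folder $Lab.Folder -Description $Lab.Description -Share $Lab.Share
}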

Tomorrow we will dive into the real power of MDT—bringing in data to make that image useful, and utilizing Windows PowerShell to repeat those tasks easily.

~Sean

Thanks, Sean!

Join us tomorrow for Part 3.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Organize MDT Application and Driver Folders


Summary: Learn how to use Windows PowerShell to organize MDT folders for drivers, applications, and operating systems.

Microsoft Scripting Guy, Ed Wilson, is here. MDT Week with Windows PowerShell MVP Sean Kearney continues today.

Yesterday, we learned how to extend one cmdlet in MDT into a useful advanced function that you can use to rebuild Deployment Shares on the fly. Now we will show the raw power of MDT—the ability to easily import applications, operating systems, and drivers into a well-organized image.

Really, all you are doing on the physical side is creating a folder structure under the Deployment Share. MDT presents a nice uniform interface to perform that task and gives you Windows PowerShell cmdlets to easily repeat it.

We are going to presume that you’ve used MDT at least once and that you are familiar with its structure.

Functionally, on the Windows PowerShell side (and even in MDT), when you’re importing an operating system, application, or an out-of-box driver or package, you will have two options: create a folder or import the package into a folder.

The reason you will want to create folders is the same reason you normally would (and should) create folders. Organization. The key to MDT is organization. The more organized (and in some cases granular) your structure is, the better you can select it later when you create Task Sequences and Selection Profiles.

So let us see where Windows PowerShell fits into this equation. When we do something like create a folder to import a single application, we normally get a Window like this:

Image of settings

We will have a Windows PowerShell script provided with it like this one:

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\MyDeploymentShare"
new-item -path "DS001:\Applications" -enable "True" -Name "Base Applications" -Comments "Programs every computer will have" -ItemType "folder" -Verbose

And as we import applications into that folder structure, scripts will be provided at each stage of our journey like this one:

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\MyDeploymentShare"
import-MDTApplication -path "DS001:\Applications\Base Applications" -enable "True" -Name "Adobe Acrobat Reader 9" -ShortName "Acrobat Reader" -Version "9" -Publisher "Adobe" -Language "Klingon" -CommandLine "msiexec.exe /I AcroRead.msi /qb /norestart" -WorkingDirectory ".\Applications\Adobe Acrobat Reader 9" -ApplicationSourcePath "C:\Adobe Reader 9" -DestinationFolder "Adobe Acrobat Reader 9" -Verbose

So if we look at the script and pull it apart, here is a list of details that we need to import the applications into MDT.

ApplicationFolder
DescriptiveName
ShortName of Application
Version
Publisher
Language
CommandLine (to launch the silent install)
WorkingDirectory
SourceFolder
DestinationFolder

This could easily be data that we have in a CSV file—especially if we are doing something like rebuilding a deployment point constantly. Or perhaps we are selling a solution using MDT for clients in an SMB. There are certain applications that you might consistently place within those deployments because there is no issue with licensing (for example, Silverlight, Live Meeting Client, or other browser add-ons).

Rather than, “Every time I set up MDT for a new client, I have to type type type,” why not let Windows PowerShell do all the work while you collect all the pay?

We could take all of those variables we need and populate them into a CSV file with a format like the following (we will use the parameters we have for Adobe as our example):

"ApplicationFolder","DescriptiveName","ShortName","Version",Publisher","Language","CommandLine","WorkingDirectory","SourceFolder","DestinationFolder"
DS001:\Applications\Base Applications”,"Adobe Acrobat Reader 9","Acrobat Reader","9","Adobe","Klingon","msiexec.exe /I AcroRead.msi /qb /norestart",".\Applications\Adobe Acrobat Reader 9","C:\Adobe Reader 9","Adobe Acrobat Reader 9"

Now we have an easier way to redo this scenario. Rather than rekeying the source paths and details every time, we can have it all within a CSV file. For one application, it’s nice—but imagine if we had Silverlight or LiveMeeting Client…perhaps even some free antivirus or that WebEx client the boss insists must be on every client deployment? You could populate all of that information in your CSV file and then simply do this instead to populate all of it into MDT:

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\MyDeploymentShare"

$List=IMPORT-CSV C:\Powershell\MDTCommonApps.csv

Foreach ($App in $list) {

import-MDTApplication -path $App.ApplicationFolder -enable "True" -Name $App.DescriptiveName -ShortName $App.Shortname -Version $App.Version -Publisher $App.Publisher -Language $App.Language -CommandLine $App.CommandLine -WorkingDirectory $App.WorkingDirectory -ApplicationSourcePath $App.SourceFolder -DestinationFolder $App.DestinationFolder -Verbose

}

The beautiful part about MDT is that you can import additional patches, drivers, and applications (and even operating systems) by following the same format. Identify what information you could drop into a CSV file, get the appropriate sample scripts, and edit them as needed.

For example, let's say we are importing drivers. Here are the sample scripts that I received for creating a folder called Network and importing all of my network drivers into it after creating a subfolder for each computer model:

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\testing"
new-item -path "DS001:\Out-of-Box Drivers" -enable "True" -Name "Network" -Comments "" -ItemType "folder" -Verbose

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\testing"
new-item -path "DS001:\Out-of-Box Drivers\Network" -enable "True" -Name "DellLatitude6410" -Comments "All Network Drivers for the Latitude 6410" -ItemType "folder" -Verbose

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\testing"
import-mdtdriver -path "DS001:\Out-of-Box Drivers\Network\DellLatitude6410" -SourcePath "C:\dell\drivers\Nic\6410" -Verbose

I simplify it by removing the repetitive lines, because all the first two lines of each sample do is add the snap-in (so we can use the cmdlets in MDT) and create a temporary mount point into the MDT deployment structure.

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\testing"

new-item -path "DS001:\Out-of-Box Drivers" -enable "True" -Name "Network" -Comments "" -ItemType "folder" -Verbose

new-item -path "DS001:\Out-of-Box Drivers\Network" -enable "True" -Name "DellLatitude6410" -Comments "All Network Drivers for the Latitude 6410" -ItemType "folder" -Verbose

import-mdtdriver -path "DS001:\Out-of-Box Drivers\Network\DellLatitude6410" -SourcePath "C:\dell\drivers\Nic\6410" -Verbose

Now we can consistently import drivers and keep them organized by Model and Driver type. We need only a small set of information for a CSV file (by stepping through each cmdlet one line at a time). But really, we are performing two tasks. First, we are creating a folder structure for the drivers; and secondly, we are importing the drivers. We can get really complex or just stick to the KISS principle (Keep It Simple, Sir).

(No, I don’t mean singing, “I was made for lovin’ you baby baby.”)

The first file will simply list the folders that we want to create, with a couple of simple headers like these:

FolderName
Comments

Which means that our sample CSV file might look like this:

"FolderName","Comments"
"Network","All of Our Network Drivers"
"Video","Ummmm Video drivers too"
"Storage","I guess we need storage drivers"

The second file will be the locations of the sources and destinations of the data that we need to bring into MDT. Here is the data we will need:

DriverType
Comments
Destination Path
Model
DriverSource

Our sample CSV file could look like this:

"FolderPath","DriverType","Comments","Destination Path","Model","DriverSource"
"DS001:\Out-of-Box Drivers","Network","All Network Drivers for the Latitude 6410","DS001:\Out-of-Box Drivers","Dell","C:\Dell\Drivers\Nic\6410"

Then to import these drivers, we would simply need to run a script similar to the last one. Import the data from a CSV file, execute the cmdlet based on the imported data, and then run home for lunch. We will have to do this twice for the drivers—once for the folder creation and once to import the actual data.

Add-PSSnapIn Microsoft.BDD.PSSnapIn
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\MyDeploymentShare"

# First pass: create the driver-type folders
$Folders=IMPORT-CSV C:\Powershell\DriverFolders.csv

Foreach ($Folder in $Folders) {

new-item -path "DS001:\Out-of-Box Drivers" -enable "True" -Name $Folder.FolderName -Comments $Folder.Comments -ItemType "folder" -Verbose

}

# Second pass: create a model subfolder for each driver set and import the drivers
$List=IMPORT-CSV C:\Powershell\MDTDrivers.csv

Foreach ($Driver in $List) {

new-item -path "$($Driver.FolderPath)\$($Driver.DriverType)" -enable "True" -Name $Driver.Model -Comments $Driver.Comments -ItemType "folder" -Verbose

import-mdtdriver -path "$($Driver.FolderPath)\$($Driver.DriverType)\$($Driver.Model)" -SourcePath $Driver.DriverSource -Verbose

}

If we had to rebuild this system for disaster recovery purposes, or if we needed to easily redeploy it for clients as a solution, we simply need the drivers that we are going to deploy in addition to a little organization. By the way, if you are providing solutions for a small business, this is a great time to keep the “S” word in mind. Standardization. Trying to keep your workstation models as common as you possibly can across a client base allows you to use a more minimal driver set. You can, of course, use whatever drivers you want with MDT, but it is nice to keep a deployed image that works the same across all platforms.

If you had to repeat these techniques for importing patches or applications, the pattern is repetitive. Simply identify what data you need to repeat in the cmdlet, place that into a CSV file, and import away!
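For instance, here is a hedged sketch of the same pattern for packages (patches), assuming a hypothetical MDTPackages.csv with PackageFolder and SourceFolder columns:

# Import each package (patch) listed in the CSV file into MDT
$Packages=IMPORT-CSV C:\Powershell\MDTPackages.csv
Foreach ($Package in $Packages) {
import-MDTPackage -path $Package.PackageFolder -SourcePath $Package.SourceFolder -Verbose
}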

In all cases, you should be documenting your server build for MDT, but having that server build as an easily repeatable process pays for itself in the long term.

Next time, we will look at how to really get funky and down with MDT by fully automating the installation and creation of an MDT deployment share.

~Sean

Awesome, Sean. I am looking forward to Part 4 tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Apply a Service Pack to MDT Deployment Shares by Using PowerShell


Summary: Microsoft PowerShell MVP, Sean Kearney, shows how to use Windows PowerShell to update MDT Deployment Shares to the latest service pack.

Microsoft Scripting Guy, Ed Wilson, is here. It is time for Part 4 of Windows PowerShell MVP, Sean Kearney’s, most excellent series about using Windows PowerShell with the MDT. Yesterday, we learned about using Windows PowerShell to organize MDT application and driver folders. Today Sean will show us how to update MDT Deployment Shares to the latest service pack by using Windows PowerShell.

One of the coolest things about having MDT to control your deployment setup is the ease of changing the environment. Another cool feature comes when it is time to update the deployed software.

If you look at your Deployment Share, it’s just a well-organized structure of folders. These folders are organized right down to operating systems and applications.

Although we tell MDT the version of the operating system and the applications, it really has no clue what’s in there. To MDT, it’s just a folder with some stuff in it. Knowing this means that updating components to, for example, newer service packs is actually quite easy.

Therefore, to update Windows 7 to Windows 7 Service Pack 1, we could delete the contents of the folder that contains Windows 7 and replace it with the contents of new media with Service Pack 1 through the GUI.

We could do that…we could do that if we were living in a world of sadness where Windows PowerShell did not exist.

But because such a tragic thing never happened thanks to our great creative friends on the Windows PowerShell team, we absolutely can use Windows PowerShell for this task.

So our first task in MDT is to determine the assigned Name for your MDT Deployment Share. To do this, use the Add-PSSnapin cmdlet to add the appropriate snap-in. Next, use the Get-MDTPersistentDrive cmdlet to retrieve drive information. These two commands are shown here.

Add-PSsnapin Microsoft.BDD.PSsnapin

GET-MDTPersistentDrive

On the screen, you’ll see a list of Deployment Points that are accessible from MDT. In a new scenario, you’ll have one drive that is typically named DS001, but it could be any name. Let’s assume we’re going to work with the first one in the list. To do this, we can index directly into the first drive. This command is shown here.

(GET-MDTPersistentDrive)[0]

Now let's look at the cmdlet that assigns the DSName as a connection point. The command to assign the DSName uses the New-PSDrive cmdlet. The exact syntax appears here.

New-PSDrive -Name 'DS001' -PSProvider MDTProvider -Root 'C:\DeploymentShare'

You'll see that there are three available properties, and two of them interest us here. If we look at the display, it seems obvious that we want Name for the DSName and Path for the root of the particular structure in MDT. To recreate this from the variables that we have just pulled, we can use the commands that follow.

$PersistentDrive=(GET-MDTPersistentDrive)[0]

$DSName=$PersistentDrive.Name

$PhysicalLocation=$PersistentDrive.Path

New-PSDrive -Name $DSName -PSProvider MDTProvider -Root $PhysicalLocation

So after having done this, we’ll actually have a new drive mount point named DSxxx: for MDT. Like any other Windows PowerShell provider, you can test it by using the Test-Path cmdlet, or simply use the Set-Location cmdlet to change the working location to the newly created drive. The following command changes the working location.

SET-LOCATION "$($DSName):"
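And to verify the drive exists before using it, a quick Test-Path check (just an illustration) looks like this:

# Returns $True if the MDT provider drive was mounted correctly
TEST-PATH "$($DSName):"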

You can update the operating system in MDT in one of two ways. The most obvious way is to add it as a service pack in MDT. Unfortunately, this is the slower of the two processes because it actually has to run the service pack installation upon deployment of a new operating system. The other option is to obtain newer media with the service pack preinstalled. The media, of course, must be the same version as what you had before (for example, Enterprise, OEM, or Retail).

To update the operating system, we need to know (at least) the folder where it was installed. If you use the Get-ChildItem cmdlet, you can see the folder names on the Deployment Share. The following command assumes your DSNAME is DS001.

GET-CHILDITEM "$($DSNAME):\Operating Systems"

If you are curious about which revisions of Windows exist (assuming a folder of Windows 7), you can execute the following command.

GET-CHILDITEM "$($DSNAME):\Operating Systems\Windows 7" -recurse

Knowing the location, how could you update this? The process is easier than you think. It literally is removing the contents and replacing them. MDT has no real idea about the difference between a Service Pack 1 and an RTM release. It just holds a folder full of information and a catalog. As long as you are replacing the exact same media with a newer service pack version, we can step out to the "real world" and remove the folder contents. We don't need to use MDT for that. Remember that we obtained the Path property (stored in $PhysicalLocation) earlier to tell us where the data is.

First we want to delete the old content. To do this, use the Get-ChildItem cmdlet, and pipe the objects to the Remove-Item cmdlet. This command is shown here.

GET-CHILDITEM "$PhysicalLocation\Operating Systems\Windows 7" -recurse | REMOVE-ITEM -recurse -force

Then we update the structure with the new contents from the DVD media (or an ISO file). Let’s presume that the location of the replacement media with Service Pack 1 is on Drive E. We literally copy the data back to the new location. Here is the command to accomplish this task.

COPY-ITEM E:\* -Destination "$PhysicalLocation\Operating Systems\Windows 7" -recurse -force

TaDa! Your operating system in MDT has now been updated to the most current service pack from the provided media. The cool part is that you don’t need to do anything other than rebuild your media. All of your task sequences and other configuration details are identical to what they were before.

To update your media, we’ll need to know what name you gave it. To find out the names within Windows PowerShell, use the Get-ChildItem cmdlet in a command similar to the one shown here.

GET-CHILDITEM "$($DSName):\Media"

To update the Media for "WallaWalla", I use the Update-MDTMedia cmdlet. This command is shown here.

UPDATE-MDTMedia "WallaWalla"

Pulling all of these different commands into a single script would look like this.

Add-PSsnapin Microsoft.BDD.PSsnapin

# GET Available MDT points

GET-MDTPersistentDrive

# Work on the First Deployment Share

$PersistentDrive=(GET-MDTPersistentDrive)[0]

# Get the DSname and Physical Location

$DSName=$PersistentDrive.Name

$PhysicalLocation=$PersistentDrive.Path

# Connect up the MDT PersistentDrive

New-PSDrive -Name $DSName -PSProvider MDTProvider -Root $PhysicalLocation

# Remove the contents of an old Windows 7 Folder

GET-CHILDITEM "$PhysicalLocation\Operating Systems\Windows 7" -recurse | REMOVE-ITEM -recurse -force

# Replace the contents with fresh DVD Media

COPY-ITEM E:\* -Destination "$PhysicalLocation\Operating Systems\Windows 7" -recurse -force

# Update the ISO and content folder for 'WallaWalla'

UPDATE-MDTMedia "WallaWalla"

Sure, it took a little time to prepare. But knowing the steps involved, you could update multiple deployment points easily. The other option you have, if you're good with IMAGEX, is to apply a service pack to an operating system and then recapture it to a new .wim file. You can simply replace the INSTALL.WIM that is within your Windows 7 folder structure with your new gold one (make sure to rename it INSTALL.WIM).
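For instance, a minimal sketch of that swap (the capture path is hypothetical, and install.wim typically sits in the Sources subfolder of the imported operating system folder):

# Overwrite the existing install.wim with the recaptured, service-packed image
COPY-ITEM C:\Captures\Win7SP1.wim -Destination "$PhysicalLocation\Operating Systems\Windows 7\Sources\install.wim" -force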

Tomorrow we'll show you how to build a script that automatically creates a basic deployment from scratch. You'll of course have to tweak it, but getting it automatically built…well, isn't that what automation and scripting are all about?

~Sean

Sean! Dude, you rock! Thank you so much for this way cool blog post. I invite everyone to join us tomorrow for the exciting conclusion to Sean’s series about using Windows PowerShell and the MDT.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Automatically Install and Configure MDT


Summary: Microsoft PowerShell MVP Sean Kearney shows how to use Windows PowerShell to install and to configure MDT automatically.

Microsoft Scripting Guy, Ed Wilson, is here. Well, it is time for Part 5 of Windows PowerShell MVP, Sean Kearney’s, most excellent series about using Windows PowerShell with the MDT. Yesterday, we learned about using Windows PowerShell to update Deployment Shares to use the latest operating system service packs. Today Sean will show us how to install and to configure MDT automatically.

MDT is truly a synergy of automation. Think about the technologies it leverages. It isn’t only using Windows PowerShell—that’s only one of many pieces it uses. Telling an IT pro to only use Windows PowerShell is foolish.

I see jaws dropping all across the internet with that statement...“Sean said WHAT?!”

I love Windows PowerShell, that is true. But we have multiple tools available to us for different purposes. In an environment that doesn’t have Windows PowerShell, we still have console apps and VBScript to leverage. MDT provides the perfect example for how we can leverage all of these tools. Most of its environment is based on predefined well-written VBScript scripts. Internally, it uses components from the Windows Automated Installation Kit (Windows AIK) such as DISM.EXE and IMAGEX.EXE to build and capture its images. It uses Windows PowerShell to manage building the environment that manages MDT.

I would be a fool not to at least show you the coolest feature in MDT: automating an installation of MDT and automatically building out a basic deployment point with an operating system, applications, and a task sequence. Your eyes won't have to touch VBScript for any of this, but then again, that's MDT's job.

So we’re going to use some presumptions…just a few.

  • We have a DVD-ROM drive on Drive D with the Windows 7 64-bit edition.
  • There is a folder available on USB called F:\MDTMedia\ that has three folders, MDT, WAIK, and Windows PowerShell, that contain our scripts and .csv files.
  • There is a shared folder on the network called \\CONTOSO-FILE\Installs that has Office 2010, AdobeReader 9, and a Drivers folder for our data.

This will give us a very basic deployment that you can build on for your own purposes.

So we'll start with the installation of MDT, which comes in .msi format. To make our life easier, on whatever source you're placing MDT, you can toss the following line into a setupmdt.cmd batch file and then install without prompting by executing it.

start /wait msiexec.exe /I C:\MDTMedia\MDT\MicrosoftDeploymentToolkit2010_x64.msi /qb /norestart

Note: This is for the 64-bit version. You would change the x64 to x86 if you downloaded the 32-bit media.

If you’re feeling adventurous, you can at least automatically start the installation of the Windows AIK. The installer (for whatever reason) is designed as an interactive installation only, and it will not let you do a silent installation. But that doesn’t mean you can’t get it to at least start automatically, as shown here.

start /wait msiexec.exe /I C:\MDTMedia\WAIK\WAIKAMD64.msi

So now, without lifting a finger (well, maybe one or two), we have at least a way to get MDT to install without much fuss.

But wait! We're not done! We're going to leverage the module we built in Part 1 to build that Deployment Share. To make a mini module from the scripts in Part 1 and Part 2, save the content as a MyMDTModule.psm1 file and store it in a folder called, for example, MYMDTMODULE.
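If it helps, the skeleton of that .psm1 is nothing fancy (a sketch; the function body comes from Part 2):

# MyMDTModule.psm1 - load the MDT snap-in once, then define our function
Add-PSSnapIn Microsoft.BDD.PSSnapIn

function global:NEW-MDTDeploymentShare()
{
# Paste the [CmdletBinding()] param( ) block and the Process { } body from Part 2 here
}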

If you have the MYMDTMODULE folder that contains your PSM1 located in F:\MDTMEDIA\Powershell\MDTSTUFF\MYMDTMODULE, you can now import it with the following script.

IMPORT-MODULE F:\MDTMEDIA\Powershell\MDTSTUFF\MYMDTMODULE

We could now create the new shared folder like this:

NEW-MDTDeploymentShare -Folder 'C:\DeploymentShare' -Description 'Deployment Share' -Share 'DeploymentShare$'

Now what we need to know for the rest of this is the DSName and number assigned to our Deployment Share. From this point, we're going to run MDT cmdlets. Because the new module, MYMDTMODULE, has already loaded the MDT snap-in, Microsoft.BDD.PSSnapin, we don't need to load it twice.

And like last time, we’ll reconnect our persistent drive as follows:

GET-MDTPersistentDrive

$DSName=(GET-MDTPersistentDrive)[0].Name
$PhysicalPath=(GET-MDTPersistentDrive)[0].Path
$Description=(GET-MDTPersistentDrive)[0].Description

New-PSDrive -Name $DSName -PSProvider MDTProvider -Root $PhysicalPath

Remember that I said, “Presume we have Windows 7 on DVD”? We’re going to use a sample script provided by MDT and modify it to import the operating system.

new-item -path "$($DSName):\Operating Systems" -enable "True" -Name "Windows 7 64bit" -Comments "Windows 7 64bit" -ItemType "folder" -Verbose
import-mdtoperatingsystem -path "$($DSName):\Operating Systems\Windows 7 64bit" -SourcePath "D:\" -DestinationFolder "Windows 7 64bit" -Verbose

We’ll have MDT import some basic applications. I recommend breaking your deployment down as much as possible for easy selection and deselection at the selection profiles level. We’ll use a CSV file like before to import the applications, and we’ll use the same script as last time. Only now we can get a little more creative. Because we’ve already defined the DSName, let’s incorporate that into the script. And let’s eliminate any repetitive data in the CSV file (may as well save on typing and improve consistency wherever possible).

So our CSV file, which is located in F:\MDTMEDIA\Powershell\, will look like this now:

"ApplicationFolder","DescriptiveName","ShortName","Version",Publisher","Language","CommandLine","WorkingDirectory","SourceFolder","DestinationFolder"
"Applications\","Adobe Reader 9","Acrobat Reader","9","Adobe","Klingon","msiexec.exe /I AcroRead.msi /qb /norestart",".\Applications\Adobe Reader 9","\\Contoso-File\Installs\AdobeReader\","Adobe Acrobat Reader 9"
"Applications\","Office 2010 Standard","Microsoft Office","2010","Microsoft","Romulan","setup.exe",".\Applications\Office 2010","\\Contoso-File\Installs\Office 2010\","Office 2010 Standard"

It will now nicely work into this script:

$AppList=IMPORT-CSV F:\MDTMEDIA\Powershell\MDTCommonApps.csv

Foreach ($App in $Applist) {

import-MDTApplication -path "$($DSName):\$($App.ApplicationFolder)" -enable "True" -Name $App.DescriptiveName -ShortName $App.Shortname -Version $App.Version -Publisher $App.Publisher -Language $App.Language -CommandLine $App.CommandLine -WorkingDirectory $App.WorkingDirectory -ApplicationSourcePath $App.SourceFolder -DestinationFolder $App.DestinationFolder -Verbose

}

Now that we have an operating system and applications, we should have drivers. Remember that we did that in Part 3? Let’s adapt our CSV file and script in a similar manner so it now references the current DSName.

Our FolderName CSV file won't need to change, only the one for the drivers. The new format will be similar to the one for applications, where we no longer reference the DSName in the CSV file. Let's also improve the script a bit to remove the extra DriverFolders.csv data file. Because "DriverType" mimics the folder name, we'll let Windows PowerShell use the Test-Path cmdlet to see if the folder exists, and build it if it does not.

"FolderPath","DriverType,"Comments","Destination Path","Model","DriverSource"
"Out-of-Box Drivers","Network","All Network Drivers for the Latitude 6410","Out-of-Box Drivers","Dell6410","\\Contoso-file\Installs\Drivers\Dell\Network"
"Out-of-Box Drivers","Video","Video Drivers for Latitude 6410","Out-of-Box Drivers","Dell6410","\\Contoso-file\Installs\Drivers\Dell\Video"

And now, we add this to our main script to import drivers automatically.

$DriverList=IMPORT-CSV F:\MDTMedia\Powershell\MDTDrivers.csv

Foreach ($Driver in $DriverList) {

IF (!(TEST-PATH "$($DSName):\Out-of-Box Drivers\$($Driver.DriverType)")) {
new-item -path "$($DSName):\Out-of-Box Drivers" -enable "True" -Name $Driver.DriverType -Comments "" -ItemType "folder" -Verbose
}

new-item -path "$($DSName):\Out-of-Box Drivers\$($Driver.DriverType)" -enable "True" -Name $Driver.Model -Comments $Driver.Comments -ItemType "folder" -Verbose

import-mdtdriver -path "$($DSName):\Out-of-Box Drivers\$($Driver.DriverType)\$($Driver.Model)" -SourcePath $Driver.DriverSource -Verbose

}

The only things you have left to deploy are whatever task sequences you had defined. It’s as easy as importing an application. Create a CSV file with the defined data and then import it. The task sequences I like best are those that perfectly imitate the GUI.

Here's a sample task for a standard client replace task sequence. (This would be a typical scenario in MDT: replacing a workstation.) How did I do this? I simply went to MDT, went to create a new task sequence, and took the sample cmdlet at the end!

In this particular sequence, we have created a folder based on location in the task sequences to keep things organized, and we have predefined the Administrator password.

import-mdttasksequence -path "DS002:\Task Sequences\Wallawalla" -Name "Deploy Windows 7" -Template "Client.xml" -Comments "" -ID "Deploy001" -Version "1.0" -OperatingSystemPath "DS002:\Operating Systems\Windows 7\Windows 7 PROFESSIONAL in Windows 7 install.wim" -FullName "Contoso" -OrgName "Contoso" -HomePage "www.contoso.com" -AdminPassword "Secret123!" -Verbose

So again, we grab the details that we need for the CSV file as follows:

"LocationName","TaskName","Template","Comments","TaskID","Version","OSPath","FullName","OrgName","HomePage","AdminPW"
"Wallawalla","Deploy Windows 7","Client.xml","","Deploy001","1.0","Operating Systems\Windows 7\Windows 7 PROFESSIONAL in Windows 7 install.wim","Contoso","Contoso","www.contoso.com","Secret123!"

And now, we add a few lines in the script:

$TaskList=IMPORT-CSV F:\MDTMEDIA\Powershell\MDTTasks.csv

Foreach ($Item in $TaskList) {

import-mdttasksequence -path "$($DSName):\Task Sequences\$($Item.LocationName)" -Name $Item.TaskName -Template $Item.Template -Comments $Item.Comments -ID $Item.TaskID -Version $Item.Version -OperatingSystemPath "$($DSName):\$($Item.OSPath)" -FullName $Item.FullName -OrgName $Item.OrgName -HomePage $Item.HomePage -AdminPassword $Item.AdminPW -Verbose

}

With a little preparation, we have developed the heart of our MDT Deployment Share, an operating system, applications, and drivers. Can we go further? Absolutely!

It’s completely possible to build media from scratch, throw in a customsettings.ini file that you would normally use, and even add service packs to the system. You can even build your selection profiles from MDT cmdlets. They turn into an XML file at the end, but you can create them easily with the cmdlets.
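As a rough illustration (hedged: the profile name and XML here are made up, so check the View Script button output on your own system for the exact syntax), a selection profile is just another item on the MDT drive with an inline XML definition:

new-item -path "DS001:\Selection Profiles" -enable "True" -Name "DriversOnly" -Comments "Only our driver folders" -Definition "<SelectionProfile><Include path=`"Out-of-Box Drivers`" /></SelectionProfile>" -ReadOnly "False" -Verbose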

The beautiful part of MDT is that it is the ultimate example of automation. It uses console tools, VBScript, and Windows PowerShell—all of them used and leveraged in their own unique and special ways.

If you're curious to learn more about MDT, I posted a series on Powershell.ca that ironically doesn't contain much about Windows PowerShell. It's a fifteen-part series called MDT 2010 – From Zero to Deploy. It takes you from the basics to adding that deployment to Windows Deployment Services.

By the way, if you ever get to meet Michael Niehaus, thank him for MDT. Just like Windows PowerShell, sometimes the best things in life are free.

~Sean

Thank you, Sean, for a great series of blog posts. MDT Week will continue tomorrow when we will have a guest blog written by Microsoft evangelist and author, Matt Hester.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Learn How to Use PowerShell to Automate MDT Deployment


Summary: Guest blogger, Matt Hester, shows how to use Windows PowerShell and MDT to automate deployment of Windows.

Microsoft Scripting Guy, Ed Wilson, is here. Today, guest blogger, Matt Hester, joins us to wrap up MDT Week.

Matt Hester has been involved in the IT Pro community for over 15 years. Prior to joining Microsoft, Matt was a successful Microsoft Certified Trainer for over eight years. After joining Microsoft, Matt continues to be heavily involved in the IT Pro community as an IT Pro evangelist, presenting to audiences nationally and internationally. In his role at Microsoft, Matt has presented to audiences in excess of 5000 and as small as 10. Matt has written four articles for TechNet Magazine. In addition, Matt has published two books with Sybex.

Take a test drive with Matt’s list of favorite Microsoft products and resources: 
Evaluation Downloads for Virtualization, Management, and Training Resources

Deploying Windows 7 is a hot topic that most of you or your departments are currently tackling. Hopefully, you are familiar with the Microsoft Deployment Toolkit 2010 (MDT). If you are not, the MDT is a FREE tool that provides you with a framework to create custom images for deployment in your environment. The images can be for servers or client computers. The tool helps you put together all the necessary components (such as the operating system, applications, and drivers) into a standard image. Additionally, you can create task sequences to make sure your deployment runs correctly and in the proper order. Then MDT will put all the pieces together in a custom image that you can deploy in your infrastructure.

Image of setup steps

The MDT images can be deployed via DVD, USB, a network share, or PXE boot—and the deployment can be physical or virtual. The secret sauce behind the MDT images is they are stored in the Windows Imaging (WIM) file format. WIM is designed to help deploy Windows technologies. The WIM file format is hardware agnostic, and unlike other imaging tools, you do not need a different image for each change in hardware. The only exception is that it is recommended you have a different image for 32-bit vs. 64-bit systems. You can learn more here: ImageX and WIM Image Format.

Built-in Windows PowerShell support

What makes MDT even greater is the fact that it has built-in Windows PowerShell support. As you move through the MDT wizards, you will see the ever friendly View script button. This gives you a way to learn the syntax of the MDT. More importantly, by copying and saving the scripts, you can give yourself a quick and dirty backup to re-create your MDT environment if you need to.

You can also load the MDT Windows PowerShell cmdlets through the Microsoft.BDD snap-in. You may find it interesting that the name of the snap-in is BDD—BDD is the acronym of the former tool, Business Desktop Deployment, which was the predecessor of the MDT. Use the following script to load the snap-in:

Add-PSSnapIn Microsoft.BDD.PSSnapIn

There was also a new version of the MDT launched recently: MDT 2012 Beta 2. Although the fundamentals of MDT 2012 Beta 2 are the same as MDT 2010, there is one difference in how you access the Windows PowerShell cmdlets (the cmdlets themselves are currently the same in the beta): they are now located in a module. Use the following script to load the cmdlets with MDT 2012 Beta 2:

Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"

Create a deployment share

Overall, the cmdlets that work with MDT give you all the functionality you need to create your deployment environment. The first thing you will do in the MDT is create a Deployment Share. The Deployment Share is the key to the entire MDT environment. All of the resources you use to build your deployment images will be placed and stored in the MDT Deployment Share. Use the following script to create a Deployment Share:

new-PSDrive -Name "DS002" -PSProvider "MDTProvider" -Root "C:\DeploymentShare" -Description "MDT Deployment Share" -NetworkPath "\\2008R2DEP\DeploymentShare2$" -Verbose | add-MDTPersistentDrive -Verbose

Create a Windows PowerShell drive

After the Deployment Share is created, you can create a Windows PowerShell drive to reference in the other commands for the MDT process:

New-PSDrive -Name "DS002" -PSProvider MDTProvider -Root "C:\DeploymentShare"

Add a task sequence

The following cmdlets, which add an operating system, a driver, and a task sequence, will all leverage the Windows PowerShell drive:

#Add the operating system

import-mdtoperatingsystem -path "DS002:\Operating Systems" -SourcePath "D:\Deploymentshare\Operating Systems\Windows 7 x64" -DestinationFolder "Windows 7 x64" -Verbose

#Add the driver

import-mdtdriver -path "DS002:\Out-of-Box Drivers" -SourcePath "D:\Drivers" -ImportDuplicates -Verbose

#Add the task sequence

import-mdttasksequence -path "DS002:\Task Sequences" -Name "Corporate Windows 7 Image" -Template "Client.xml" -Comments "This will deploy Windows 7 to all the desks" -ID "dep7" -Version "1.0" -OperatingSystemPath "DS002:\Operating Systems\Windows 7 ULTIMATE in Windows 7 x64 install.wim" -FullName "Windows User" -OrgName "Contoso" -HomePage "www.bing.com" -AdminPassword "pass@word1" -Verbose

#Update the Deployment Share

update-MDTDeploymentShare -path "DS002:" -Verbose

Image of Boot window

Update a Deployment Share

Although updating the Deployment Share is a simple Windows PowerShell command, it does perform a crucial task in the deployment process. Updating the Deployment Share actually configures the Deployment Share with the boot environment. This environment contains all the common files and scripts to build your custom image. This process will also create two preinstallation environment (PE) WIM files (x86 and x64). The PE environment is the installation platform that begins the installation process and allows your installation to access the Deployment Share for your applications, drivers, operating systems, and so on.

Although the files that are created during the update share process are extremely portable, one of the advantages is combining the WIM files with Windows Deployment Services (WDS). WDS is a built-in role for your servers running Windows Server 2008 R2. The main reason you will want to use WDS is the built-in PXE support. This allows your servers to accept network boot requests to deploy your images. The MDT PE image can be placed in the WDS share so you can accept the PXE boot requests, allow access to the PE WIM file, and then access the remaining resources in the Deployment Share.

Use WDSUTIL to update images

Unfortunately, WDS does not support Windows PowerShell. However, WDS has a great command prompt tool called WDSUTIL. While WDSUTIL is a command prompt tool, it does wish it could be a Windows PowerShell tool, and it has pseudo Windows PowerShell syntax. If you look at the following WDSUTIL command, you can see what I mean with the Add-Image switch. By the way, the following command adds the PE WIM file to the WDS server. This provides WDS with the ability to take PXE boot requests and access the files created in the MDT to deploy an image.

wdsutil /Verbose /Progress /Add-Image /ImageFile:C:\DeploymentShare\Boot\LiteTouchPE_x64.wim /ImageType:Boot

As you can see, by using these free tools and Windows PowerShell scripts, you can quickly build your MDT environment. You can also take the power of the MDT and combine it with WDS to create a lite-touch deployment platform. This gives you a solid, repeatable way to deploy your corporate standard images in your environment.


~Matt

Thank you, Matt, for sharing your time and knowledge. Join us tomorrow for the first post of 2012. Happy New Year to you all.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Discover the Top Ten Scripting Guy Blog Posts from 2011


Summary: Maybe you missed them; maybe you just need a quick review. Here are the top ten Scripting Guy blog posts from 2011.

Microsoft Scripting Guy, Ed Wilson, is here. Happy New Year! I do not know about you, but for the Scripting Wife and me, the year named 2011 absolutely flew by. We were really fortunate to have the opportunity to talk to lots of scripters from coast to coast in the United States—beginning with the MVP summit in Seattle, Washington, and ending with the Pittsburgh PowerShell Users Group in Pittsburgh, Pennsylvania. In the photo that follows, you'll see Windows PowerShell MVP, Kirk Munro; Microsoft Scripting Guy, Ed Wilson; Scripting Wife, Teresa Wilson; and Windows PowerShell MVP, Shane Hoey standing around after breakfast and prior to the opening ceremonies of the Microsoft MVP Summit.

Photo

For the Scripting Wife and me, the new year will begin on January 4 with a live lunch meeting for the Virtual PowerShell Users Group. Our first in-person appearance will be on January 5 at the Charlotte Microsoft Office in North Carolina for the Charlotte Windows PowerShell Users Group. The president and founder of that user group, Microsoft MVP, Jim Christopher, has worked really hard to make this (their first meeting) a successful one. It will be an absolute blast, and we are looking forward to it.

It seems that in addition to looking forward, January is also a time to look back at the previous year. I was going over the number of blog posts that I wrote this year, and I decided it would be cool to share the top ten Hey, Scripting Guy! blog posts of 2011. This will serve two purposes: first, it is possible you missed the blogs, so this gives you a chance to catch up a bit on your reading; second, the popularity of blogs provides a bit of insight into what readers of this column find useful and important. This does not cover blogs that were written by guest bloggers, nor does it cover blogs from prior years.

The number one Hey, Scripting Guy! blog post of 2011 is not even really a blog—it is the 2011 Scripting Games All-In-One page. As you may recall, the 2011 Scripting Games were the most successful ever, and the popularity of this page is a tribute to this success. One thing to note is that this page continues to garner numerous hits every month because people use this page and the 2011 Scripting Games materials for private study and reference. When you begin your preparation for the 2012 Scripting Games, begin with this page. Participation in the 2012 Scripting Games makes a GREAT New Year’s resolution!

The number two Hey, Scripting Guy! blog post was a Quick Hits Friday blog that talked about installing Windows PowerShell on Windows XP, but it also had a great piece about troubleshooting a script that copied files. The continuing popularity of Windows XP and the phenomenal success of Windows PowerShell 2.0 make a great combination, and that provides insight into the popularity of this page. If you already have Windows PowerShell 2.0 installed, take a look at the piece about troubleshooting…it is cool in its own right.

Number three in our hit parade of Hey, Scripting Guy! blog posts is another Quick Hits Friday blog called How Do I Install PowerShell on Windows 7 and Other Questions. It is interesting that Windows 7 ships with Windows PowerShell 2.0 already installed. Unfortunately, it is “hidden” under the All Programs/Accessories/Windows PowerShell folder off the Start button. Of course, once you have Windows PowerShell 2.0 installed, you might need to know if you are running the 32-bit or 64-bit version, or you might want to know how to read an offline registry file, or how to work with security logs. All of these topics are covered in this very popular blog.

The number four Hey, Scripting Guy! blog post for 2011 is Use Scheduled Tasks to Run PowerShell Commands. Windows PowerShell and scheduled tasks seem to go together like peanut butter and chocolate (one of my favorite combinations). Smart Windows admins automate tasks, and scheduled tasks are a key component to that automation. What is so great about Windows PowerShell with scheduled tasks? So many Windows PowerShell commands are one liners, so they plug right into the command block on a scheduled task. Perfect!
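For example, the sort of command block I mean looks like the following sketch (the output path is an assumption); paste it into a scheduled task's action, and you have daily service documentation:

powershell.exe -NoProfile -Command "Get-Service | Out-File C:\fso\ServiceStatus.txt"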

Rounding out the top five blogs for 2011 is one that talks about using Windows PowerShell to filter event logs. This is an excellent post, if I do say so myself. But it is not just me—lots of faithful HSG readers voted with their mouse and moved this into the top five blogs of the year. Why do I like this blog so much? Well, back when I was a network administrator, I spent the first hour of each day (assuming that I did not have an emergency to deal with) reviewing the event logs on my servers. It was a time-consuming and tedious task, but it helped me learn a great deal about what was going on with the network. In addition, I soon learned that certain events are indicators of impending disasters. This has not changed—if anything, logs are even more important. However, the sheer number of entries and the number of servers to manage have made manual checks increasingly futile. This is where Windows PowerShell comes in. Check out the blog; it will change the way you work.
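As a taste of the technique (the log name and time window here are just assumptions for illustration), the following pulls only the error events from the System log for the past day instead of scrolling through everything:

# Level 2 = Error; StartTime limits the query to the last 24 hours
Get-WinEvent -FilterHashtable @{LogName='System'; Level=2; StartTime=(Get-Date).AddDays(-1)}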

Interestingly enough, both the number four blog and the number five blog feature a picture of Dr. Scripto (one in the snow and one on the beach). I wonder if there is a pattern here.

The number six Hey, Scripting Guy! blog post talks about one of my favorite features in Windows Server 2008 R2, and that is the ability to use Windows PowerShell and Group Policy for a logon script. It is a cool technique, and a way cool blog.

At number seven, we have a blog that talks about adding a progress bar to a Windows PowerShell script. This is some bread-and-butter type information; it is a useful technique to add to your bag of tricks. In fact, the progress bar is amazingly flexible, and much more capable than a simple progress bar. I wrote about it for an entire week.

The number eight Hey, Scripting Guy! blog of 2011 is one that I wrote about the top ten mistakes I saw during week one of the 2011 Scripting Games. This blog easily doubles as a sort of worst-practices list. Or, to turn it around: the ten things I suggest in this blog will take your scripting to the next level…immediately!

The number nine Hey, Scripting Guy! blog of 2011 is the first installment of my top ten favorite Windows PowerShell tricks. This blog grew out of repeated questions that I heard when I was speaking at various user group meetings and conferences. These questions would usually take the form of, “What is your favorite Windows PowerShell trick or tip that you can give me?” When I started writing this blog, it was going to be a single list of ten items. But it quickly dawned on me that the techniques and tricks were more important than the simple list of items, so I decided to illustrate why each technique was one of my top ten favorite tricks. Taken together, these three blogs will make you much more productive.

Rounding out the top ten Hey, Scripting Guy! blogs of 2011 is a blog that talks about using Windows PowerShell and WMI to obtain processor information. Seems to me that this blog is not that great—but hey, who am I to argue with success? This blog is a basic use Windows PowerShell and WMI to find cool things sort of blog. I guess the reason it is so popular is that everyone at one time or another must obtain certain information from the CPU. I mean, a computer is not too useful without one, is it?

Well, there you have it, the top ten Hey, Scripting Guy! blog posts from 2011. Tomorrow, I will review the top ten community-submitted scripts in the Scripting Guys Script Repository.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Find the Top Ten Scripts Submitted to the Script Repository


Summary: Learn which scripts submitted by the community to the Scripting Guys Script Repository rank in the top 10 for 2011.

NOTE: Tomorrow I will be speaking at the first meeting of the Madison PowerShell Users Group via Live Meeting. If you are local to Madison, Wisconsin, please join the group in person or remotely through this link.


Microsoft Scripting Guy, Ed Wilson, is here. Today, I want to look at the top ten community scripts that were submitted to the Scripting Guys Script Repository in 2011. With more than 6,100 scripts, the Scripting Guys Script Repository can be a bit daunting when it comes to browsing scripts. The figure that follows is the welcome screen from the Script Repository. By the way, every quarter we add new features to the Script Repository. Recently we added the Request a script feature that you can get to via the Browse Script Requests link in the right pane.

Image of Script Center

Without spoiling the plot, I can tell you that nine out of the ten top community submitted scripts use Windows PowerShell. So here are the details…

Microsoft Windows PowerShell MVP and honorary Scripting Guy, Sean Kearney, submitted the number one script for 2011. His script, List Inactive Computer Accounts in Active Directory, is a very short script that lists computers that have not been active within a specified number of days. The script defaults to 90 days, but it is configurable.
Sean's blog: PowerShell: Releasing the Power of Shell to You
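Sean's actual script is on the Script Repository, but purely as a sketch of the idea (using the ActiveDirectory module; the attribute handling here is an assumption, not his code), a stale-computer query looks something like this:

Import-Module ActiveDirectory
$cutoff = (Get-Date).AddDays(-90)   # the 90-day default; adjust as needed
Get-ADComputer -Filter 'lastLogonTimestamp -lt $cutoff' -Properties lastLogonTimestamp |
    Select-Object Name, @{Name='LastLogon';Expression={[DateTime]::FromFileTime($_.lastLogonTimestamp)}}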

The number two script submitted to the Script Repository is Create an Active Directory User Account. It is a Windows PowerShell script that creates an Active Directory user account that includes the mailbox, home directory with permissions, and profile directory. It is a solid script, submitted by Martijn Haverhoek.

The number three script, submitted by SharePoint MCM, Ram Gopinathan, is a Script to Install SharePoint 2010 on Windows 7. This useful script automatically downloads and installs several prerequisite files, and then configures and installs SharePoint 2010. The script only works on Windows 7, but it is useful.
Also check out Ram’s blog.

At number four, we have a script by Greg Lyon that uses Windows PowerShell to back up files. The Backup Files Using Windows PowerShell script is made up of several functions, and it prompts prior to commencing the backup. To use the script, you need to manually edit the source and destination variables to match your configuration.

The number five script is an interesting script written by Mohamed Garrana, which invokes cmd.exe types of commands on remote computers. The run remote cmd.exe commands script uses the Invoke-WmiMethod cmdlet from Windows PowerShell 2.0.

Checking in at number six is a simple one-liner written by Kent Finkle called Create a Folder Using Windows PowerShell. Of course, creating a single folder in Windows PowerShell does not require a “script.” In fact, it can be accomplished as simply as typing md myfoldername—but, hey, this script had a huge number of page views this year.

The number seven script submitted to the Scripting Guys Script Repository is the Client System Administration tool (v1.0.2). This way cool Windows PowerShell script, submitted by Rich Prescott, is more than 1,600 lines long, and it features a nice graphical user interface. What is truly amazing about this script is that it has not been live very long—in other words, it has quickly risen to the number seven script position. On January 6, Rich Prescott will be our guest blogger, and you can read his explanation of this script. The blog is excellent; I’ve already read it.
Rich's blog: Engineering Efficiency: Scripts, Tools, and Software News in the IT World 

At number eight, we have the List Group Members in Active Directory script written by Microsoft Directory Services MVP, Santhosh Sivarajan. This excellent script had a great following in 2011.
Santhosh's blog: Santhosh Sivarajan's Blog 

The number nine community submitted script is called Automate RunAs Password Entry, and it is actually a VBScript script. This script uses the SendKeys method from the WshShell object to automatically type the password to use when calling runas. It also has examples of opening Internet Explorer and navigating to a web page. The script was written a couple of years ago, and it obviously is something people are interested in. Unfortunately, as written, the script is not a very good idea, and the reliance on SendKeys is somewhat error prone. Alternatively, there are a number of Hey, Scripting Guy! blogs that talk about working with the Internet. For working with passwords, you might take a look at Importing and Exporting Credentials in PowerShell, which is a great blog written by Lee Holmes.
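The heart of that safer technique looks roughly like the following sketch (the file path and account name are placeholders). The saved string is encrypted with the Data Protection API, so only the same user on the same computer can read it back:

$cred = Get-Credential
$cred.Password | ConvertFrom-SecureString | Set-Content C:\fso\password.txt
# Later, rebuild the credential from the saved, encrypted string
$secure = Get-Content C:\fso\password.txt | ConvertTo-SecureString
$saved = New-Object System.Management.Automation.PSCredential("CONTOSO\SomeUser", $secure)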

Rounding out the top ten Scripting Guys Script Repository community submitted scripts for 2011 is the Enumerate Active Directory User Object Information script written by Trevor Hayman. The script retrieves a ton of information about users in Active Directory. The script does more than simply query AD, so maybe you should check out why it has received 17 five-star ratings.

I want to congratulate each person who has a script that is listed here today. If I did not include a link to your blog, please contact me via the scripter@microsoft.com email address. Join me tomorrow when I will talk about the top ten Scripting Wife blog posts from 2011. The Scripting Wife blogs, as you may recall, chronicle the experience of a non-computer professional who decided to learn Windows PowerShell to compete as a beginner in the Scripting Games. They are funny, and they are a great way to learn Windows PowerShell.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Top Ten Scripting Wife Blogs of 2011 Show PowerShell Skills


Summary: Read the top ten Scripting Wife blog posts from 2011 to obtain a strong foundation in essential Windows PowerShell skills.

Microsoft Scripting Guy, Ed Wilson, is here. The top ten Scripting Wife blogs from 2011 might be indicative of the things that Windows PowerShell scripters are interested in learning. They may also simply tell how many times the blog was retweeted.

NOTE: Tonight I will be speaking at the first meeting of the Madison PowerShell Users Group via Live Meeting. If you are local to Madison, Wisconsin, please join the group in person or remotely through this link.

The Scripting Wife blogs chronicle the experience of a non-IT professional who is, nevertheless, computer literate, as she learns Windows PowerShell from the ground up. Because she had no experience with previous scripting languages, the adventures of the Scripting Wife as she wrestles with basic Windows PowerShell concepts have proven to be useful to thousands of people who want to learn Windows PowerShell. The blogs should be read in the order in which they were written; in that way, you will be able to follow along with the Scripting Wife as she learns to use Windows PowerShell.

At any rate, here are the top ten Scripting Wife blogs from 2011...

The number one Scripting Wife blog post is the one where she uses Windows PowerShell to shut down computers. This blog covers using the Stop-Computer cmdlet to shut down all the computers on the network in response to power outages. This was a very useful technique due to the massive spring thunderstorms we received and the corresponding power outages. For whatever reason, knowing how to shut down computers is useful.

The number two Scripting Wife blog post talks about using Windows PowerShell to automatically update the Sysinternals tools. This was a really cool blog, and the technique for updating installed software applies to more than just the Sysinternals tools. Everyone needs the ability to update software, and knowing how to do it via Windows PowerShell is icing on the cake.

The number three Scripting Wife blog post uses Windows PowerShell to get lines from a file. The Scripting Wife uses text files as a free-form database, and she often needs to be able to obtain specific information from those files. This is a fascinating blog, and it illustrates a very valid technique for working with text files. Check it out; you will be glad you did.

The number four Scripting Wife blog post discusses the rather annoying problem of blocked files that come from the Internet. In Scripting Wife Learns About Unblocking Files in PowerShell, the Scripting Wife runs into a problem using the PowerShell Community Extensions module she downloaded from CodePlex. The problem was that she had not unblocked the ZIP file that she downloaded; and therefore, every file in the package needed to be unblocked before the module would work properly. From the email I received, and from comments on the blog post, the Scripting Wife was not alone in facing this problem.
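As an aside, unblocking no longer requires clicking through the Properties dialog box for every file. The PowerShell Community Extensions include an Unblock-File command (and Windows PowerShell 3.0 and later ship one in the box), so a recursive fix is a one-liner (the module path here is an assumption):

dir C:\fso\Pscx -Recurse | Unblock-File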

The number five Scripting Wife blog post chronicles her prep work for the 2011 Scripting Games as she works on learning how to format output in Windows PowerShell. It is not too long after firing up Windows PowerShell that one needs to know how to format output from within code. The nice thing about Windows PowerShell is that it makes it very easy to create tables, lists, or other types of output. This blog covers a core skill for working with Windows PowerShell.

The number six Scripting Wife blog post is another 2011 Scripting Games prep article; this time, she learns how to use the Out-GridView Windows PowerShell cmdlet. The Out-GridView cmdlet makes it easy to perform ad hoc analysis of data. It supports multiple filters, and it allows you to reorganize the data on the fly. This is a great tool for any network admin, and the Scripting Wife shows how easy it is to use this tool.
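If you have never tried the cmdlet, the quickest demonstration is a one-liner; pipe any object stream to it and filter interactively:

Get-Process | Out-GridView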

The number seven Scripting Wife blog post finds her still working on getting ready for the 2011 Scripting Games. This time, she is learning how to create files automatically via Windows PowerShell. Everyone has to know how to work with files. Come see how the Scripting Wife does it.

The number eight Scripting Wife blog post covers a bread-and-butter topic of matching strings via a simple regular expression pattern. When the Scripting Wife saw that she needed to work with regular expressions, she was all in a tizzy. But it was not that bad; in fact, it was rather easy. Now she will tell you, “A regular expression around our house is ‘Can it Script Monkey.’” But I am not certain that really applies.

The number nine Scripting Wife blog post discusses adding a cool function to a Windows PowerShell profile. It was written on the day the Scripting Wife signed us up to attend the Bouchercon Mystery Writers conference…and therefore, I appeared wearing a trench coat and fedora.

The number ten Scripting Wife blog post contains more information about Windows PowerShell profiles, with another picture of me in a trench coat and hat. Maybe the trench coat is what made these blogs so popular…then again, maybe not. But just in case, here is a picture with me in my trench coat and fedora.

Join me tomorrow when I will talk about the top ten questions from the Scripting Guys forum. You will like it…trust me. See you then.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Discover the Top Ten Scripting Guys Forum Questions for 2011


Summary: What do scripters need to know? Check out the top ten questions posted on the Scripting Guys Forum to find out.

Microsoft Scripting Guy, Ed Wilson, is here. Today I want to look at the top ten questions on the Scripting Guys Scripting Forum. The Scripting Guys Forum is a great place to ask questions about VBScript or Windows PowerShell. I really appreciate the hard work of the moderators as they keep up with a massive number of questions from numerous people. As shown in the image that follows, the Official Scripting Guys Forum gets lots of questions, and those questions generate a significant number of views.

Image of Scripting Guys Forum page

So why look at the top questions on the Scripting Guys Forum? Well, for one thing, it will show you what scripters are interested in. For another, if someone else has had a problem, there is a good chance the answer applies to a problem you are experiencing. So check it out. In addition to finding specific answers to specific questions, the forum itself is a great way to learn scripting. I even wrote about it once.

The number one question on the Scripting Guys Forum is not actually a scripting question; it is Where is the HyperTerminal application in Windows 7? The smart-alec answer is that it is in the same place it was in Windows Vista—a better answer is that it is no longer available in the operating system. Check out the forum question and answer. Or, perhaps I should say, answers. This is a long thread, and readers post numerous solutions, in addition to interesting background information and links. Cool stuff.

The number two question on the Scripting Guys Forum is a scripting question; in fact, it is a Windows PowerShell question about appending to CSV files. In part, due to this question and to other questions I have received, I wrote an entire week’s worth of Hey, Scripting Guy! blogs about working with CSV files. In fact, there have been several blog posts about working with CSV files from Windows PowerShell. Check out the Hey, Scripting Guy! blogs, and also look at the forum question because there are several excellent answers about appending to a CSV file when working with Windows PowerShell. Good stuff.
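The short version of the workaround discussed there: Export-Csv in Windows PowerShell 2.0 has no Append parameter, so one common approach is to convert the objects to CSV text, drop the header row, and append the rest. In this sketch, the path is an assumption, and the target file is assumed to already exist with a matching header:

Get-Process |
    ConvertTo-Csv -NoTypeInformation |
    Select-Object -Skip 1 |
    Add-Content C:\fso\processes.csv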

The number three question on the Scripting Guys Forum is about a batch file that pings multiple computers. The original poster wanted to understand how the file actually works, and our awesome forum members jumped in with a line-by-line explanation. This type of post is great, and it is helpful for people who need to learn batch scripting, especially for those occasions when Windows PowerShell is not available.

The number four question is a VBScript question about the UserAccounts.CommonDialog ActiveX component not working in Windows Vista or in Windows 7. This is a very specific problem, and luckily, there is a very specific answer. Check it out if you run into this problem.

The number five question on the Scripting Guys Forum is also a VBScript problem, but it deals with permission denied when working with WMI on a remote computer. It turns out that there is a registry key that needs to be modified. This is an interesting thread that could just as well be a problem when using the Get-WmiObject cmdlet from within Windows PowerShell.

The number six question deals with attempting to prompt for and hide a password from within a batch script. The conversation in the thread is lively. If possible, I would prefer to use Windows PowerShell, because I can easily use the Read-Host cmdlet. Here is a Hey, Scripting Guy! blog about Masking Passwords in Windows PowerShell.
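For reference, masking input in Windows PowerShell really is a one-liner:

$password = Read-Host "Enter password" -AsSecureString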

Now, this is interesting…The number seven question on the Scripting Guys Forum is about Using Task Scheduler for a power script on Windows Server 2008. Why is that interesting? Well, for one thing, on New Year’s Day, I reviewed the top ten Hey, Scripting Guy! blog posts of 2011, and guess what? One of the top blogs was about using the Task Scheduler, and now we see Task Scheduler showing up again as a topic. Luckily, I have written several blogs that talk about working with scheduled tasks and Windows PowerShell. Some of the blogs are really, really good. I will leave it to you to figure out which ones are most excellent.

The number eight question on the Scripting Guys Forum is also a Windows PowerShell question. This time, the question also involves Active Directory. The question is, How can I update thumbnailPhoto AD attribute with Windows PowerShell? This is a good question. Luckily, one of the regular readers happened to have a script that accomplishes this task. Cool.
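I have not reproduced the reader's script here, but the core of the technique looks something like the following sketch (the file path and user name are placeholders):

Import-Module ActiveDirectory
# Read the image file as a byte array and write it to the attribute
$photo = [System.IO.File]::ReadAllBytes("C:\photos\someuser.jpg")
Set-ADUser -Identity someuser -Replace @{thumbnailPhoto=$photo}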

At number nine, we have another VBScript question…but the answer could easily translate to Windows PowerShell. The question is How do I empty the recycle bin for all users? I know I spent several hours once upon a time writing a VBScript script to do this very thing. Luckily, one of the moderators was able to point the way to a useful solution…and save you a few hours’ work. This is great stuff; check it out.

And rounding out our top ten list for questions on the Official Scripting Guys Forum is a question about using Windows PowerShell to rename computer accounts. There is a lively discussion and several sensible solutions. Check it out; you will be glad you did.

I am issuing a special invitation to join our scripting community by engaging with fellow scripters via the Official Scripting Guys Forum. There is always a lot of excellent discussion, and it is fun to post answers and follow the resulting conversation.

Join me tomorrow when I have a guest blog written by Microsoft PFE, Ashley McGlone, about using Windows PowerShell to work with Active Directory schema updates. It is an excellent blog that you do not want to miss.  

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

How to Find Active Directory Schema Update History by Using PowerShell


Summary: Use Windows PowerShell to discover what schema updates have been applied to your Active Directory environment.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have as our guest blogger, Ashley McGlone. Ashley is a premier field engineer for Microsoft. He started writing code on a Commodore VIC20 back in 1982, and he’s been hooked ever since. Today he specializes in Active Directory and PowerShell, helping Microsoft Premier Customers reach their full potential through risk assessments and workshops. Ashley’s favorite workshop to teach is Windows PowerShell Essentials, and his TechNet blog focuses on using Windows PowerShell with Active Directory.
Blog: Goatee PFE
Twitter: @GoateePFE

Take it away Ashley…

Where am I? How did I get here?

Marvel X-Men fans know that Wolverine's character is interesting because of his mysterious past. Those unfamiliar with the comics had to wait until the Wolverine movie to find out exactly why he couldn't remember where he came from. After seeing the movie, I thought he was better off not knowing his tortured past.

Some Active Directory (AD) admins are a bit like Wolverine…razor claws aside. They have hired into an IT shop where the former admin is nowhere to be found, and they need help finding out the mysterious past of their AD environment. What schema updates have been applied? Where has delegation been granted? And why is there a user account called "DO NOT DELETE"?

Today's post offers some simple scripts to document the history of schema updates. This is particularly handy when it comes time to extend the schema for a domain upgrade or Exchange implementation. Now you can get a report of every attribute's create and modified date. You can also find out if and when third-party extensions have been applied.

When did all this happen?

To report on schema updates, we simply dump all of the objects in the schema partition of the Active Directory database and group by the date created. This script does not call out updates by name, but you can infer from the schema attributes that are listed which update was applied. For example, if you see a day with a bunch of Exchange Server attributes added, then that was one of the Exchange Server upgrades or service packs. The same is true for AD forest preps, OCS/Lync, SMS/SCCM, and so on. Then based on the affected attributes and dates, you can extrapolate the product version involved.

It is entirely possible that later schema updates modified previously created attributes. Note that the Windows Server 2008 R2 forest prep hits nearly every attribute in the database when it adds the Filtered Attribute Set (FAS) for RODCs. As a result, we cannot trust the WhenModified attribute to show us a true history. Therefore, in the report, we use the WhenCreated attribute and show the WhenModified date for added flavor.

Windows PowerShell

Although this code is not much more than a Get-ADObject, I want to look at the two different grouping techniques. Get-Help provides the following information:

Format-Table -GroupBy: Arranges sorted output in separate tables based on a property value. For example, you can use GroupBy to list services in separate tables based on their status. The output must be sorted before you send it to Format-Table.

Group-Object: The Group-Object cmdlet displays objects in groups based on the value of a specified property. Group-Object returns a table with one row for each property value and a column that displays the number of items with that value.

Notice in the output that Format-Table -GroupBy shows you the data inside each grouping, while Group-Object gives you a count of the items within the grouping. This is an important distinction, and most folks aren't aware of this little switch with Format-Table. Also, note that Group-Object creates its own column names (Count, Name, Group).

Import-Module ActiveDirectory

$schema = Get-ADObject -SearchBase ((Get-ADRootDSE).schemaNamingContext) `
-SearchScope OneLevel -Filter * -Property objectClass, name, whenChanged,`
whenCreated | Select-Object objectClass, name, whenCreated, whenChanged, `
@{name="event";expression={($_.whenCreated).Date.ToShortDateString()}} | `
Sort-Object whenCreated

"`nDetails of schema objects changed by date:"
$schema | Format-Table objectClass, name, whenCreated, whenChanged `
-GroupBy event -AutoSize

"`nCount of schema objects changed by date:"
$schema | Group-Object event | Format-Table Count, Name, Group –AutoSize

The following image illustrates the schema objects with the date that they were created and when they changed.

Image of schema objects

The image shown here illustrates a total count of the schema objects created by date.

Image of schema objects

Your results will appear much more interesting than these from my sterile lab environment.

Was your forest really created in the year 1630?

When I first wrote this script, I assumed that the oldest attribute date in the schema report would be the creation date of the forest. That was a wrong assumption. After testing this code in a number of different environments, I found that all forests created on Windows Server 2008 R2 shared a common date in 2009 for the oldest created schema attribute. To make things even more interesting, forests created on Windows 2000 Server show dates from the year 1630 on their oldest attributes. I knew this couldn't be correct, so I had to find out where the dates originated.

The answer lies in the DCPROMO process. When you promote a new domain controller, it creates the database file from a template like the one shown here:

Template database: %systemroot%\System32\NTDS.dit

Default install location: %systemroot%\NTDS\NTDS.dit

Here is a quote from the TechNet topic How the Active Directory Installation Wizard Works:

"When you install Active Directory on a computer that is going to be the root of a forest, the Active Directory Installation Wizard uses the default copy of the schema and the information in the schema.ini file to create the new Active Directory database."

As a result, the WhenCreated dates of the initial schema attributes when a forest is built come from the template database, and they are not valid values. Ignore them.

How to find the forest creation date

To locate the actual installation date of the forest (and all of the domains), we can query the CrossRef objects in the Configuration partition. The applicable objects seen in ADSI Edit are shown in the following image.

Image of objects

The following script shows how to find these CrossRef objects.

Import-Module ActiveDirectory

Get-ADObject -SearchBase (Get-ADForest).PartitionsContainer `
-LDAPFilter "(&(objectClass=crossRef)(systemFlags=3))" `
-Property dnsRoot, nETBIOSName, whenCreated |
Sort-Object whenCreated |
Format-Table dnsRoot, nETBIOSName, whenCreated -AutoSize

In the query, we specify that we only want CrossRef objects with a SystemFlags value of 3, which includes all partitions that are domains (excluding other partitions like DNS). Now we have a list of all domains in the forest and their creation date. Obviously, the root domain is the oldest, and it represents the forest creation date. Here is a screenshot from my lab:

Image of command output

Although this data does not come from the schema partition, it is a quick and reliable way to know when the forest domains were created.

How can I know the current product versions from schema data?

The next logical question after looking at the schema report is, "What is my current forest schema version?" This one is easy to answer with another simple Get-ADObject query. But why stop there? Let's also grab the Exchange Server and Lync versions of the schema as follows.

#------------------------------------------------------------------------------

Import-Module ActiveDirectory

$SchemaVersions = @()

$SchemaHashAD = @{
13="Windows 2000 Server";
30="Windows Server 2003";
31="Windows Server 2003 R2";
44="Windows Server 2008";
47="Windows Server 2008 R2"
}

$SchemaPartition = (Get-ADRootDSE).NamingContexts | Where-Object {$_ -like "*Schema*"}
$SchemaVersionAD = (Get-ADObject $SchemaPartition -Property objectVersion).objectVersion
$SchemaVersions += 1 | Select-Object `
@{name="Product";expression={"AD"}}, `
@{name="Schema";expression={$SchemaVersionAD}}, `
@{name="Version";expression={$SchemaHashAD.Item($SchemaVersionAD)}}

#------------------------------------------------------------------------------

$SchemaHashExchange = @{
4397="Exchange Server 2000 RTM";
4406="Exchange Server 2000 SP3";
6870="Exchange Server 2003 RTM";
6936="Exchange Server 2003 SP3";
10628="Exchange Server 2007 RTM";
10637="Exchange Server 2007 RTM";
11116="Exchange 2007 SP1";
14622="Exchange 2007 SP2 or Exchange 2010 RTM";
14726="Exchange 2010 SP1";
14732="Exchange 2010 SP2"
}

$SchemaPathExchange = "CN=ms-Exch-Schema-Version-Pt,$SchemaPartition"
If (Test-Path "AD:$SchemaPathExchange") {
$SchemaVersionExchange = (Get-ADObject $SchemaPathExchange -Property rangeUpper).rangeUpper
} Else {
$SchemaVersionExchange = 0
}

$SchemaVersions += 1 | Select-Object `
@{name="Product";expression={"Exchange"}}, `
@{name="Schema";expression={$SchemaVersionExchange}}, `
@{name="Version";expression={$SchemaHashExchange.Item($SchemaVersionExchange)}}

#------------------------------------------------------------------------------

$SchemaHashLync = @{
1006="LCS 2005";
1007="OCS 2007 R1";
1008="OCS 2007 R2";
1100="Lync Server 2010"
}

$SchemaPathLync = "CN=ms-RTC-SIP-SchemaVersion,$SchemaPartition"
If (Test-Path "AD:$SchemaPathLync") {
$SchemaVersionLync = (Get-ADObject $SchemaPathLync -Property rangeUpper).rangeUpper
} Else {
$SchemaVersionLync = 0
}

$SchemaVersions += 1 | Select-Object `
@{name="Product";expression={"Lync"}}, `
@{name="Schema";expression={$SchemaVersionLync}}, `
@{name="Version";expression={$SchemaHashLync.Item($SchemaVersionLync)}}

#------------------------------------------------------------------------------

"`nKnown current schema version of products:"
$SchemaVersions | Format-Table * -AutoSize

#---------------------------------------------------------------------------><>

I've included a number of links to articles that document these schema versions and locations at the end of this post. Here is an example of the output:

Image of command output

By using the previous template code, you can add additional schema version checks for other product extensions in your environment.

This blog is for all IT Pros who have inherited an Active Directory environment that they did not build. Now you have some insight on the origins of your directory. While you may not have adamantium fused to your skeleton, you can now use AD-PowerShell-ium to understand a bit of your broken past.

Additional resources

The full script can be found on the Script Repository.

~Ashley

Thank you, Ashley, for taking time to write the guest blog today and sharing your insights with our readers. Join us tomorrow when guest blogger, Rich Prescott, will talk about the Windows PowerShell community and the sysadmin tool. It will be another excellent guest blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerShell Community and the Windows System Administration Tool


Summary: See how guest blogger, Rich Prescott, leveraged the Windows PowerShell community as he built his popular ArPosh Client System Administration tool.

Microsoft Scripting Guy, Ed Wilson, is here. We are really starting the new year off correctly. We have another very strong guest blogger today. Rich Prescott is currently working as an infrastructure architect and Windows engineer at a large media company, with his main areas of focus being automation and Active Directory. The automation responsibilities allow him to work with a wide range of technologies including virtualization, storage, and monitoring. He started learning Windows PowerShell in 2009 while he was working as a desktop engineer, and he is the lead scripting guy for his organization. He is also a moderator on the Official Scripting Guys Forum, and he was recently recognized as a Microsoft Community Contributor.

Blog: Engineering Efficiency: Scripts, Tools, and Software News in the IT World
Twitter: @Rich_Prescott

Take it away Rich…

When I first joined the IT world, I was working at the Help desk for a large company. Being new to IT and troubleshooting, I always wondered why there was no single tool for troubleshooting remote computers. I would receive a call about a computer being slow, and I would have to go through four tools to try to diagnose the issue. I began searching for an easy solution, and that is when a coworker introduced me to Windows PowerShell. After reading blog posts from Hey, Scripting Guy! and the Scripting Wife, I was writing basic scripts to gather the information that I needed for troubleshooting. This is an example of a basic script I wrote back then.

$PC = "PC01"
Get-WmiObject Win32_OperatingSystem -ComputerName $PC
Get-WmiObject Win32_StartupCommand -ComputerName $PC
Get-WmiObject Win32_Process -ComputerName $PC

After the underlying issue was targeted, I needed a way to remediate the issue, such as removing a hung process. This was possible remotely by querying the Win32_Process class on a remote computer, filtering for the process that was causing an issue, and invoking the Terminate method. Here is an example of using the Terminate method to stop a process on a remote computer.

(Get-WmiObject Win32_Process -ComputerName $PC |
Where-Object {$_.Name -eq "HungProcessName"}).Terminate()

At last, I was able to efficiently troubleshoot issues remotely and even resolve some of them without ever having to leave my desk. As I became more familiar with Windows PowerShell, I thought, “Wouldn’t it be cool if I could share the efficacy of Windows PowerShell with others, even if they don’t know any scripting?” After a few months of researching, scripting, and testing GUI creation with Windows PowerShell, the Arposh Client System Administration tool (ACSA) was released.

As with any technology that you learn rapidly, when you look back on what you were doing a year ago, you think to yourself, “What was I thinking when I wrote that? I could write that in half the code and make it twice as fast.” So I set out to find resources to help me rebuild the script from scratch, and I used this opportunity to remove some of the prerequisites, make it compatible with servers, and improve the overall user experience.

PowerShellGroup - #PowerShell chat room


One of the advantages of using Internet Relay Chat (IRC) is that there are chat rooms for almost any topic you can think of, and when you are learning a new technology, having a live discussion can be really helpful. While revamping the ACSA tool, one of the user-experience features that I wanted to add was to give the user the ability to decide which feature sets to use through a configuration file.

I joined the #PowerShell chat room on PowerShellGroup.org, and I asked about an easy way to give users configuration options. Jaykul, a Windows PowerShell MVP, responded with a way to read settings from an XML file. By using the following XML code and three lines of Windows PowerShell, a user is able to set a default domain to connect to when using the GUI. To have the GUI use the current domain of the logged on user, simply update the Enabled option of the default domain to “False” in the XML configuration file.

<?xml version="1.0" standalone="no"?>
 <Domain Default="LDAP://DC=RU,DC=lab" Enabled="True"/>

In the first line of code, we use Get-Content on the XML configuration file and specify that it is XML code by using [XML]. We then check to see if the default domain option is enabled, and if so, we set the Domain variable to what is specified in the XML. If the option is disabled, the GUI sets the Domain variable to the domain of the currently logged-on user.

[XML]$XML = Get-Content "AWSA.Options.xml"
if($XML.Domain.Enabled -eq $True){$Domain = $XML.Domain.Default}
else{$Domain = ([DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()).Name}

TechNet Wiki

My next task was to make the GUI easy to use, and this meant removing any unnecessary pre-requisites included in the original release. My first stop was the TechNet Wiki to find a way to query Active Directory for computers without the need to import a Windows PowerShell module that was not freely available on all systems. After a quick search, I landed on a wiki contribution from Richard Mueller, another MVP, for ADSI searches, which showed me the syntax necessary to build my custom function.

function Get-RPADComputer{
  param($ComputerName)
  # Trim an FQDN down to its short name (note the escaped dot in the regex)
  if($ComputerName -match "\."){$ComputerName = $ComputerName.Split('.')[0]}
  $searcher=[adsisearcher]"(&(objectClass=computer)(name=$ComputerName*))"
  # Load only the properties the user listed in the XML settings file
  $Properties = $XML.Options.Search.Property
  $searcher.PropertiesToLoad.AddRange($Properties)
  $searcher.FindAll()
}

This function checks the $ComputerName parameter that is specified in the textbox of the GUI, and if a fully qualified domain name (FQDN) is specified, it translates it to a short name. It then creates a query that will search for any computers that match the new $ComputerName variable. The third line reads a list of properties that the user specifies in the XML settings file and adds them to the list of properties to load. The final line of the function executes the query and returns the results.

TechNet forums


After adding in the ability to set a default domain when launching the GUI, the user needed a way to change domains without having to edit the XML file and relaunch the script. I was not familiar enough with the ADSI scripting techniques that I found on the TechNet Wiki, and I decided to head over to the Official Scripting Guys Forum on the TechNet Forums and ask for some assistance.

I posted my question with some examples of the input and output that I wanted, and within four hours, I received multiple responses from the community. And what do you know—Richard Mueller again came to the rescue with a way to prompt for a domain and then convert the response into the LDAP path for that domain. By slightly tweaking his code, I was able to add the ability to search alternate domains to my custom Active Directory computer search function.

TechNet ScriptCenter


There are many new features in this update of the GUI, but I want to highlight one that many administrators will find very useful and sigh at the mention of: local administrator rights. Every system administrator has received a request along the lines of, “I need to install XYZ software right now!” or “I am using VPN from home, and I need to add a local printer.” One way to get around this (not always the best way) is to grant the user temporary local administrator rights. To find out how to do this the Windows PowerShell way, I headed over to the TechNet ScriptCenter Repository to search for some examples.

Using the Categories listing in the left pane, I drilled down into Local Account Management, and I immediately saw what I was looking for: Local User Management Module. I clicked through to the details page and looked through the included functions. I was in luck! By using ADSI code in the Set-LocalGroup function, I morphed it into my own function as shown here.

Function Add-LocalAdmin {
  [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null
  # Prompt for the account to add, in Domain\Username form
  $User = [Microsoft.VisualBasic.Interaction]::InputBox("Enter a username to add (Domain\Username)", "Add Local Admin", "")
  # Bind to the local Administrators group on the remote computer
  $Group = [ADSI]("WinNT://" + $ComputerName + "/Administrators,group")
  # ADSI wants forward slashes, so convert Domain\Username to Domain/Username
  $Group.Add("WinNT://" + $User.Replace('\','/'))
}

The first line of the function loads the Visual Basic assembly; this allows me to create a custom input box, which prompts for the username to add to the local Administrators group and then stores it in the $User variable. The function then binds to the Administrators group on the remote computer by using ADSI. Next, we call the Add method of ADSI to add the desired user account to the Administrators group. But there is a snag: the username is specified as Domain\Username, whereas ADSI requires a forward slash. To get around this, we use the Replace method to turn the backslash into a forward slash.

Arposh Windows System Administration tool


Whether you are new to Windows PowerShell and only need a simple script to get you going or you are a Windows PowerShell guru and need a nudge in the right direction, there are numerous resources for everyone. Thanks to all of these resources, I was able to take a simple GUI that I created for myself to make tasks easier and improve it enough to where the Windows PowerShell community would also find it useful. Without further ado, here is the Arposh Windows System Administration tool.

Image of Arposh

~Rich

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Choose a Specific Number of Random Letters


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use the Windows PowerShell Get-Random cmdlet to choose a specific number of random letters.

Microsoft Scripting Guy, Ed Wilson, is here. With the advent of the New Year, I am hard at work on the 2012 Scripting Games. This year there will be several improvements. I always learn from each year’s games, and I am a firm believer in continuous improvement. Right now, I am busy working on selecting the ten domains that I will emphasize in this year’s games. Two things will not change: the areas tested will relate to real world problems, and the beginner division will be for true beginners. Make no mistake about it—this is your chance to learn Windows PowerShell; to quote the title of my five-part webcast, “Learn it now—before it is an emergency!”

Creating random numbers

One of my favorite cmdlets is the Get-Random cmdlet; there are just so many times that I need to have a random number. Now I want to combine two things—letters and numbers. By using a trick from my recent blog, Use PowerShell and ASCII to Create Folders with Letters, I can create random letters.

There are two main ways that I use the Get-Random cmdlet. The first is to call the cmdlet and to let things fly. This is the default way of using the cmdlet, and it can be helpful in certain circumstances. For me, however, I generally need to provide some limits on the numbers that are returned by the Get-Random cmdlet. In these cases, I use the Minimum and the Maximum parameters to specify the range of numbers that are available to select. These two methods of utilizing the Get-Random cmdlet are shown here with the associated output from those two commands.

PS C:\> Get-Random

465457929

PS C:\> Get-Random -Minimum 1 -Maximum 10

3

When I was choosing the daily prize winners during the 2011 Scripting Games, I needed to have more than one random number selected, and I needed to specify the range of input. This is when I started experimenting with the InputObject parameter for the cmdlet. This second parameter set for the cmdlet combines a Count parameter with the InputObject. The InputObject parameter accepts an array of objects (and integers are objects) for input. Sweet! I now get to combine one of my favorite cmdlets with one of my favorite Windows PowerShell tricks (the range operator). This means that I can select three random numbers from an array of numbers that range from 1 to 100. The command to choose three random numbers from the numbers ranging from 1 to 100 is shown here.

Get-Random -InputObject (1..100) -Count 3

The big trick in the previous command is to use the parentheses to force the creation of the array prior to the selection of the three random numbers. The command that chooses three random numbers along with the associated output is shown in the image that follows.

Image of command output

Combining random numbers and ASCII values

I want to be able to select random two letter combinations (it could be 1 or 100 random letters; it really does not matter). To do this, I want to use the way cool Get-Random cmdlet. The secret? Use the ASCII character values. In the previously mentioned blog, I needed to be able to increment letter values so I could add letters to automatically created folders, and I pointed out how to use the ASCII character values (numbers) and convert them to letters by using the [char] type accelerator.

Instead of incrementing numbers in a certain range, I can just as easily randomly select the numbers. The ASCII values in the range of 65–90 are the capital letters A–Z. ASCII values 97–122 are the lower case letters a–z. To choose two random letters that will translate to ASCII capital letters in the range A–Z, I use the Get-Random cmdlet, specify a Count of 2, and provide a range operator to the InputObject parameter. The code to accomplish this is shown here, along with the associated output.

PS C:\> Get-Random -Count 2 -InputObject (65..90)

80

89

To convert the two randomly selected numbers to letters, I use the Foreach-Object cmdlet, and inside the script block, I use the [char] type accelerator to make the conversion. This command is shown here (I use the % alias for the Foreach-Object cmdlet).

Get-Random -Count 2 -InputObject (65..90) | % {[char]$_}

The preceding commands, along with the output associated with those commands, are shown in the image that follows.

Image of command output

Use Begin, Process, and End to collect and display

So I am half-way there. I can get random letters, but the letters are coming one at a time…and I need to create random letter combinations of varying lengths. To do this, I am going to use the Begin, Process, and End parameters of the Foreach-Object cmdlet. I create a variable ($aa) and set its initial value to $null. I create and initialize the $aa variable in the Begin block, and the Begin block runs once at the beginning of the command. I now add the code that uses [char] to convert the random numbers into ASCII letters. In addition, I use the += operator to store each letter back to the $aa variable.

I place this code in the Process block where it will operate once for each item that comes across the pipeline (in this example, it runs twice because I am only choosing to generate two random numbers). If I left the code with only a Begin and a Process parameter, I would need to manually retrieve the value of the $aa variable. Here is the code as it currently stands:

Get-Random -Count 2 -InputObject (65..90) | % -begin {$aa=$null} -process {$aa += [char]$_}

I do not want to manually retrieve the value of $aa. Instead, I want to automatically display the value that is contained in the variable. To do this, I add the End parameter. Inside the script block that is associated with the End parameter, I display the contents of the $aa variable. Here is the revised code with the added End parameter:

Get-Random -Count 2 -InputObject (65..90) | % -begin {$aa=$null} -process {$aa += [char]$_} -end {$aa}

WooHoo! This is cool. I can add this code, and use it to create a sort of guess-the-letter game. I already wrote the engine to do this when I wrote the Windows PowerShell cmdlet quiz.
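Just to illustrate the idea, here is a quick sketch of such a game (purely illustrative; this is not the quiz engine from that blog):

$letter = [char](Get-Random -InputObject (65..90))   # pick one secret capital letter
do {
    $guess = Read-Host "Guess a capital letter (A-Z)"
} until ($guess -eq $letter)
"Correct! The letter was $letter"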

The command that chooses two random numbers and converts them to letters by their ASCII values, along with its output, is shown in the image that follows.

Image of command output

That is about all there is to choosing a couple of random numbers and using their ASCII value to convert them to letters. Join me tomorrow for more cool stuff using Windows PowerShell. It will be fun.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Print Output Without Installing Print Drivers


Summary: Use Windows PowerShell to print output from commands without installing print drivers.

Microsoft Scripting Guy, Ed Wilson, is here. If I have not said it recently, the Scripting Wife is cool! She is also a really good sport. Why do I say this? We are stuffed into the very back of a large airplane as we fly back from a Windows PowerShell User Group meeting. I did not get upgraded for this trip. While I love speaking in person at Windows PowerShell User Group meetings, flying across the United States nowadays takes a lot of time—it is therefore important for me to be able to maximize the travel time.

I publish a new Hey, Scripting Guy! blog every day, and that means if I take a day off from writing, I am a day behind. So, I constantly have my laptop with me, and I write wherever I am. So what does this have to do with the Scripting Wife? Well for one thing, she maintains my schedule. If you email me at scripter@microsoft.com and ask me to speak at your Windows PowerShell User Group, I will call up the Scripting Wife, and ask her to schedule it. She also makes all the travel arrangements.

Now, to the case in point…

We are stuffed in the back of a very large, completely full, totally cramped airplane. She saw that this was going to be the case as she monitored the flight during the week, and when she checked us in, she made certain she was sitting in front of me. “Huh?” you might ask. “You mean she did not want to sit with you?” Well, of course she did. She would most assuredly rather sit beside me, than beside the large, heat-radiating person who is currently drooling on her shoulder.

But she realized that if I was to have any hope at all of being able to use my laptop during the flight to write today’s Hey, Scripting Guy! blog, she would need to protect me from some fat dude lying in my lap and crushing my display when he reclines his chair. So she is sitting bolt upright on our flight to Chicago so I can work. Well done, Scripting Wife! Today’s blog is dedicated to you!

Not every printer is a physical device

I am listening to the Rolling Stones on my Zune HD via my noise-canceling headphones, trying to avoid a debilitating injury to my elbow as I type and dodge the flight attendants’ metal beverage cart that runs up and down the middle aisle more often than the elevator at the Empire State Building in New York City. My laptop annoyed me for the last time this week, and so I F-Disked it and reinstalled Windows 7 Ultimate on it. Then we packed up and headed out of town for a Windows PowerShell User Group meeting—so I have little more than Office 2010, all security updates, and my restored essential data (my ScriptingGuys folder and my Windows PowerShell modules folder). I did not even have time to install any printers. The image that follows displays my Printers folder.

Image of folder

The SnagIt printer installs with my screen capture software. If I am not certain of the printer names (I do not always trust things I see in graphical dialog boxes), I can use a quick WMI query to return the printer names. The command to obtain printer names from all my printers is shown here, along with the output associated with that command (in the command that follows, I use gwmi as an alias for Get-WmiObject, and select as an alias for Select-Object).

PS C:\> gwmi win32_printer | select name

 

name

----

SnagIt 8

Send To OneNote 2010

Microsoft XPS Document Writer

Fax

Use Windows PowerShell to print

I am whisking along at 35,000 feet, now listening to Cream on my Zune, and I decide I want to print. Well, besides the fact that I do not even have any real printers installed, and the fact that I do not have wireless access on the plane, I should mention that I shut my systems down at home before we left. So what can I do?

I can use the Out-Printer Windows PowerShell cmdlet—it accepts piped input. “Say what?” you may ask. “You don’t have any real printers installed, and you couldn’t get to them even if you did.”

Did you notice the Microsoft XPS Document Writer? Microsoft XPS documents are portable documents, and I can create them by using the Out-Printer cmdlet because the Microsoft XPS Document Writer acts like a real printer. In the command that follows, I use the Get-ChildItem cmdlet (dir is an alias) to obtain a directory listing of the C:\fso folder. Instead of displaying the results directly in the Windows PowerShell console, I write them to an XPS document.

dir c:\fso | Out-Printer -Name "microsoft xps document writer"

When the command runs, a dialog box appears. The image that follows displays this dialog box.

Image of dialog box

I use the XPS Viewer to examine the output I wrote to the file. The image that follows shows the directory listing in the XPS Viewer.

Image of directory

If I need to do a lot of printing, I get tired of typing “Microsoft XPS Document Writer” all of the time. No problem; I can use Windows PowerShell and WMI to get the printer name and store it in a variable. Now I can use that value when I print. In the commands that follow, I use the Get-WmiObject cmdlet (gwmi is the alias) with the Where-Object cmdlet to filter for the printer whose name matches XPS. I store the returned WMI object in a variable, and I then supply the Name property of that object to the Out-Printer cmdlet to print process information (gps is an alias for Get-Process).

$ptr = gwmi win32_printer | ? { $_.name -match 'xps'}

gps | Out-Printer $ptr.name

This technique works great when I need to document something—I simply pipe to the XPS printer. I can then email it to another machine, store the files, or collect them and send them to a remote printer all at once.
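For example, here is a minimal sketch of that documentation workflow. It assumes only that the Microsoft XPS Document Writer is present (it is installed by default on Windows 7); the XPS Document Writer prompts for a file name, and the resulting .xps file can then be stored, emailed, or printed elsewhere.

# Find the XPS printer once, then pipe anything to it to document the system

$ptr = gwmi win32_printer | ? { $_.name -match 'xps'}

Get-Service | Sort-Object status | Out-Printer -Name $ptr.name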

Well, that is about all there is to printing without having a real printer installed on the computer. Join me tomorrow when I begin a new week on the Hey, Scripting Guy! Blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Increase Your Productivity by Using the PowerShell Pipeline

Summary: Microsoft Scripting Guy, Ed Wilson, teaches you how to use the Windows PowerShell pipeline to increase your productivity and to avoid writing scripts.

Hey, Scripting Guy! Question Hey, Scripting Guy! I keep hearing about the pipeline in Windows PowerShell, but I do not get it. I see commands, but I am not sure what they are actually doing. In addition, why do I need to use a pipeline in the first place? Surely, I can store stuff in variables, and use the ForEach command to walk through collections. I just don’t get it. I guess you can tell that I am an old VBScripter, and I guess old habits are hard to break. But really, what’s up with the pipeline?

—DG

Hey, Scripting Guy! Answer Hello DG,

Microsoft Scripting Guy, Ed Wilson, is here. Well, this new year is already shaping up to be an exciting one. The Scripting Wife and I are hard at work on a new series of Scripting Wife blogs in preparation for the 2012 Scripting Games. This year’s games will be the biggest and the best ever. I have been talking to various presidents of Windows PowerShell Users Groups from around the world, and I plan to work with them to help them get their groups up to speed for this year’s games. If you are not a member of a Windows PowerShell user group, check the Windows PowerShell Group website to see if there is one near you. If there is not one, and if you would like to start one where you live, contact me via the scripter@microsoft.com email address, and I will help point you in the right direction. 

Anyway, DG, you are not the first person to ask me about pipelines in Windows PowerShell. For people who approach Windows PowerShell from a strictly VBScript background, or even from a strict Windows background, the idea of a pipeline is somewhat foreign. At times, the process seems to work seamlessly, and at other times, it does not work at all. In some cases, the command appears to make sense, but in other cases, it does not. And often there seems to be neither rhyme nor reason to the syntax of the commands themselves.

A basic example of a pipeline

A good way to see a pipeline in action is to create an instance of the Notepad process, retrieve the newly created process, and stop that process. To start a new instance of the Notepad process, I use the Start-Process cmdlet. I then use Get-Process to retrieve the process object, and I pipe the object to the Stop-Process cmdlet.

Start-Process notepad

Get-Process notepad | Stop-Process

What happens under the covers is that the process object that is retrieved by the Get-Process cmdlet is sent to the InputObject parameter of the Stop-Process cmdlet. The Stop-Process cmdlet then stops each process contained in the process object that comes across the pipeline. If we did not have the pipeline in Windows PowerShell, I would perform this operation exactly the same way as I did in the VBScript days…I would store the objects in a variable, and pass the variable to the InputObject parameter of the Stop-Process cmdlet. This technique is shown here:

Start-Process notepad

$notepad = Get-Process notepad

Stop-Process -InputObject $notepad

Examining the pipeline mechanics

I can confirm that the process object passes to the InputObject parameter by using the Trace-Command cmdlet.

Note: Microsoft MVP and Honorary Scripting Guy, Don Jones, reminded me about using this cmdlet in his great column, PowerShell with a Purpose. (I had not played with it much since I wrote my book, Windows PowerShell 2.0 Best Practices).

To use the Trace-Command cmdlet to display ParameterBinding information, I supply ParameterBinding to the Name parameter—that is the name of the trace source that I want to trace. I use the PSHost switched parameter to tell the cmdlet to send the trace output to the Windows PowerShell host, and I then specify the expression I want to trace. The expression goes into a script block (delimited by a pair of curly brackets). The complete command is shown here.

Trace-Command -Name parameterbinding -PSHost -Expression {get-process notepad | stop-process}

The command to trace the parameter binding of the Get-Process command as it is piped to the Stop-Process cmdlet, along with the output associated with this command is shown in the image that follows.

 Image of command output

Using different cmdlets in the pipeline

A number of cmdlets typically combine with other cmdlets to assist in working with objects on the pipeline. These cmdlets accept input via the pipeline, and they perform services such as grouping, sorting, or filtering the information before they pass it along to other cmdlets. These cmdlets are listed here (a quick example of combining them follows the list):

  • Get-Unique
  • Group-Object
  • Select-Object
  • Sort-Object
  • Tee-Object
  • Where-Object
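As a quick sketch of how these cmdlets combine on the pipeline, the following command groups services by their status and then sorts the resulting groups by size (any cmdlet that emits objects would work equally well as the source):

Get-Service | Group-Object -Property status | Sort-Object -Property count -Descending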

One of the most important of these pipeline cmdlets is the Where-Object cmdlet. The reason the Where-Object cmdlet is so important is that it filters the input it receives. It is quite common for a cmdlet to produce a lot of information. For example, the Get-Process cmdlet returns an awful lot of data, only a small portion of which appears in the default display. If I am interested in obtaining information about only processes named svchost on my computer, I can use the Where-Object cmdlet to filter out all of the process objects that do not have a name of svchost.

In the command that follows, I first retrieve a collection of process objects by using the Get-Process cmdlet (one for each process that runs on the computer). I then send these process objects one at a time across the pipeline. When each process arrives on the other side of the pipeline, the Where-Object cmdlet examines it.

As the Where-Object cmdlet looks at each process that comes across the pipeline, it uses the $_ automatic variable to represent the current object in the pipeline. Therefore, I use the $_ variable to examine the Name property of the current object. If the name of the object (represented by the $_ automatic variable) is equal to svchost, the object passes the filter. The code that filters the svchost processes is shown here.

Get-Process | Where-Object { $_.name -eq 'svchost'}

There is nothing wrong with using the Where-Object cmdlet to limit the returned process information to the svchost process. But in this exact example, the Get-Process cmdlet happens to have a Name parameter that filters processes by name. The difference is that when using the Where-Object cmdlet, Get-Process returns information about every process on the system. Then all the process information enters the pipeline, and the Where-Object cmdlet performs the filtering. This is inefficient, and when working against a remote system, it can cause a significant performance hit on the command. The command to return process objects that are named svchost is shown here.

Get-Process -Name svchost

If I am curious about the difference in performance between using the Where-Object cmdlet or using the Name parameter, I can use the Measure-Command cmdlet to determine which command is faster. The syntax to measure the two commands is shown here.

measure-command { Get-Process | Where-Object { $_.name -eq 'svchost'} }

measure-command { Get-Process -Name svchost }

The image that follows illustrates using the Measure-Command cmdlet and the associated output from those commands.

 Image of command output

On my system, the command using the Where-Object cmdlet takes 24 milliseconds, and the command that does not use the pipeline takes 1 millisecond. The Measure-Command cmdlet is not accurate with subsecond measurements, and in reality, there is no meaningful difference between 1 millisecond and 24 milliseconds. If using one command instead of the other causes you to pause and consider the syntax, your performance gain is lost. However, if you are querying for svchost information from 1000 servers that are distributed across the network, the differences are likely to become pronounced.
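For example, when many servers are involved, filtering at the source keeps all of the extra process data off the wire. Here is a hedged sketch of that idea (the server names are placeholders; the ComputerName parameter of Get-Process is available in Windows PowerShell 2.0):

# Hypothetical server names; substitute your own list

$servers = "Server01","Server02","Server03"

# Only the svchost process objects come back across the network

Get-Process -Name svchost -ComputerName $servers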

DG, that is all there is to using the pipeline and filtering out data. Pipeline Week will continue tomorrow when I will talk about sorting objects.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Order Your Output by Easily Sorting Objects in PowerShell

Summary: Much of the time, there is no guarantee to the order in which Windows PowerShell returns objects. This blog explains how to fix that issue.

Hey, Scripting Guy! QuestionHey, Scripting Guy! I have a problem with Windows PowerShell. It seems that it deliberately randomizes the output. I mean, there seems to be no rhyme or reason to the way that information is returned from a cmdlet. Am I alone in this frustration, or is there some secret sauce that I am missing? Windows PowerShell is cool, but if I always have to put data into an Excel spreadsheet just to sort it, then it is not much better than VBScript in my mind. Help me, oh fount of Windows PowerShell wisdom.

—CD

Hey, Scripting Guy! AnswerHello CD,

Microsoft Scripting Guy, Ed Wilson, is here. The other day, the Scripting Wife and I were at the first ever Windows PowerShell User Group meeting in Charlotte, North Carolina. It was really cool. We love being able to interact with people who love Windows PowerShell as much as we do. Next month, we are having a script-club type of meeting; we encourage people to show up with the Windows PowerShell scripts they are working on, so it will be a show-and-tell type of meeting.

Use Sort-Object to organize output

Anyway, after the user group meeting, when we were all standing around, one of the attendees came up to me and asked me in what order Windows PowerShell returns information. The answer is that there is no guarantee of return order in most cases. The secret sauce is to use the built-in sorting mechanism from Windows PowerShell itself. In the image that follows, the results from the Get-Process cmdlet appear to sort on the ProcessName property.

Image of command output

One could make a good argument that the processes should sort on the process ID (PID) or on the amount of CPU time consumed, or on the amount of memory utilized. In fact, it is entirely possible that for each property supplied by the Process object, someone has a good argument for sorting on that particular property. Luckily, custom sorting is easy to accomplish in Windows PowerShell. To sort returned objects in Windows PowerShell, pipe the output from one cmdlet to the Sort-Object cmdlet. This technique is shown here where the Sort-Object cmdlet sorts the Process objects that are returned by the Get-Process cmdlet.

Get-Process | Sort-Object id

The command to sort the Process objects on the ID property and the output associated with that command are shown in the image that follows.

Image of command output

Reversing the sort order

By default, the Sort-Object cmdlet performs an ascending sort—the numbers range from small to large. To perform a descending sort, use the Descending switch.

Note:  There is no Ascending switch for the Sort-Object cmdlet because that is the default behavior.

To arrange the output from the Get-Process cmdlet such that the Process objects appear from largest process ID to the smallest (the smallest PID is always 0—the Idle process), choose the ID property to sort on, and use the Descending switch as shown here:

Get-Process | Sort-Object id –Descending

The command to perform a descending sort of processes based on the process ID, and the output associated with that command are shown in the image that follows.

Image of command output

When you use the Sort-Object cmdlet to sort output, keep in mind that the first positional argument is the property or properties upon which to sort. Because Property is the default parameter, using the parameter name Property in the command is optional. Therefore, the following commands are equivalent:

Get-Process | Sort-Object id –Descending

Get-Process | Sort-Object -property id –Descending

In addition to using the default first position for the Property argument, the Sort-Object cmdlet is aliased by sort. By using gps as an alias for the Get-Process cmdlet, sort as an alias for Sort-Object, and a partial parameter of des for Descending, the syntax of the command is very short. This short version of the command is shown here.

gps | sort id –des

Sorting multiple properties at once

The Property parameter of the Sort-Object cmdlet accepts an array (more than one) of properties upon which to sort. This means that I can sort on the process name, and then sort on the working set of memory that is utilized by each process (for example). When multiple property names are supplied, the command sorts on the first property, and then sorts on the second property within each group of identical first-property values.

The resulting output may not always meet expectations, and therefore, may require a bit of experimentation. For example, the command that follows sorts the process names in a descending order. When that sort completes, the command does an additional sort on the WorkingSet (ws is the alias) property. However, this second sort is only useful when there happen to be multiple processes with the same name (such as the svchost process). The command that is shown here is an example of sorting on multiple properties.

Get-Process | Sort-Object -Property name, ws –Descending

The figure that is shown here illustrates the output from the command to sort Process objects based on name and ws properties.

Image of command output

When the name and ws properties reverse order in the command, the resulting output is not very useful because the only sorting of the name property happens when multiple processes have an identical working set of memory. The command that is shown here reverses the order of the WorkingSet and the process name properties.

Get-Process | Sort-Object -Property ws, name –Descending

The output shown here demonstrates that there is very little grouping of process names. In this example, adding the name property does not add much value to the command.

Image of command output

Sorting and returning unique items

At times, I might want to see how many different processes are running on a system. To do this, I can filter duplicate process names by using the Unique switch. To count the number of unique processes that are running on a system, I pipe the results from the Sort-Object cmdlet to the Measure-Object cmdlet. This command is shown here.

Get-Process | Sort-Object -Property name -Descending -Unique | measure-object

To obtain a baseline that enables me to determine the number of duplicate processes, I drop the Unique switch. This command is shown here.

Get-Process | Sort-Object -Property name -Descending | measure-object
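To take this one step further, a quick sketch that subtracts one count from the other reports the number of duplicate process names directly:

$all = (Get-Process | Measure-Object).count

$unique = (Get-Process | Sort-Object -Property name -Unique | Measure-Object).count

"$($all - $unique) duplicate process name(s) are running"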

Performing a case sensitive sort

One last thing to discuss when sorting items is the CaseSensitive switch. When used, the CaseSensitive switch sorts lowercase letters first, then uppercase. The following commands illustrate this.

$a = "Alpha","alpha","bravo","Bravo","Charlie","charlie","delta","Delta"

$a | Sort-Object –CaseSensitive

When the two previous commands run, the output places the lowercase version of the word prior to the uppercase version. This output appears in the figure that follows.

Image of command output

CD, that is all there is to sorting with Windows PowerShell. Pipeline Week will continue tomorrow when I will talk about grouping things with Windows PowerShell.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use the PowerShell Group-Object Cmdlet to Display Data

Summary: Microsoft Scripting Guy, Ed Wilson, teaches how to use the Windows PowerShell Group-Object cmdlet to organize data.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have been using Windows PowerShell more these days. I find it really easy to use, and I like the way I can find things. But what I need is a better way to view things. For example, I have been saving stuff as a CSV file, and then opening the data in Microsoft Excel. This works OK, but I would like to be able to avoid the middleman so to speak. In other words, I want to be able to group information so it is easier to read. Can you think of a shortcut for grouping information?

—MS

Hey, Scripting Guy! Answer Hello MS,

Microsoft Scripting Guy, Ed Wilson, is here. MS, I am glad that you are using Windows PowerShell more on a daily basis. In fact, learning Windows PowerShell makes a great New Year’s resolution, which it seems some people are actually doing, based on tweets coming across on Twitter. In a couple of weeks, Tim Bolton will publish a guest blog that talks about why it is good to learn Windows PowerShell; in his initial email to me, he stated that this was something he wrestled with for quite some time.

One of the things about learning to use Windows PowerShell is that Windows PowerShell can slice-and-dice data so easily. It becomes a quick data analysis tool that allows network administrators, analysts, and others to parse data to quickly discover and remediate issues. It can also be used to audit baseline information, or even to spelunk through reams of statistical data.

One cmdlet that allows this analysis is the Group-Object cmdlet. In its most basic form, the Group-Object cmdlet accepts a property from objects in a pipeline, and it gathers up groups that match that property and displays the results. For example, to check on the status of services on a system, pipe the results from the Get-Service cmdlet to the Group-Object cmdlet and use the Status property. The command is shown here.

Get-Service | Group-Object -Property status

The command to group services based on the status of the service, along with its resultant output, is shown in the following image.

Image of command output

In the Group field of the output from the Group-Object cmdlet, the objects that are grouped appear. The output indicates that each grouped object is an instance of a ServiceController object. This output is a bit distracting.

In situations where the grouping is simple, the Group output might actually be useful. An example of this is the grouping of numbers, as shown in the image that follows. (Group is an alias for Group-Object).

Image of command output

If the grouping information does not add any value, omit it by using the NoElement switched parameter.
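The revised command to display the status of services is shown here (this is the standard NoElement syntax); the associated output appears in the image that follows.

Get-Service | Group-Object -Property status -NoElement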

Image of command output

One of the cool things to do with the Group-Object cmdlet is to use it to return a hash table of information. I have written extensively about hash tables in the past on the Hey, Scripting Guy! Blog; and in fact, I even wrote an entire week of blogs that talked about the basics of using hash tables.

Here are the steps for using the Group-Object cmdlet to return a hash table of information:

  1. Pipe the objects to the Group-Object cmdlet.
  2. Use the AsHashTable switched parameter and the AsString switched parameter.
  3. Store the resulting hash table in a variable.

An example of using these steps is shown in the code that follows.

$hash = Get-Service | group status -AsHashTable –AsString

After it is created, view the hash table by displaying the content that is stored in the variable. This technique is shown here.

PS C:\> $hash

 

Name                           Value

----                           -----

Running                        {System.ServiceProcess.ServiceController, System.S...

Stopped                        {System.ServiceProcess.ServiceController, System.S...

At this point, the output does not appear to be more interesting than a simple grouping. But, the real power appears when accessing the key properties (those stored under the Name column). To access the objects stored in each of the key values, use dotted notation, as shown here.

$hash.running

The command to create the hash table of service information and to access the running services by using dotted notation are shown in the image that follows.

Image of command output

I can index into the collection by using square brackets and selecting a specific index number. This technique is shown here.

PS C:\> $hash.running[5]

 

Status   Name               DisplayName

------   ----               -----------

Running  AudioEndpointBu... Windows Audio Endpoint Builder

If I am interested in a particular running service, I can pipe the results to the Where-Object cmdlet (the question mark is an alias for Where-Object). This technique is shown here.

PS C:\> $hash.running | ? {$_.name -match "bfe"}

 

Status   Name               DisplayName

------   ----               -----------

Running  BFE                Base Filtering Engine

In addition to being able to group directly by a property, such as running services, it is also possible to group based on a script block. The script block becomes sort of a where clause. To find the number of services that are running and support a stop command, use the Group-Object cmdlet and a script block. This command is shown here.

PS C:\> Get-Service | group {$_.status -eq "running" -AND $_.canstop}

 

Count Name                      Group

----- ----                      -----

   61 True                      {System.ServiceProcess.ServiceController, System....

  115 False                     {System.ServiceProcess.ServiceController, System....

 

MS, that is all there is to using the Group-Object cmdlet to group data. I invite you to join me tomorrow for more Windows PowerShell goodness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Troubleshoot Exchange Server Public Folders

Summary: Microsoft PFE, Seth Brandes, discusses using Windows PowerShell to troubleshoot a customer problem with Exchange Server public folders.

Microsoft Scripting Guy, Ed Wilson, is here. Today we are happy to have Seth Brandes as our guest blogger.

Seth Brandes is a premier field engineer who works in Microsoft Services. He began his foray into programming as a young lad punching the keys in BASIC on a Tandy TRS-80 back in the late 80s, and he gradually progressed through Pascal, VisualBasic, and C/C++/C#. He has most recently latched on to Windows PowerShell v2. He currently specializes in the Microsoft Exchange Server platform, providing Microsoft premier customers in the midwest guidance and support. He is currently gearing up his blog site and hopes to have it online in the near future.

Take it away Seth…

I was attending an internal Microsoft event a couple months ago, and one of my peers introduced me to Ed Wilson, THE Scripting Guy. Yes, I did get an autograph! He extended an invitation to me to write a guest blog illustrating a Windows PowerShell example related to Exchange Server, and needless to say, I could not refuse.

Recently I had an interesting inquiry posed by one of my customers regarding an apparent inconsistency with the Get-PublicFolderStatistics cmdlet in their Exchange Server 2010 environment. They had users calling their Help Desk complaining that certain public folders appeared empty of all content. Running the Get-PublicFolderStatistics cmdlet against the folders in the Exchange Management Shell returned non-zero metrics as if there was content present! I directed them to run the cmdlet with the -server switch against each server that contained replicas of the affected folder. They found discrepancies between the replicas because some were lacking content. We tied it back to a 2003 to 2010 routing group connector outage when they were migrating public folders. They updated the replicas and were placated. Great. Problem solved. Or so I thought…

Shortly after we wrapped that up, they had recurrences of the issue in several other folders. Therefore, they wanted a repeatable method that they could easily invoke to determine which folders had “mismatched” replica content in their hierarchy. I promptly informed them they already had a great monitoring system, the best ever designed in fact: the end user. 

That didn’t go over so well; so, I thought this would be a great opportunity to create a simple little script sample in Windows PowerShell that can display replica metrics for any folder (or folder tree) to determine if any replicas have mismatched metrics. Note that any high-volume folder will likely have slightly off numbers when comparing replicas because source data needs a little time to get to replica copies—even under the best of circumstances. This script should be a piece of cake. 

I didn’t realize at the onset just how big the cake was going to be and what flavor. But I managed to cobble together a working script sample. Is it the best approach ever? Don’t know, probably not. Did I learn some things I didn’t know before? Sure. Is it a decent enough script to point out and review some cool Windows PowerShell concepts? I would say, “Yes.” Therefore, I decided to write this blog.

Because I had a preconceived picture about how I wanted the output to look, I started by sketching a visual output of the content on the whiteboard, thinking to myself that this should be doable. I envisioned a table with each row containing the folder and the item count of each replica, where the server names of the replicas would be the columns. Easy right? Let us get a mockup of how this should look according to my imagination. This mockup is shown in the following table.

Folder Path           Replica1  Replica2  ReplicaX

-----------           --------  --------  --------

\Level1\Sub1                    2         2

\Level1\Sub1\SubSub1            3         3

\Level2               68

Not too shabby. It’s pretty much a formatted table output. Let us get that filed away—check.

Next, I wrote a brief outline of the overall flow:

  1. Obtain folder as input so I know what to look for.
    Wait…do I want to get all fancy and do recursion to look at a portion of the tree hierarchy too? Umm...sure, but let’s leave it up to the user to decide.
  2. Obtain input as whether or not to recurse. I mean, really, who doesn’t curse, recurse, and recurse again public folders? Seriously.
  3. Grab the properties of the folder(s) in the folder list.
  4. Because we are interested in only a couple of folder properties, we’ll specifically target the folder path, a unique identifier of the folder, and the list of replicas.
  5. For each replica, call the Get-PublicFolderStatistics cmdlet to obtain the item count metric…

DOH! This won’t work for replicas that are homed on Exchange Server 2003. What to do, what to do? The usual suspects for getting at 2003 are DAV, CDO, or WMI. I don’t really know DAV or CDO programming, and I know there are built-in WMI-based cmdlets in Windows PowerShell, so I’ll give WMI a whirl. Let’s rewrite the logic for step 5:

5. For each replica, determine if it is being hosted on a server running Exchange Server 2003 or on a server running Exchange Server 2007 or Exchange Server 2010. If it is 2003, make a WMI query via the Get-WMIObject cmdlet to the server to obtain the item count; otherwise, use the Get-PublicFolderStatistics cmdlet to obtain the item count. Store the results somewhere. Where? How? Why? Patience my young Padawan, patience. We’ll get to it.
6. Rinse and repeat for each folder (ala steps 4-5).
7. When all the information has been gathered and stored in a meaningful place, spit out the output as described earlier.
8. Exit chair, open fridge, grab a cold drink.

 Eight steps ain’t so bad—especially step 8! Let’s get coding!!! Here is the opening to the script:

#get our input variables from the command line

Param(

  #full PF path

  [Parameter(Mandatory=$true)]

  [ValidateNotNullOrEmpty()]

  [string] $Path,

  #do we recurse?

  [switch] $Recurse

  )

We start by defining the input parameters of the script. The first is the folder path that will be stored as a string, and because we can’t do anything without it, we throw in a parameter on the parameter (er, pun intended?) to make it mandatory, along with a validation attribute to ensure it is not a null or empty value. The second is the recurse switch, which, when defined with the type [switch], does not include a value; rather, its presence alone is enough to use in a Boolean manner. So far, so good. Now add a bit of logic…

if($recurse) #recurse was requested, get the folder and all of its descendants

  {

  $pfCollection = Get-PublicFolder $Path -ErrorAction silentlycontinue -recurse | Select identity,replicas,@{Name="LegPFDN"; Expression = {$_.identity.legacyDistinguishedName}}

  }

else #recurse was not requested, get only the folder defined

  {

  $pfCollection = Get-PublicFolder $Path -ErrorAction silentlycontinue | Select identity,replicas,@{Name="LegPFDN"; Expression = {$_.identity.legacyDistinguishedName}}

  }

if(!$pfcollection) #validate a folder path was actually found, if not exit the script

  {

  Write-Host "ERROR! Could not find folder path: $path"

  Write-Host "Please use a valid folder path."

  exit

  } 

In the previous code, we are doing three things:

  1. Validating whether or not the user specified to use recursion.
  2. Grabbing the properties that interest us about the folder(s) in question via the Get-PublicFolder cmdlet.
  3. Validating that we actually found at least the root folder specified—else, what’s the point right? Note that we are suppressing error output because we don’t care if the command fails to find a folder. We do our own validation after the fact, and keep the output nice and clean.

Note that we are grabbing three properties specifically:

  1. Identity. This is the full name of the folder including the folder path. This is useful for our final output later. Stay tuned.
  2. Replicas. This is a collection of one or more replicas, which we will use shortly to figure out which server they are on to use in querying for its replica’s item count stats/metrics. Again, stay tuned.
  3. LegPFDN. Wha??? Huh?

That’s not a property of Get-PublicFolder! Here we are actually grabbing the LegacyDistinguishedName of the public folder, but because that’s a lot to type (and more importantly, I’m lazy), I elected to “rename” it to LegPFDN by using a simple hash table definition to assign a custom “name” for our purposes. I love this technique because it can be used in so many places and the expression can include calculated values. So cool!
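As an aside, the same calculated-property technique works anywhere Select-Object does. Here is a minimal sketch that requires no Exchange at all (the WS_MB name is just something I made up for illustration):

# Rename a long property and compute a value on the fly

Get-Process | Select-Object -First 5 name, @{Name="WS_MB"; Expression = {[math]::Round($_.workingset / 1MB, 1)}}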

Anywho…it’s important to note that the results will include the property name of LegPFDN and not LegacyDistinguishedName. Got it? Good. I know you’re burning to know why this specific property—read on and find out my friend, read on…

Before we continue with the code, let’s take a quick timeout to explain my chosen data storage methodology. I burned many a brain cell trying to figure out a cool, but useful, way to store the results in a manner that would make it easy to produce the output that I envisioned and described above. I decided to store all the replica item counts of a given folder into a custom object, aka record. Each “folder” record would contain the following properties:

  • Folder Path, which contains the identity of the folder, including its full path.
  • A separate property for each replica of the folder with the name of the replica’s server as the property name and its value, which contains the item count.

Each of these records is appended, as it is completed, to a master array called $pfResults. This manner of storing the data provides a very simple method to display the data the way I want to. However, output of the content completely falls apart when records in the set contain dissimilar property names.

This little nugget drove me crazy until I came up with a way around it. My ego tells me it’s good enough, but my common sense is still telling me that it’s a hatchet job at best. Oh well. It works. For now. To illustrate what the heck I’m alluding to here, let’s paint a picture. Here’s what the script actually stores, as output by Format-List against the $pfResults array:

Folder Path : \Level1\Sub1

Replica2    : 2

ReplicaX    : 2

Folder Path : \Level1\Sub1\SubSub1

Replica2    : 3

ReplicaX    : 3

Folder Path : \Level1\Sub2\

Replica1    : 68

When you attempt to view this output with Format-Table, the third item’s Replica information will not be written out, and it will look like this instead:

Folder Path           Replica2  ReplicaX

-----------           --------  --------

\Level1\Sub1          2         2

\Level1\Sub1\SubSub1  3         3

\Level2

The reason is that the first element in the array is used to define the table’s columns. In this case, the first record in the array does not contain any property named Replica1, so that property never makes it into the table formatting “definition.”
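Here is a tiny standalone repro of that behavior, using two throwaway objects (no Exchange needed):

# The second object's extra property never appears, because the first object defines the columns

$a = New-Object psobject

Add-Member -InputObject $a -MemberType NoteProperty -Name One -Value 1

$b = New-Object psobject

Add-Member -InputObject $b -MemberType NoteProperty -Name One -Value 1

Add-Member -InputObject $b -MemberType NoteProperty -Name Two -Value 2

$a, $b | Format-Table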

For my purposes, this is the absolute worst. Instead of just throwing it away, I was determined (my better half would use the word “stubborn” here) to find a way around this. So, in came the hatchet job. I created a one-off custom object, and as I parse through all the replicas of each folder (or just the single folder if not using recursion), I simply add the server name to the object—but not any duplicates.

Why am I doing this? Because if I inject this custom record into the very first position in the array, it will be used for the Format-Table output definition! This will force all the columns to be present. Sweet! Is it a hatchet job? Certainly. So Whut?!

Back to the code…

#define the array containing our overall folder(s) and metrics results

$pfResults = @()

#Define a custom object which will store the names of all servers

#containing replicas from (all) the folder(s).

$ReplicasObj = new-object system.object

Add-Member -InputObject $ReplicasObj -MemberType NoteProperty "Folder Path" ""

Define my master array that will hold the records. Also defined that one-off object I have named $ReplicasObj. Because this object will eventually be the first record in the array, I also throw in a property called “Folder Path” since it is critical to be defined for use as a column header.

#loop through each folder in the hierarchy collection.

foreach ($pf in $pfCollection)

  {

There is an overall loop occurring for all the folders in the original collection. If recursion was not used, it will simply exit after processing the single folder. Neato.

#create temp object to store information about replica

  $tmpObj = New-Object system.object

  #add the name of the folder including the full path

  Add-Member -InputObject $tmpObj -MemberType NoteProperty "Folder Path" $pf.Identity

We create a new custom temporary object that will hold this iteration’s folder replicas info. Again, we are storing two or more properties: the folder identity and each replica’s information. First, we add the “Folder Path” in the form of the folder identity.

foreach ($replica in $pf.replicas) #step through each replica of the folder

    {

We then start a new loop inside the current loop to cycle through all the replicas of the folder and start processing the replica information.

#store replica's server name into variable for use below.

    $replicaServer = (Get-PublicFolderDatabase $replica.distinguishedname).server.name

            #add server to Replicas object to be used as the column headers for the $pfresults array

    Add-Member -InputObject $ReplicasObj -MemberType NoteProperty $replicaServer "" -ErrorAction SilentlyContinue

Get the replica’s server name and store it into the one-off object that will be used for the column headers in the first array element.

Note: To prevent getting duplicate server names into the record, I simply suppress the error handling because by default, you cannot have two properties with the same name in an object using the Add-Member cmdlet (it normally throws an error if a duplicate property is added to the record). If I wanted to overwrite an existing property (which I don’t, so this is just informational), I would use the -Force switch as an override.

#determine which version of exchange the replica is homed on (as defined in KB158530)

   if ( (get-exchangeserver $replicaServer).admindisplayversion.major -lt 8)

      { #Exchange 2003 server

      #grab item count via WMI since get-publicfolderstatistics will not work against a 2003 server. Store result in the temp object.

      add-member -inputObject $tmpObj -membertype NoteProperty $replicaserver (Get-WmiObject -ComputerName $replicaServer -Namespace "root\MicrosoftExchangeV2" -Class "Exchange_PublicFolder" -Filter "targetaddress = ""$($pf.legPFDN)""").messagecount

      }

We are finally ready to gather the metrics for the current replica, but we need to invoke the proper command, so we query the properties of the Exchange server that the replica is homed on to figure out what its major version is. Per KB158530, if the administrative version is less than 8, it is not going to be Exchange Server 2007 or Exchange Server 2010, so the Get-PublicFolderStatistics cmdlet goes out the window. I am making an assumption that Exchange Server 5.5 and Exchange Server 2000 are not in play, and I am therefore finding Exchange Server 2003.

As I mentioned earlier, because there are precanned cmdlets for WMI—and I happen to know a WMI call to query public folder information—I’m going with it! Here is where that pesky long named property of LegacyDistinguishedName (which we are calling LegPFDN) comes in. I use it as the filter criteria for the WMI call (and I’ll use it also for the 2007/2010 search below…stay tuned).

Why this property specifically? Er, we’re gonna go on a slight tangent here to explain…

<TANGENT>Basically, I ran into some logistical issues in the past when calling public folders via WMI where I wanted to perform built-in recursion, for example, with a filter using the Path attribute. Such a filter would be structured like “path>=’/level1/Sub1/’”. The problem is that if the name of another folder at the same level is literally greater than “Sub1” from a comparison perspective (such as “Sub2”), Sub2’s folder (and all of its descendant folders) would get caught up and returned in the results. Not good.

So, at the time, I elected to use the LegacyDistinguishedName. Plus, using the Path attribute requires further monkeying around because with WMI, the path is returned with forward slashes instead of backslashes, and so on; I’ve just found it kinda messy.</TANGENT>

Is there a better attribute to use for the folder identity that is searchable via WMI and Get-PublicFolderStatistics? I don’t know. Please tell me so I can repent and learn the error of my ways.

In any case, we store the results of the WMI call that contains the item count into a new property of the temporary object, and we name the property the server name that is hosting the replica.

else

      { #Exchange 2007 or 2010 server

      #grab item count via get-publicfolderstatistics since replica is on 2007 or 2010. Store result in the temp object.

      add-member -inputObject $tmpObj -membertype NoteProperty $replicaserver (Get-PublicFolderStatistics $pf.legPFDN -Server $replicaserver).itemcount

      } #end of the else

    } #end of the replica loop 

If the version dictates Exchange Server 2007 or Exchange Server 2010, we simply apply the exact same logic as previously, only this time we get the item count by using the very versatile Get-PublicFolderStatistics cmdlet. Rinse-and-repeat for any remaining replicas.

The end result is a record that contains the folder identity and all the replicas with their item counts all stored as properties. Yay!

  #dump the content of the temp object into the results array

  $pfResults += $tmpObj

  } #end of the folder loop 

Back in the main folder loop, we take the newly completed temporary object and append it to the master array that is holding our collection of objects. Double yay!

#prepend the ReplicasObject to the beginning of the array to allow a format-table to display all columns

$pfResults = @($replicasObj,$pfResults)

#output the results of the script to the console

$pfResults

We finally arrive at the tail end of the script. All of the objects have been loaded into the array, but if we attempt to display a table with the array elements, we’d potentially start missing entire chunks due to that pesky “first element defines the output definition” business we explained earlier. So, we simply inject that one-off object, which contains all the server names plus the “Folder Path” entry, to be used for our column headers. Nice. Then, we simply call the array, and it paints itself to the console in all its glory in a table format.

Note: There may be a need to pipe it to Format-Table if there are enough columns in play or if you want to punch it through to an Export-CSV cmdlet (for example). But because this is only a sample script, I’m taking the easy way out. Customize it to your heart’s content…
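For example, assuming the sample is saved under a file name of your choosing (the script name and output path below are made up for illustration), invoking it and exporting the results could look like this:

# Hypothetical script name and output path, shown only for illustration

.\Get-PFReplicaItemCounts.ps1 -Path "\Level1\Sub1" -Recurse | Export-Csv -Path C:\Temp\PFReplicaReport.csv -NoTypeInformation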

To summarize…

In my opinion, we covered some pretty cool Windows PowerShell concepts:

  • Using hash tables to customize output
  • Using custom objects to store data
  • Using an array as a record set to store other data types
  • Manipulating output of an array that contains dissimilar objects
  • Determining the version of Exchange Server that is running
  • Illustrating different ways to access public folder statistics information, depending on the version of Exchange Server that is hosting the folder replica
  • Using different types of parameters for script input

 The complete script can be found in the Script Center Repository.

Seth~

Thanks, Seth, for sharing your knowledge and time. Join us tomorrow as Bhargav Shukla shares another blog about Exchange Server.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell and RBAC to Control Access to Exchange Server Cmdlets

Summary: Microsoft PFE, Bhargav Shukla, shows how to use Windows PowerShell and RBAC to control access to Exchange cmdlets.

Microsoft Scripting Guy, Ed Wilson, is here. We are joined today by guest blogger Bhargav Shukla.

Bhargav Shukla is a senior premier field engineer—unified communications, with his primary focus on the Exchange Server platform. Bhargav has been in IT since the beginning of his career 14 years ago. Before joining Microsoft, he managed to work on almost any technology an IT consultant would be required to know, including Active Directory, Exchange, RSA Security, VMware, Citrix, and Cisco. He also holds industry certifications such as Microsoft Certified Master: Exchange Server 2010, VMware Certified Professional, Citrix CCEA, RSA: CSE, and Cisco CCNA/CCDA. He started working on scripting with small DOS batch scripts in his early career, and he learned to be a better scripter with new scripting languages. From batch files to VBScript and on to Windows PowerShell, he has written many scripts to address specific needs and reusable functions for repetitive code. When he is not working with customers, Bhargav leads the Philadelphia Area Exchange Server User Group, shares his knowledge on his blog and twitter, plays chess, and flies model airplanes.

Bhargav's contact information:
Blog: Random thoughts of an Exchange PFE Blog
Twitter: bhargavs

RBAC and the principle of least privilege

The principle of least privilege is an important design consideration in enhancing the protection of data and functionality from unintentional and/or malicious behavior.

Exchange Server 2010 aids the implementation of least privilege by using role-based access control (RBAC). However, in my professional experience, I have noticed that many deployments are not actually thought out to utilize the full potential of what RBAC has to offer. Most often, I see deployments where built-in RBAC roles are utilized and rarely customized to match the actual job roles of administrators. This usually results in roles granting too much access.

In this blog, I will try to explain a technique that will enable you to understand and apply the principle of least privilege with RBAC, especially when the administrators in question may not even be familiar with Exchange Server cmdlets, and you only want them to run a predefined sequence or a script that meets your requirements and aligns with business workflow.

Let’s take an example of a requirement to update the mailbox properties of users calling the Help Desk. You are required to provide tools so the Help Desk can carry out the necessary operations. You are also required to ensure that the business process workflow is followed. Giving access to appropriate cmdlets and parameters is relatively easy, but you don’t want to provide direct access to cmdlets because it may enable Help Desk users to run cmdlets outside of the confines of the workflow, and that could result in an undesired configuration of objects.

You decide to create a script that adheres to business process workflow. You do not want to distribute the script to Help Desk users because that can enable curious Help Desk users to edit the script as they desire.

By now, you may be asking, “How are we going to do this if we aren’t providing user access to cmdlets or script?” This is where the RBAC unscoped roles come in!

Let’s start with the business logic for a new user creation:

  • User must be created on a given database. For this exercise, we will use “Mailbox Database 0489489499.”
  • User must be created in a given organizational unit (OU). For this exercise, we will use “Fabrikam Users” OU.

Although this is very basic and simple logic, the process that we are going to follow can be applied to much more complex business cases, and this is merely an illustration of how you can do it.

The script looks like this:

## This script accepts a single parameter Name

param([string]$Name=$(Throw "Parameter missing: -name Name"))

$UPNSuffix = "@fabrikam.com"

$MDB = "Mailbox Database 0489489499"

$OU = "Fabrikam Users"

$Password = ConvertTo-SecureString 'MyPassword123' -AsPlainText -Force

New-Mailbox $Name -UserPrincipalName $Name$UPNSuffix -Database $MDB -Password $Password -OrganizationalUnit $OU

Let’s call the script New-CustomMailbox.ps1.

The first step is to copy the script to all servers running Exchange Server 2010 that the Help Desk has the ability to connect to (usually all internal server roles except Edge). We will copy the script to the RemoteScripts folder (usually located at “C:\Program Files\Microsoft\Exchange Server\V14\RemoteScripts”).

Next, we need to assign the “Organization Management” group the ability to create unscoped roles. This is a protective measure because unscoped roles are potentially dangerous. Due to that potentially dangerous nature, by default no one in a given Exchange Server 2010 organization is assigned this role. This can be verified by running:

Get-ManagementRoleAssignment *unscoped* | fl *

The output would result in exactly one entry that looks similar to the following:

            …..

Identity                     : UnScoped Role Management-Organization Management-Delegating

…..

RoleAssignmentDelegationType : DelegatingOrgWide

…..

…..

It is important to note RoleAssignmentDelegationType. It reads “DelegatingOrgWide” and not “Regular.” It is the “Regular” assignment that actually allows the assignee the ability to execute given cmdlets or parameters.
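A quick way to check whether anyone holds a Regular (that is, executable) assignment of the role is to filter on that property; this sketch simply builds on the command shown above:

Get-ManagementRoleAssignment *unscoped* | Where-Object { $_.RoleAssignmentDelegationType -eq "Regular" }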

A simpler way to verify this is to try running the following:

New-ManagementRole –Un<TAB>

If you are familiar with Windows PowerShell syntax, this should result in auto completion of the parameter –UnScopedTopLevel…that is, if you have access to it. Unless the administrator has already assigned this access (see the following command), you will not be able to tab complete or run this cmdlet with the –UnScopedTopLevel parameter.

By running the following, we will assign the “Organization Management” group the ability to create unscoped roles:

New-ManagementRoleAssignment "Unscoped Role Management-Organization Management" -Role "Unscoped Role Management" -SecurityGroup "Organization Management"

Now that the ability to create unscoped roles has been assigned to the administrator, let’s create a new management role that Help Desk users will eventually use. You will need to establish a new Windows PowerShell session before doing this, because changes made by the cmdlet won’t be effective until then.

New-ManagementRole "Helpdesk Provisioning Script" –UnScopedTopLevel

This cmdlet will create an empty top-level unscoped role. It is important to understand a bit about top-level and unscoped. If you are aware of RBAC basics, you know that each new management role that you create must use a built-in role as a parent—except unscoped roles, which do not have any parent. Due to its nature as a top-level role, it doesn’t have any scope assigned either. You will need to manage the scope of impact from the script that you are going to assign to the role. You also need to control who has access to the server running Exchange Server 2010 and the script location to avoid unauthorized access and modifications.

Now that we have the unscoped role in place, we need to add the management role entry (the script we created above). To do this, use the command that follows.

Add-ManagementRoleEntry "Helpdesk Provisioning Script\New-CustomMailbox.ps1" -Parameters Name –UnScopedTopLevel

The next step is to create a role group with members (Help Desk members in our example) that will have access to the unscoped role we just created. Here is the command to accomplish that.

New-RoleGroup -Name "Helpdesk Provisioning" -Roles "Helpdesk Provisioning Script"

Finally, assign users to the role group that we just created (in our example, helpdesk1 is the user who will have access to the role). Use the Add-RoleGroupMember cmdlet to accomplish that task.

Add-RoleGroupMember -Identity "Helpdesk Provisioning" -Member helpdesk1

Help Desk user, helpdesk1, can now connect to the Exchange Management Shell. If a Help Desk user runs Get-ExCommand from the Exchange Management Shell, there is only one cmdlet available.

Note: Get-ExCommand is not available in a remote connection using Windows PowerShell; it is only available in the Exchange Management Shell.

The following cmdlet will be available to the Help Desk user:

[PS] C:\>get-excommand

CommandType     Name                      Definition

-----------     ----                      ----------

Function        New-CustomMailbox.ps1     ...

Now the Help Desk user can execute the following to create the user as defined by script:

[PS] C:\>New-CustomMailbox.ps1 -Name user1

By now, you have probably picked up the fact that although the user is running a script that in turn runs the New-Mailbox cmdlet, the user doesn’t have direct access to the cmdlet that the script is running. This prevents the user from directly creating new mailboxes while bypassing business logic that must be applied when creating a new mailbox.
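If you want to double-check exactly what the role grants, you can list its role entries. This quick check simply reuses the role name from our example:

Get-ManagementRoleEntry "Helpdesk Provisioning Script\*"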

Isn’t that some power in your hands? Isn’t that a feature that the Exchange Server team deserves kudos for?

~Bhargav

Thank you, Bhargav, for a great blog. Join me tomorrow for the Weekend Scripter.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Group and Format Output

Summary: Microsoft Scripting Guy, Ed Wilson, teaches how to use Windows PowerShell to group and to format output.

Microsoft Scripting Guy, Ed Wilson, is here. One of the cool things about Windows PowerShell is that it allows you to work the way that you like to work. The other day, I was making a presentation to the Charlotte PowerShell Users Group. The photo that follows shows me talking, with the Scripting Wife and Microsoft PFE Jason Walker, at this first-ever meeting.

Photo

One of the attendees asked, “Is Windows PowerShell a developer technology or a network administrator type of technology?” Before I could even answer the question, someone else jumped in and said that Windows PowerShell is really powerful, and that it has a number of things that would appeal to developers. However, he continued, the main thing about Windows PowerShell is that it allows you to process large amounts of data very quickly. Cool, I thought to myself; I did not see the need to add anything else to the conversation.

One of the fundamental aspects of working with data is grouping the data to enable viewing relationships in a more meaningful way. Earlier this week, I looked at using the Group-Object cmdlet to group information.

The Group-Object cmdlet does a good job of grouping Windows PowerShell objects for display, but there are times when a simple grouping might be useful in a table. The problem is that the syntax is not exactly intuitive. For example, it would seem that the command that is shown here would work.

Get-Service | Format-Table name, status -GroupBy status

When the command runs, however, the output (shown in the following image) appears somewhat jumbled.

Image of command output

In fact, the first time I ran across this, the output confused me because it looks like it is grouping the output. The second time I ran across this grouping behavior, the output seriously disappointed me because I realized that it was not really grouping the output. Then it dawned on me: I needed to sort the output prior to sending it to the Format-Table command. I therefore modified the command to incorporate the Sort-Object cmdlet. The revised command is shown here.

Get-Service | Sort-Object status | Format-Table name, status -GroupBy status

After it is sorted by the Status property, the service information displays correctly in the grouped table. This revised output is shown in the image that follows.

Image of command output

As might be expected, this non-grouping behavior also exists with the Format-List cmdlet, which also has a GroupBy parameter. The code that follows appears to group the output, until one takes a closer look.

Get-Service | Format-List name, status -GroupBy status

A look at the output (shown in the following image) shows that the grouping occurs only when consecutive services share the same status.

Image of command output

The fix for the grouped output from the Format-List cmdlet is the same as the fix for the Format-Table cmdlet—first sort the output by using the Sort-Object cmdlet, then pipe the sorted service objects to the Format-List cmdlet for grouping. The revised code is shown here.

Get-Service | sort-object status | Format-List name, status -GroupBy status

The revised command and the associated sorted output from the command are shown in the image that follows.

Image of command output

One of the cool things to do with the Format-List cmdlet is to use a ScriptBlock in the GroupBy parameter. Once again, it is necessary to sort the output prior to sending it to the Format-List cmdlet. In fact, you may need to sort on more than one property, as illustrated in the code that follows. (This code is a single line command that is broken at the pipe character for readability).

Get-Service | sort-object status, canstop -unique |

Format-List name, status –GroupBy {$_.status -eq 'running' -AND $_.canstop}

To make the output easier to assess, I added the Unique switched parameter to the Sort-Object cmdlet to shorten the output. Interestingly enough, the output reports two services for the first condition. This is because, for each of those services, the -AND combination evaluates to False.

Image of command output

Format-Table also accepts a ScriptBlock for the GroupBy parameter. It works the same way that the Format-List behaves. The code that follows creates two tables, one that evaluates to False, and one that evaluates to True.

Get-Service | sort-object status, canstop -unique |

Format-table name, canstop, status -GroupBy {$_.status -eq 'running' -AND $_.canstop}

The image that follows illustrates creating a table that groups output based on a ScriptBlock.

Image of command output

Well, that is about all there is to grouping output information by using the Format-Table and the Format-List cmdlets.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 
