Channel: Hey, Scripting Guy! Blog

Configure PowerShell Remoting and Use Remote Commands


Summary: Learn how to configure Windows PowerShell remoting, store credentials, and use remote commands.

 

Hey, Scripting Guy! I keep hearing about Windows PowerShell remoting, but I do not really know what Windows PowerShell remoting is. For example, I can use the Get-WmiObject cmdlet to work with remote computers. Is that remoting? If so, what is the big deal? I could do that in Windows PowerShell 1.0. Is this just a bunch of typical marketing hype, or is there something that actually works here and I am missing it?

—GB

 

Hello GB,

Microsoft Scripting Guy Ed Wilson here. GB, I hate to tell you, but you are missing the boat. There is a common misconception about what Windows PowerShell remoting really is. There are several ways to run a command on a remote computer:

  1. Windows Management Instrumentation (WMI) can target a remote computer via the computername parameter. The Get-WmiObject cmdlet also allows for alternate credentials.
  2. The computername cmdlets (not including the Get-WmiObject cmdlet). Many cmdlets have a computername parameter; these cmdlets permit making a connection to a remote computer and retrieving information from it. However, not all of these cmdlets have a credential parameter, and therefore they must run with administrator rights on the remote computer. In addition, in many cases these cmdlets require specific firewall ports to be open, and even certain services to be running on the remote machine, before they will work properly. To find the cmdlets that have a computername parameter, use the following Windows PowerShell command:

    get-command -CommandType cmdlet | where { $_.definition -match 'computername'}
  3. Some computername cmdlets do permit the use of alternative credentials. These cmdlets allow you to run a command against a remote machine and to specify the context in which to run the command. This solves the problem of supplying alternative credentials. To find these cmdlets, use the following Windows PowerShell command (gcm is an alias for the Get-Command cmdlet, and ? is an alias for the Where-Object cmdlet; the command is a single command with no line continuation; it has wrapped in the output here, but would appear on a single line in the Windows PowerShell console):

    gcm -CommandType cmdlet | ? { $_.definition -match 'computername' -AND $_.definition -match 'credential'}
  4. True Windows PowerShell remoting is the other way to run a Windows PowerShell command on a remote computer. Windows PowerShell remoting uses Windows Remote Management (WinRM) as the underlying technology, and it is therefore firewall friendly. WinRM is Microsoft’s implementation of the WS-Management protocol, which allows hardware and operating systems from different vendors to interoperate. It is an industry standard, and it is extremely powerful.

The first thing to do is to enable and configure Windows PowerShell remoting. To do this, use the Enable-PSRemoting cmdlet. Enable Windows PowerShell remoting on all machines that will communicate. If this only involves a couple of computers, using the Enable-PSRemoting cmdlet works fine. But if you need to turn on Windows PowerShell remoting for an entire organizational unit, domain, or forest, it is better to use Group Policy. Unfortunately, there are myriad Group Policy settings from which to choose, and I like the convenience of the Enable-PSRemoting cmdlet. Therefore, I use Group Policy to assign a logon script that calls the Enable-PSRemoting cmdlet. I discuss this technique in the Enable PowerShell Remoting to Enable Running Commands blog post.
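On a single machine, enabling remoting can look like the following sketch (run from an elevated Windows PowerShell console; the Force parameter suppresses the individual confirmation prompts):

```powershell
# Run from an elevated Windows PowerShell console.
# -Force suppresses the individual confirmation prompts;
# remove it to review each configuration step interactively.
Enable-PSRemoting -Force

# Quick sanity checks: the WinRM service should be running,
# and the default session configurations should exist.
Get-Service -Name WinRM
Get-PSSessionConfiguration
```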

After it is configured, I like to use the Invoke-Command cmdlet to ensure that Windows PowerShell remoting works properly. By running a simple command such as hostname.exe, I ensure that Windows PowerShell remoting works, and confirm the actual location that ran the command. Here is the syntax:

invoke-command -cn syddc01 -credential contoso\administrator -scriptblock {hostname}

After I have run that command, I know everything works. Now, I like to store my credentials in a variable to make it easier to run remote commands. To do this, I use the Get-Credential cmdlet. Here is the command I use:

$cred = Get-credential contoso\administrator

When the command runs, the following dialog box is displayed.

Image of dialog box displayed when command is run

After I have a credential object stored in the $cred variable, I use it to invoke a remote command. These commands are shown here:

$cred = Get-Credential contoso\administrator

invoke-command -cn syddc01 -cred $cred -script {hostname}

Now, I want to use the Invoke-Command cmdlet to run the Get-Process cmdlet on a remote computer and use alternate credentials. The syntax for this command is shown here (this command uses the cn computername alias for Invoke-Command, the credentials stored in the $cred variable, and the gps alias for the Get-Process cmdlet):

Invoke-Command -cn syddc01 -Credential $cred -ScriptBlock {gps}

The command and associated output are shown in the following figure.

Image of command and associated output

GB, that is all there is to getting Windows PowerShell remoting up and running, and storing credentials in a credential object to simplify running remote commands. Join me tomorrow when I will talk about creating remote Windows PowerShell sessions.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

 


Learn How to Manage Remote PowerShell Sessions


Summary: Microsoft Scripting Guy Ed Wilson discusses how to manage remote Windows PowerShell sessions.

 

Hey, Scripting Guy! I can see where being able to run a command on a remote computer might be kind of cool. But more often than not, I need to actually sit at the console of the remote computer. For this, I have been using Remote Desktop Protocol (RDP) to display the remote desktop. I then open Windows PowerShell and do my work. It is okay, but a bit slow. Is there a better way to do things?

—CH

 

Hello CH,

Microsoft Scripting Guy Ed Wilson here. Well, things are certainly getting exciting. It is now official—there is a Windows PowerShell Users group in Pittsburgh, Pennsylvania, in the United States. The first meeting is on December 13, 2011, from 6:00 P.M. until 8:00 P.M., and the Scripting Wife and I have been invited to the first meeting. Mark your calendars! If you are within driving range (or even a short flight), you do not want to miss this exciting presentation. The meeting will be held at the Microsoft office in downtown Pittsburgh. I will be speaking about Windows PowerShell best practices (and the Scripting Wife will be there signing autographs, and being nice and fun to talk to). Make sure you go to the PowerShell Community Groups and join the Pittsburgh PowerShell Users Group, and register for the meeting. This will allow the president of the group, Ken McFerron, to know how much food to order for the meeting.

CH, the first thing I do when working with remoting is use the Get-Credential cmdlet to store a credential object in a variable. This allows for a great deal of flexibility, and permits me to try various commands without the need to constantly type the user name and password. The command here prompts for the password for the administrator account of the Contoso domain:

$cred = Get-Credential -Credential contoso\administrator

If all I need to do is to run a few Windows PowerShell commands on the remote computer, I will go ahead and directly enter into a Windows PowerShell session. To do this, I use the Enter-PSSession cmdlet and specify both the remote computer name as well as the credentials to use. Here is the command I use to enter into a remote Windows PowerShell session:

Enter-PSSession -ComputerName syddc01 -Credential $cred

In the following figure, I store credentials in a variable, enter a remote Windows PowerShell session, and use the hostname.exe command to determine the name of the computer to which I am connected.

Image of determining name of computer to which connected

After I enter a remote Windows PowerShell session, I can use the Windows PowerShell cmdlets without worrying about firewall issues or remote credentials. The cmdlets work as if I were sitting at the remote computer's Windows PowerShell console. For example, I can use Get-Process or Get-Service directly without specifying computername parameters. This is great because neither Get-Process nor Get-Service exposes a credential parameter, so it is more complicated to run those commands with alternate credentials.

In the following figure, I am connected to the remote server, SydDc01. I first examine the explorer process. Next, I look for services that contain the letters WMI, and then for services that contain the letters ws. I then exit the remote session and run the same commands on the local host. The great thing is that the remote session and the local session share the same Windows PowerShell command history. Therefore, I can simply up-arrow to retrieve the previous commands.

Image of being connected to remote server, SydDc01
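The sequence described above might be sketched as follows (SydDc01 is the author's example server; inside the remote session, the prompt is prefixed with the remote computer name):

```powershell
# Inside the remote session on SydDc01:
Get-Process explorer
Get-Service *wmi*
Get-Service *ws*
exit                  # leave the remote session

# Back on the local host, up-arrow recalls the same
# command history, so the same commands run locally:
Get-Process explorer
Get-Service *wmi*
Get-Service *ws*
```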

I can even enter a remote Windows PowerShell session on another computer, and still use the same history of commands. This makes it really convenient to run the same commands on multiple computers.

To see if there are any Windows PowerShell sessions running, I use the Get-PSSession cmdlet. This is the syntax:

Get-PSSession

One of the things I like to do is to store a session. This allows me to enter and leave the remote Windows PowerShell session without worrying about overhead. To do this I use the Get-Credential cmdlet to obtain a credential object to use with the remote computer for creating a remote Windows PowerShell session. I then store the credential object in a variable. Syntax for this command is shown here:

$cred = Get-Credential

Next, I use the New-PSSession cmdlet to create a new session to a remote machine. I specify a name for the session, and the computername of the remote computer. For credentials, I use the credentials I stored in the $cred variable. I store the returned PSSession object in a variable:

$syddc01 = New-PSSession -Name syddc01 -ComputerName syddc01 -Credential $cred

Now that I have a PSSession stored in a variable, I can use the Enter-PSSession cmdlet to enter into a remote Windows PowerShell session. Here is the command I use to enter the PSSession stored in the $syddc01 variable:

Enter-PSSession $syddc01

I can now work in the Windows PowerShell remote console as if I were working on my local computer. I can use the Get-Service cmdlet to return information about the bits service. I can also use wildcard characters with Get-Service to display information about every service that matches a wildcard pattern. I can then exit the Windows PowerShell session by using the exit command. Next, I can use the same commands on my local computer, and then return to the remote session by again using the Enter-PSSession cmdlet. I can again exit the session, and run commands locally. These commands are shown here:

Get-Service -name bits 

 

Exit

Get-Service -Name *ii*

Enter-PSSession $syddc01

Get-Service *net*

Exit

Get-Service *net*

The commands and associated output are shown in the following figure.

Image of commands and associated output

After I am finished with my remote work, I remove the PSSession; removing unused PSSessions frees up resources. The easy way to remove a PSSession is to pipe the results of Get-PSSession to Remove-PSSession. This command is shown here:

Get-PSSession | Remove-PSSession
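To remove only a specific session instead of all of them, the name assigned at creation time can be used (a sketch, reusing the session name from the earlier New-PSSession command):

```powershell
# Remove just the named session created earlier:
Get-PSSession -Name syddc01 | Remove-PSSession

# Confirm that no sessions remain:
Get-PSSession
```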

CH, that is all there is to reusing a Windows PowerShell remote session. Join me tomorrow as I talk about more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

 

Ed Wilson, Microsoft Scripting Guy

 

Use the PowerShell Passthru Parameter and Get Back Objects


Summary: Learn how to use the passthru parameter in Windows PowerShell to return objects from commands and allow more management tools.

 

Hey, Scripting Guy! I have a rather curious question that I have not been able to find anything about. What is up with the passthru parameter? I see it on some commands, and not on others. Also, I have no idea what it really does, but when I see it, it seems to do cool stuff. But when I try to use it, all I do is get errors. Is this some secret Microsoft trick?

—ML

 

Hello ML,

Microsoft Scripting Guy Ed Wilson here. This is an exciting day! Yesterday, I announced that Pittsburgh will have its first PowerShell Users Group meeting on December 13, 2011. Today, I get to announce that Charlotte, North Carolina, has also formed a PowerShell Users Group. They will have their first meeting in January.

ML, you are right, the passthru parameter seems to be mysterious. Perhaps a few examples will show how it works. First of all, passthru is not one of the common parameters, and it does not exist everywhere. The common parameters are:

  • -Verbose
  • -Debug
  • -WarningAction
  • -WarningVariable
  • -ErrorAction
  • -ErrorVariable
  • -OutVariable
  • -OutBuffer

There are also two parameters that are available when a command will change system state (such as Start-Process, Stop-Process). The two risk mitigation parameters are:

  • -WhatIf
  • -Confirm

To find all of the Windows PowerShell cmdlets that have a passthru parameter, I use the Get-Command cmdlet. I then pipe the resulting CmdletInfo objects to the Where-Object cmdlet and look for matches on passthru. The resulting command is shown here (in the following command, gcm is an alias for the Get-Command cmdlet; a commandtype of 8 is a cmdlet; ? is an alias for the Where-Object cmdlet):

gcm -CommandType 8 | ? {$_.definition -match 'passthru'}

When I run this command on Windows PowerShell 2.0 with no added modules or snap-ins, it returns 44 cmdlets. This means that, by default, there are 44 cmdlets that have a passthru parameter.
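To obtain that count directly rather than scrolling through the list, the results can be piped to the Measure-Object cmdlet (a sketch; the exact count varies with the Windows PowerShell version and any loaded modules or snap-ins):

```powershell
# Count the cmdlets whose definition mentions a passthru parameter.
Get-Command -CommandType Cmdlet |
    Where-Object { $_.Definition -match 'passthru' } |
    Measure-Object
```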

So, what does passthru do for me? There are many Windows PowerShell cmdlets that simply do their work and do not return any data. An example is the Start-Process cmdlet. Here is an example of using the Start-Process cmdlet to start Notepad. Notice that the line following the command is empty; this is because nothing is returned from the command:

PS C:\> Start-Process notepad

PS C:\>

If I add the passthru switch parameter to the end of the command, a Process object returns to the Windows PowerShell console. The nice thing about this is that I can use this Process object to track and work with the newly created instance of Notepad. The command to start the Notepad process and to return a Process object to the Windows PowerShell console is shown here:

Start-Process notepad -PassThru

The command and associated object is shown in the following figure.

Image of command and associated object

If I store the returned Process object in a variable, I can then use it to obtain additional information about the process. In the following code, I store the returned Process object in a variable named $notepad. I then examine the start time of the process, and finally I stop the process by piping the Process object to the Stop-Process cmdlet:

$notepad = Start-Process notepad -PassThru

$notepad.StartTime

$notepad | Stop-Process

The commands and associated output are shown in the following figure.

Image of commands and associated output

Another cmdlet that contains a passthru parameter is the Copy-Item cmdlet. When I use the cmdlet to copy a file from one location to another location, nothing returns to the Windows PowerShell console. In the following command, I copy the a.txt file from the c:\fso folder to the C:\fso31 folder. Nothing is returned to the Windows PowerShell console:

Copy-Item -path C:\fso\a.txt -Destination C:\fso31

If I would like to see information about the copied file, I use the passthru switch parameter. The revised syntax is shown here:

Copy-Item -path C:\fso\a.txt -Destination C:\fso31 -PassThru

The command and associated output are shown in the following figure.

Image of command and associated output

The returned object is an instance of a FileInfo object. To work with the file, I store the returned FileInfo object in a variable named $text. I can now directly access any of the properties of the FileInfo object. My favorite property is the FullName property, which points to the complete file name as well as the path to the file. When I am finished working with the file, I can easily remove it by piping the FileInfo object stored in the $text variable to the Remove-Item cmdlet. These commands are shown here:

$Text = Copy-Item -path C:\fso\a.txt -Destination C:\fso31 -PassThru

$text.GetType()

$text.FullName

$text | Remove-Item

The commands and associated output are shown in the following figure.

Image of commands and associated output

By default, when using the Import-Module cmdlet to import a module into the current Windows PowerShell session, nothing is returned. In the following example, I import my Unit Conversion Module; nothing appears on the Windows PowerShell console line:

PS C:\> Import-Module conv*

PS C:\>

Now, I will remove the unit conversion module and try it again. This time, I will use the passthru parameter. The revised command results in a PSModuleInfo object returned to the Windows PowerShell console. This is useful because it shows the name of the imported module, and it lists the commands exported by the module. The commands and associated output are shown in the following figure.

Image of commands and associated output
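The remove-and-reimport sequence might be sketched as follows (conv* matches the author's unit conversion module; substitute the name of any module available on your system):

```powershell
# Remove the module, then import it again, asking for the
# PSModuleInfo object to be passed through to the console.
Remove-Module conv*
Import-Module conv* -PassThru
```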

 

Well, ML, as you can see, using the passthru parameter forces Windows PowerShell to pass newly created or modified objects along instead of hiding them. By knowing when to use the parameter, you gain great flexibility.

That is all there is to using the passthru parameter. Join me tomorrow for more cool stuff about Windows PowerShell.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

 

Ed Wilson, Microsoft Scripting Guy

 

 

Query AD for Computers and Use Ping to Determine Status


Summary: Learn how to use Windows PowerShell to query Active Directory for computers, ping for status, and display in green/red.

 

Microsoft Scripting Guy Ed Wilson here. While the Scripting Wife and I were out in California speaking to the Microsoft Premier Field Engineers (PFEs) about Windows PowerShell, a question arose. One PFE said he had a customer that needed to send a ping to a number of computers. The customer did not need a lot of information about the status, but rather a simple yes or no report on connectivity. Ping was the tool of choice due to its simplicity. The problem is that Ping returns too much information. What the customer really wanted was an output that displayed each computer name in green (for up) or in red (for down): a simple status board of the computer names. One complication, the PFE related, is that the collection of computers to monitor changes on nearly a daily basis. Therefore, the customer wants to query Active Directory for the computer names.

We left the reception Thursday night, and I went up to the room and put together this script. It could have been a “one-liner,” but it would have been difficult to read, so I spread it out over a couple of lines. Here is the complete Query Active Directory and Ping Computers script in Windows PowerShell (for ease of use, I uploaded the script to the Scripting Guys Script Repository):

Query Active Directory and Ping Computers

Import-Module active*

$rtn = $null

Get-ADComputer -Filter * |

ForEach-Object {

  $rtn = Test-Connection -CN $_.dnshostname -Count 1 -BufferSize 16 -Quiet

  IF($rtn -match 'True') {write-host -ForegroundColor green $_.dnshostname}

  ELSE { Write-host -ForegroundColor red $_.dnshostname }

}

The first thing the Query Active Directory and Ping Computers script does is import the ActiveDirectory module. I have written about this module quite a bit. In the Install Active Directory Management Service for Easy PowerShell Access post, I go into detail about the requirements to set up the Active Directory Management gateway. Next, I set the value of the $rtn variable to $null. This helps to avoid problems if there is already a $rtn variable with a different value.

I then use the Get-ADComputer cmdlet from the ActiveDirectory module (I have used the Get-ADComputer cmdlet numerous times on the Hey, Scripting Guy! Blog) to return all computers. The following line of code returns an ADComputer object from the Microsoft.ActiveDirectory.Management namespace:

Get-ADComputer -Filter *

By default, the ADComputer object contains only a few properties. The default properties are shown here:

DistinguishedName
DNSHostName
Enabled
Name
ObjectClass
ObjectGUID
SamAccountName
SID
UserPrincipalName

Other properties exist on a computer object. For example, the following figure from ADSI Edit illustrates the properties that contain a value for the W7Client computer.

Image of properties containing a value for W7Client computer

To obtain access to additional property values, I need to add them to the property parameter (the small subset of properties returned by default is due to performance concerns). Here is an example of returning the DNSHostname and the lastlogon attribute values:

Get-ADComputer -Filter * -Properties lastlogon | select dnshostname, lastlogon

Unfortunately, the Test-Connection cmdlet does not accept piped input, and I need to use the Foreach-Object cmdlet to walk through the collection of ADComputer objects and ping the DNSHostname of each computer. To speed things along, I send one ping and reduce the BufferSize. I store the results in a variable named $rtn:

ForEach-Object {

  $rtn = Test-Connection -CN $_.dnshostname -Count 1 -BufferSize 16 -Quiet

The Test-Connection cmdlet with the quiet parameter returns $true or $false, depending upon whether the connection succeeds. Because the value is a Boolean, I could test it directly with IF($rtn); matching on the string 'True', as the script does, also works because the Boolean converts to its string form during the match. If Test-Connection returns true, I display the DNSHostname of the computer in green. Otherwise, I display the DNSHostname of the computer in red. This portion of the script is shown here:

ForEach-Object {

  $rtn = Test-Connection -CN $_.dnshostname -Count 1 -BufferSize 16 -Quiet

  IF($rtn -match 'True') {write-host -ForegroundColor green $_.dnshostname}

  ELSE { Write-host -ForegroundColor red $_.dnshostname }

}

The script and associated output are shown in the following figure.

Image of script and associated output

 

Well, that is about all there is to querying Active Directory Domain Services for computer accounts, and pinging them to see if they are up and running or not. Join me tomorrow for more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

 

Ed Wilson, Microsoft Scripting Guy

 

 

Change a PowerShell Preference Variable to Reveal Hidden Data


Summary: Change the $FormatEnumerationLimit Windows PowerShell preference variable and display more data in the console.

 

Microsoft Scripting Guy Ed Wilson here. It is approaching the holiday season in Charlotte, North Carolina, in the United States. From now until the middle of January 2012, many companies are on “IT lockdown” and are not making any changes. In fact, many companies have been on IT lockdown for more than a month already. There are several reasons for this lockdown. One is the number of people taking vacation during November and December. Another is the number of year-end reports that must run during this time of year. Because of those critical reports, many companies do not want to risk anything adverse happening to their IT infrastructure, so they freeze all changes until after the new year.

One big advantage of having an IT lockdown towards the end of the year is it provides time for IT pros to take advantage of either formal or informal training opportunities. Labs are built, scenarios are tested, and much learning takes place.

At the Scripting Household, we are also in IT lockdown mode, and the Scripting Wife is ensuring no unplanned outages occur because of infrastructure changes. This also means I have time to experiment and to learn new things.

I recently found something in the Windows PowerShell help files I had either not previously noticed or had forgotten. I was reading the about_Preference help topic. What I found is the $FormatEnumerationLimit preference variable. By default the $FormatEnumerationLimit preference variable has a value of 4, and it determines how many items are displayed when a property contains more than a single item.

To obtain the current $FormatEnumerationLimit, I directly query the variable. In the following figure, I query $FormatEnumerationLimit. Next, I use the Get-Service cmdlet to return information about all services that begin with the letters win. I first pipe the results to the Format-Table cmdlet and choose the name and the dependentServices property. I use the autosize parameter to tighten up the display. Next, I repeat the command and pipe the results to the Format-List cmdlet. In both cases, there is plenty of room in the Windows PowerShell console window to display additional DependentServices, but the space is not utilized because the number of items enumerated is limited to four, which is the default setting of the $FormatEnumerationLimit preference variable. The three commands are shown here:

$FormatEnumerationLimit

get-service -Name win* | format-table name, dependent* -AutoSize

get-service -Name win* | format-list name, dependent*

The commands and associated output are shown in the following figure.

Image of commands and associated output

I change the value of the $FormatEnumerationLimit variable to 20. Next, I retry my two Get-Service commands. The three commands are shown here:

$FormatEnumerationLimit = 20

get-service -Name win* | format-table name, dependent* -AutoSize

get-service -Name win* | format-list name, dependent*

The commands and associated output are shown in the following figure.

Image of commands and associated output

When $FormatEnumerationLimit is set to the default value of 4, a command that retrieves all processes beginning with the letter w, sorts them by pagedmemorysize, and displays a table containing the name, every property that begins with the letters page, and the threads fits neatly in a table. The problem is that the threading information truncates after four thread values. The command is shown here:

Get-Process w* | sort pagedmemorysize | ft name, page*, threads -Wrap -AutoSize

The command and associated output are shown in the following figure.

Image of command and associated output

When the same command runs with the $FormatEnumerationLimit set to 20, the output spreads out more. The advantage is that all the thread IDs appear in the output.

Image of thread IDs in the output

 

Well, this is the easy way to see behind the ellipsis in some of the output. I do not think I will add the command to my profile, but it is definitely something to keep in mind when I want to see more output and do not want to use the Select-Object -ExpandProperty command. I will see you tomorrow when I begin a new week on the Hey, Scripting Guy! Blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

Make a Simple Change to PowerShell to Prevent Accidents


Summary: Learn how a simple change to a preference variable can help prevent accidental changes when using Windows PowerShell.

 

Microsoft Scripting Guy Ed Wilson here. When I was teaching my Windows PowerShell Best Practices class out in Irvine, California, recently, I was talking to the class about the whatif parameter. I did my usual introduction to the feature, which goes something like this:

“How many of you have ever typed a command at the command line, and you did not know in advance exactly what the command would do?” (Most of the hands in the room rise). “Okay, now, how many of you have ever done that on a production server?” (Most of the hands lower, but a considerable number are still up. I note for the benefit of the class that my hand is still up.)

One reason so many hands were up is that, when troubleshooting or attempting to repair servers that have problems, it is common to find a TechNet article that says to open the command prompt, type a dozen cryptic commands, and press Enter. It is not that network administrators are cowboys who ride happily off into uncharted territory; rather, they are put into the unenviable situation of having to make hard choices: type the command and hope for the best, or suffer along and hope the server stays up for a bit longer.

I boldly proclaim that if they use the whatif switch on Windows PowerShell commands, it will add at least a fractional percentage point to their system uptime. I have no research to back this up, but it stands to reason. A student in California raised her hand and asked this:

“But what if the admin does not use the whatif switch? Then what?”

Well, at first, I felt like saying, “I guess you are out of luck.” But then I remembered a little-known and little-used preference variable—the $WhatIfPreference variable. The $WhatIfPreference variable is hanging out on the Variable drive with a value of false. This is shown in the following code:

PS C:\> dir Variable:\WhatIfPreference

Name                           Value
----                           -----
WhatIfPreference               False

What the $WhatIfPreference variable does is flip the whatif parameter to on for every Windows PowerShell cmdlet that supports a whatif switch. This means that every cmdlet that changes system state will no longer change system state by default. To illustrate, I am going to start an instance of Notepad. I will then use the Get-Process cmdlet to retrieve the process and pipe it to Stop-Process—and the Notepad process goes away. This is shown here:

PS C:\> $ErrorView = "CategoryView"

PS C:\> notepad

PS C:\> get-process notepad | Stop-Process

PS C:\> get-process notepad

ObjectNotFound: (notepad:String) [Get-Process], ProcessCommandException

Next, I change the command and use the whatif parameter when calling the Stop-Process cmdlet. The whatif parameter lets me know that it would stop the instance of the Notepad process with the process ID of 6052, if the command ran without the whatif parameter. I then use the Get-Process cmdlet to confirm that the instance of Notepad with the process ID of 6052 still runs. These commands and associated output are shown here:

PS C:\> notepad

PS C:\> get-process notepad | Stop-Process -WhatIf

What if: Performing operation "Stop-Process" on Target "notepad (6052)".

PS C:\> get-process notepad

 

Handles  NPM(K)  PM(K)  WS(K)  VM(M)  CPU(s)    Id ProcessName
-------  ------  -----  -----  -----  ------    -- -----------
     79       9   4220   8976     82    0.03  6052 notepad

As my student asked, what happens when I forget to use the whatif parameter? Well, of course, the Notepad process goes away, and there is no prompt, no anything. There is not even any sign that the process no longer runs. The only way to confirm that Notepad went away is to use Get-Process. These commands and associated output are shown in the following figure.

Image of commands and associated output

The solution is to turn on the $WhatIfPreference, which I do by setting the value to $true:

$WhatIfPreference = $true

After the $WhatIfPreference is set to $true, any command that would change system state (and therefore any cmdlet that supports the whatif switched parameter) runs with whatif, and therefore does not actually execute the command, as shown here:

PS C:\> $WhatIfPreference = $true

PS C:\> notepad

PS C:\> get-process notepad | Stop-Process

What if: Performing operation "Stop-Process" on Target "notepad (7840)".

PS C:\>

This includes commands to create a new folder (because it makes a change to the system). This is shown here:

PS C:\> md c:\fso4

What if: Performing operation "Create Directory" on Target "Destination: C:\fso4".

PS C:\> test-path c:\fso4

False

If I want to execute a command that changes system state, I need to set the value for whatif to false. When doing this, I must use $False, and not simply false:

PS C:\> get-process notepad | Stop-Process -whatif:false

InvalidArgument: (:) [Stop-Process], ParameterBindingException

PS C:\> get-process notepad | Stop-Process -whatif:$false

PS C:\> get-process notepad

ObjectNotFound: (notepad:String) [Get-Process], ProcessCommandException

Keep in mind that forcing whatif to $False is per command, so a subsequent call to a command that changes system state will execute the whatif behavior unless I once again override the value. This is shown here:

PS C:\> md c:\fso4

What if: Performing operation "Create Directory" on Target "Destination: C:\fso4".

PS C:\> md c:\fso4 -WhatIf:$false 

 

    Directory: C:\

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----        11/11/2011   6:57 PM            fso4

 

PS C:\> test-path c:\fso4

True

The above commands and associated output are shown in the following figure.

Image of commands and associated output

After I close Windows PowerShell and open it up again, the value of the $WhatIfPreference variable resets to $False. I no longer have the added protection of the whatif switch on by default. This appears in the following figure.

Image of $WhatIfPreference variable resetting to $False

The solution, of course, is to add the $WhatIfPreference assignment to the Windows PowerShell profile. I have a number of Hey, Scripting Guy! posts that talk about working with Windows PowerShell profiles or adding items to profiles. I also have an excerpt from my Windows PowerShell 2.0 Best Practices book that covers the different Windows PowerShell profiles. You will need to decide whether you want to enable the preference variable on workstations, on servers, or not at all, or perhaps only for certain users. Personally, I do not have it enabled on any of my systems, but there have been a couple of times when it might have come in handy. For highly available systems, it might be a very good thing to implement.
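For reference, a profile entry along these lines could enable the preference selectively. This is a sketch, not part of the original post; the server names are hypothetical placeholders for your own environment.

```powershell
# Sketch of a profile entry (hypothetical server names; adjust for your
# environment). It turns on -WhatIf by default only on designated machines.
$protectedServers = "SQL01", "EXCH01"       # hypothetical list of protected servers
if ($protectedServers -contains $env:COMPUTERNAME)
{
    # Every state-changing cmdlet now behaves as if -WhatIf were specified
    $WhatIfPreference = $true
}
```

Because the profile runs at every startup, the protection survives closing and reopening the Windows PowerShell console.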

 

That is all there is to using the $WhatIfPreference variable. Join me tomorrow for more Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

 

Ed Wilson, Microsoft Scripting Guy

 

 

Use the PowerShell Debugger to Troubleshoot Scripts


Summary: Learn how to use the Windows PowerShell script debugger to troubleshoot problems with scripts.

 

Microsoft Scripting Guy Ed Wilson here. One of the fun things about traveling, especially to warm places when it is winter back home, is calling to talk to friends and relatives. When they say things like, “It snowed yesterday,” I grimace a little and reluctantly tell them it was 70 degrees Fahrenheit (21 degrees Celsius according to my unit conversion module) in sunny Southern California. But the best thing is getting to work with customers and talking to people about Windows PowerShell.

Invariably, when I am talking to people about writing Windows PowerShell scripts, someone asks about script debugging. To be honest, I rarely fire up a debugger. Never have, even back in the VBScript days. I generally write code in such a way that when a problem occurs, it is obvious where the problem lies and how to correct it. Every once in a while, however, the problem is not obvious, and being able to actually debug the script comes in handy.

In Windows PowerShell 2.0, we introduced several Windows PowerShell cmdlets that make it easier to debug scripts. Today, I want to spend a little time looking at some of the things to do with one of the cmdlets.

Debugging a Windows PowerShell script often involves setting a breakpoint, which causes the script to pause execution. When the script pauses, the Windows PowerShell console drops into debug mode. This special mode permits the use of certain commands, which appear in the table that follows (this table is copied from my Microsoft Press book, Windows PowerShell 2.0 Best Practices).

Keyboard shortcut   Command name      Command meaning
s                   Step-into         Executes the next statement and then stops.
v                   Step-over         Executes the next statement, but skips functions and invocations. The skipped statements are executed, but not stepped through.
o                   Step-out          Steps out of the current function up one level, if nested. If in the main body, it continues to the end or the next breakpoint. The skipped statements are executed, but not stepped through.
c                   Continue          Continues to run until the script is complete or until the next breakpoint is reached. The skipped statements are executed, but not stepped through.
l                   List              Displays the part of the script that is executing. By default, it displays the current line, five previous lines, and 10 subsequent lines. To continue listing the script, press Enter.
l <m>               List              Displays 16 lines of the script beginning with the line number specified by <m>.
l <m> <n>           List              Displays <n> lines of the script, beginning with the line number specified by <m>.
q                   Stop              Stops executing the script, and exits the debugger.
k                   Get-PsCallStack   Displays the current call stack.
<Enter>             Repeat            Repeats the last command if it was Step-into (s), Step-over (v), or List (l). Otherwise, represents a submit action.
h or ?              Help              Displays the debugger command Help.

 

There are several ways to configure breakpoints. For the next several examples, I am going to use the script that is shown here:

MyDebug.ps1

$cn = "localhost"
$process = "notepad"
Get-WmiObject -Class win32_bios -cn $cn
Start-Process $process
Get-Process $process

Keep in mind that the path to the script matters. I am going to set a breakpoint on a command in a script that resides in a specific location. If I run a script with the same name from a different location, that script will not break on the command.
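Although this post demonstrates command and variable breakpoints, the Set-PSBreakpoint cmdlet also accepts a Line parameter. The following sketch (reusing the script path from this post) sets a breakpoint that fires just before the specified line of the script executes:

```powershell
# Break just before line 3 of the script runs
Set-PSBreakpoint -Script C:\fso\mydebug.ps1 -Line 3
```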

When the script reaches the Get-Process cmdlet, it breaks and enters debug mode. In the code that follows, I use the Set-PSBreakpoint cmdlet to set a breakpoint on the Get-Process cmdlet for the script C:\fso\mydebug.ps1. The code and associated output are shown here:

PS C:\> Set-PSBreakpoint -Command get-process -Script C:\fso\mydebug.ps1

  ID Script          Line Command        Variable
  -- ------          ---- -------        --------
   0 mydebug.ps1          get-process

Now, I execute the Windows PowerShell script. When the script hits the command Get-Process, it does not execute the command. Instead, it enters the debugger. I now have access to the commands listed in the table above.

I type the letter L to list the portion of the script that executes. By default, this displays the current line, five previous lines, and 10 subsequent lines. After examining the code, I decide to continue with script execution, and I press the letter C to continue running the script. In this example, it runs the Get-Process cmdlet, and ends the script, because the Get-Process cmdlet is the last line in the script. The setting of the breakpoint, launching of the script, using the L and the C commands in the debugger, and execution of the last line in the script are shown in the following figure.

I next decide to use the Set-PSBreakpoint cmdlet to set another breakpoint. This time, I break on a variable. The key thing to remember here is that when specifying a variable as a breakpoint, do not include the $ prefix; instead, use the variable name without the dollar sign. The command is shown here:

Set-PSBreakpoint -script c:\fso\mydebug.ps1 -Variable cn

When the script reaches the cn variable, it enters the debugger. I change the value assigned to the cn variable from “localhost” to “mred” and use the C command to continue execution of the script. After the script executes a couple of lines of code, it hits the second breakpoint—the breakpoint I set earlier on the Get-Process command. Once again, the script enters the debugger, and once again I use the C command to continue script execution. These commands and associated output are shown in the following figure.

Image of commands and associated output

Now, I decide to see how many breakpoints I have set. I use the Get-PSBreakpoint cmdlet:

Get-PSBreakpoint

The command and associated output are shown here:

PS C:\> Get-PSBreakpoint

 

  ID Script          Line Command        Variable
  -- ------          ---- -------        --------
   0 mydebug.ps1          get-process
   1 mydebug.ps1                         cn

Now, I decide to get rid of all the breakpoints. To do this, I use the Get-PSBreakpoint cmdlet and pipe it to the Remove-PSBreakpoint cmdlet. Next, I use Get-PSBreakpoint to ensure I removed all the breakpoints. The commands and associated output are shown here:

PS C:\> Get-PSBreakpoint | Remove-PSBreakpoint

PS C:\> Get-PSBreakpoint

 

That is all there is to using a command to break into a script. Join me tomorrow when I will continue talking about debugging Windows PowerShell scripts.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

 

Ed Wilson, Microsoft Scripting Guy

 

Use the PowerShell Debugger to Check Variable Values


Summary: Learn how to inspect variable values by using the Windows PowerShell debugger.

 

Microsoft Scripting Guy Ed Wilson here. In yesterday’s post, I talked about using the Set-PSBreakpoint cmdlet to set a breakpoint on a specific script. Today, I want to continue looking at the Set-PSBreakpoint cmdlet.

One of the things I mentioned was that when setting a breakpoint on a script, the script specified must be the same script that breaks. For example, if I set a breakpoint for the c:\fso\mydebug.ps1 script, the only script that breaks when a breakpoint reaches is the c:\fso\mydebug.ps1 script. This is the exact script, from the exact location.

If, on the other hand, I do not specify the script when I create the breakpoint, any script I run causes the breakpoint to trigger. To test this out, I create a script with several commands in it. The script is shown here:

Get-Process.ps1

Get-EventLog application -Newest 1
get-process powershell
Get-Date

I also use the test script from yesterday. The mydebug.ps1 script is shown here:

MyDebug.ps1

$cn = "localhost"
$process = "notepad"
Get-WmiObject -Class win32_bios -cn $cn
Start-Process $process
Get-Process $process

I use the Set-PSBreakpoint cmdlet to create a breakpoint that will break when the command Get-Process appears in a script. The command and associated output are shown here:

PS C:\> Set-PSBreakpoint -Command get-process

  ID Script          Line Command        Variable
  -- ------          ---- -------        --------
   1                      get-process

PS C:\>

When the Get-Process command from the MyDebug.ps1 script appears, the debugger breaks into the script. This allows for the use of debugging commands (the commands appear in a table from yesterday’s article). The C command tells the debugger to continue. In the Get-Process.ps1 script, the script also contains a call to Get-Process, so that script causes the breakpoint to break as well. These two scripts and the associated output including debugger are shown in the following figure.

Image of two scripts and associated output including debugger

One of the really cool things to do with a breakpoint is to specify an action to take when a condition occurs. The action to occur is placed inside a scriptblock. In the following command, I specify that when the Get-Process command appears, the debugger will start the notepad process:

Set-PSBreakpoint -Command get-process -Action {notepad}
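An action can also be made conditional. When an action scriptblock is supplied, the script does not break by default; the break keyword inside the action tells Windows PowerShell to enter the debugger anyway. The condition below (checking the hour of the day) is purely illustrative:

```powershell
# Break into the debugger only when the illustrative condition is true;
# otherwise the action completes and the script keeps running.
Set-PSBreakpoint -Command get-process -Action {
    if ((Get-Date).Hour -lt 18) { break }   # hypothetical condition
}
```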

I modify the Mydebug.ps1 script so that I comment out the Start-Process line. The revised script is shown in the following figure.

Image of modified Mydebug.ps1 script

When run, the script generates an error because it attempts to use Get-Process to retrieve a nonexistent process. In the breakpoint previously specified, however, when the breakpoint is reached, the scriptblock creates the notepad process, so the script works without causing an exception. This is shown in the following figure.

Image of script working without causing an exception

When debugging a script, often I am concerned about the value of a variable. There are three modes for breaking on a variable: read, write, and readwrite. The default value is write. When working with breakpoints on variables, read and write do not refer to the way the variable is declared in the script; rather, each determines when the script breaks. For example, when breaking on a variable in write mode, the script breaks before a new value is written to the variable. In the case of a variable that is not yet declared, the script breaks just before a value is first assigned to the variable. I use the code that follows to set a breakpoint that fires when a value is written to the cn variable:

Set-PSBreakpoint -Variable cn -Mode write

I run the script, and the script breaks before the value “localhost” is assigned to the $cn variable. I then check the value of the $cn variable; nothing is displayed because the variable has not been created. Next, I use the L command to see the code, and I see the script is on the first line, waiting to assign a value to the $cn variable. I then use the C command, and continue execution of the script. The error is generated because I have commented out the Start-Process command (this can be seen in the code listing from the previous command).

Image of generated error

In the following figure, I run the C:\fso\mydebug.ps1 script. The script hits the $cn variable breakpoint and breaks into the script. I check the value of the $cn variable and see it has not been set; therefore, nothing returns. I then assign the value mred to the $cn variable, and I use the L command to view the code. Next, I use the S command to step to the next line in the script. I see that it sets the value notepad for the $process variable. I then step (S) to the next line in the script, where I see that it will make a WMI call to retrieve BIOS information via the Win32_BIOS WMI class. I then step past that (S) and see that the script is getting ready to retrieve process information about the notepad process. However, Notepad is not running because the Start-Process command is commented out. I therefore type Start-Process $Process and then step into the remainder of the script.

Image of running the C:\fso\mydebug.ps1 script

When I break on a variable in read mode, the value of the variable has already been read. This means I can inspect the value of the variable to ensure the script works as intended. In the code that follows, I remove all the current breakpoints, and then I create a new breakpoint that watches the variable $cn to see when it reads the value of the variable. When the value of the $cn variable is read, the script breaks and enters the debugger:

Get-PSBreakpoint | Remove-PSBreakpoint
Set-PSBreakpoint -Variable cn -Mode read

When I query the value of the $cn variable inside the debugger, I see that the variable contains the value “localhost.” That is what I want, so I step over the remainder of the code, thereby executing the remainder of the script. These commands and associated output appear in the following figure.

Image of commands and associated output

That’s it for now. Join me tomorrow for more fun with Windows PowerShell.

 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

 

Ed Wilson, Microsoft Scripting Guy

 


Use the Debugger in the Windows PowerShell ISE


Summary: Learn how to use the debugging tools in the Windows PowerShell ISE to speed development of scripts.

Microsoft Scripting Guy, Ed Wilson, is here. Today I want to talk a little bit about using the Windows PowerShell ISE to debug a script. This is actually the third article this week in which I talk about using Windows PowerShell to debug scripts. In the first article, I talked about working at the Windows PowerShell console and using the Windows PowerShell debugger to help with debugging scripts. I discussed setting breakpoints on scripts by using the Set-PSBreakPoint cmdlet. Next, I talked about setting breakpoints on variables and examining the variables from inside the debugger. I also talked about specifying an action to take when a breakpoint is reached.

One thing to keep in mind when you are working with the debugger in the Windows PowerShell ISE, is that it is still the same Windows PowerShell debugger. For example, if I am working on a Windows PowerShell script, but I have not yet saved the script with a file name, I cannot set a breakpoint; this option is not available.

Once I have saved the script with a file name, I can select a line, and use the Toggle Breakpoint action from the Debug menu to set a breakpoint on the specific line. Once set, the line changes color. When I run the script and the breakpoint is hit, the script enters debugger mode. I can use the immediate window (the execution pane that is normally the bottom pane) to type commands for the debugger. The output pane (normally the middle pane) shows that the script is in debugger mode, and it displays the current output. I can use the normal debugger commands to step into, step over, list the call stack, or other actions that are detailed in the following table.

(Note: This table is copied from my Microsoft Press book, Windows PowerShell 2.0 Best Practices.)

 

Keyboard shortcut   Command name      Command meaning
s                   Step-into         Executes the next statement and then stops.
v                   Step-over         Executes the next statement, but skips functions and invocations. The skipped statements are executed, but not stepped through.
o                   Step-out          Steps out of the current function up one level if nested. If in the main body, it continues to the end or the next breakpoint. The skipped statements are executed, but not stepped through.
c                   Continue          Continues to run until the script is complete or until the next breakpoint is reached. The skipped statements are executed, but not stepped through.
l                   List              Displays the part of the script that is executing. By default, it displays the current line, five previous lines, and 10 subsequent lines. To continue listing the script, press ENTER.
l <m>               List              Displays 16 lines of the script beginning with the line number specified by <m>.
l <m> <n>           List              Displays <n> lines of the script, beginning with the line number specified by <m>.
q                   Stop              Stops executing the script, and exits the debugger.
k                   Get-PsCallStack   Displays the current call stack.
<Enter>             Repeat            Repeats the last command if it was Step (s), Step-over (v), or List (l). Otherwise, represents a submit action.
h or ?              Help              Displays the debugger command Help.

One thing that is a bit annoying when debugging a script with the Windows PowerShell ISE is that debugging commands that are typed while in debug mode do not appear in the ISE output pane like they do when using the Windows PowerShell debugger in the Windows PowerShell console. If I query a variable, or set a value for a variable, those commands appear in the output pane, but commands from the previous table do not appear.

In the following image, I set a breakpoint for the second line of the script by using the Toggle Breakpoint command from the Debug menu. I then ran the script. It hit the breakpoint on the second line and entered debug mode. This is indicated in the output pane as [DBG]. Next, I used the l (L) command to list the lines from the script. The output from this command is visible, but there is no indication of the command that was typed. I then queried the value of the $a variable a second time, and both the command and output appeared. Finally, I used the o (O) command to step over the last line of code, and the script exited.

Image of script

To remove all the breakpoints in a script, I can choose the Remove All Breakpoints command from the Debug menu. I can also use the Get-PSBreakpoint cmdlet to get all the breakpoints, and then use Remove-PSBreakpoint to remove the breakpoints, as shown here:

Get-PSBreakpoint | Remove-PSBreakpoint

These commands are shown in the following image.

Image of script

So, how is all this helpful? For one thing, I can use this to see what Windows PowerShell thinks is going to happen before it actually happens. I can also see what actually took place, just after it happened. In the following image, I am still using the single breakpoint. When the script breaks on line 2, it has not yet executed line 2. First, I check the value that is stored in $a. That value is 55, which according to the script, is correct. Next, I look to see what is stored in variable $b, and it reports back as 29. This should actually be null because the second line has not yet executed.

I figure out that the value comes from the previous time I ran the script. I then change the value of $b to 45. I query the $b variable, and sure enough, it is 45. I then type the s (S) command in the debugger to step into the line and actually execute the second line of code. I query the value of $b, and I see that it is now set back to 29. This proves that the debugger breaks before executing the line of code. I then set it back to 45, query the value of the variable, and see that it is now set to 45. When the script finishes running, I check the value of $c and see that it is 100 (however, this output is off screen in the following image).

Image of script

From the Debug menu, I can only toggle a breakpoint on a line in the script. If I want to do something more sophisticated (such as taking an action when a variable value is written to), I need to use the Set-PSBreakPoint cmdlet as I did in yesterday’s Hey, Scripting Guy! Blog post.

In the following image, I use the Set-PSBreakpoint cmdlet to write out the value of the variable $c to the console in blue, when the value of the $c variable is written to. Here is the Set-PSBreakPoint cmdlet command I use:

Set-PSBreakpoint -Variable c -Mode write -Action {write-host $c -f blue}

After I set the breakpoint, I run the script. When the breakpoint is reached, the action portion of the command executes, and the value contained in the $c variable is written to the output pane in blue. I then use the List Breakpoints command from the Debug menu (this is the same as typing the Get-PSBreakPoint command) to display all breakpoints. As seen in the following image, only one breakpoint is currently in effect.

Image of script

As you can see, working with the debugger in the Windows PowerShell ISE is the same as working at the Windows PowerShell prompt. There is the Debug menu, but it ties back to the Windows PowerShell debugger itself. That is it for now.

Join me tomorrow when I will talk about more cool things to do with Windows PowerShell.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell Commands from One Session in Another Session


Summary: Learn how to automatically save commands from one session, and then use them in a new Windows PowerShell session.

Microsoft Scripting Guy, Ed Wilson, is here. One of the fun things about getting to travel around and talk to people who are using Windows PowerShell on a daily basis to automate their systems is the quality of the questions I receive. One question that has come up several times in the last month that the Scripting Wife and I have been traveling is, “How can I save my history when I exit Windows PowerShell, and then have that history available to me when I open up Windows PowerShell again?”

I gave this some thought…

I decided that it would be trivial to import automatically saved history when Windows PowerShell starts, and that the real issue would be saving the Windows PowerShell history when Windows PowerShell exits. One student who was in my class in Irvine, California, suggested that I automatically save each command to a history file as I type the commands. In this way, when I exit the Windows PowerShell console, I will already have an up-to-date history file. Although this technique is not impossible, it could have an unintended performance hit, and I decided against it.

Another idea I had was to use the Windows PowerShell transcript. It is easy to start the transcript each time Windows PowerShell starts by adding the Start-Transcript command to the Windows PowerShell profile. I could then parse the Windows PowerShell transcript and pull out all of the commands. After I had all of the commands, it would be possible to add them to the history. But that would require creating a HistoryInfo object.

In the end, I decided to create two functions and add them to my profile. The first function imports a saved history.xml file into the current Windows PowerShell session. The second function exports the current history to a history.xml file, and then it exits Windows PowerShell.

The trick is to call the function that exports the history.xml file prior to exiting Windows PowerShell instead of clicking the “X” on the Windows PowerShell console, or typing exit to exit Windows PowerShell. As always, when I create a function, I also like to create an alias for that function. The two commands from my profile that create the aliases are shown here.

New-Alias -Name eps -Value Exit-PsWithHistory -description "mred alias"

New-Alias -Name ips -Value Import-PSHistory -Description "mred alias"

Instead of automatically importing the saved history, I manually type the alias for my import saved history function. The reason for this is that I do not always want to import my saved history; but, this is simply the way I work. I could easily modify my profile so that I do import my history automatically, and then I could simply clear the history if I did not want to use it. It will require a bit of testing before I make up my mind as to which action is most efficient.
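If I ever do decide to import automatically, the profile change is small. The following sketch assumes that the Import-PsHistory function and the $PSHistory variable shown later in this post are already defined earlier in the profile:

```powershell
# Automatically load saved history at startup (sketch; assumes the
# Import-PsHistory function and $PSHistory variable are defined above).
if (Test-Path (Join-Path -Path $PSHistory -ChildPath history.xml))
{
    Import-PsHistory
}
# Clear-History discards the imported commands if they are not wanted.
```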

Here is my function to export command history to a history.xml file and exit Windows PowerShell.

Function Exit-PsWithHistory
{
 If(!(Test-Path $PSHistory))
  {New-Item -path $PSHistory -itemtype directory}
 Get-History -count $MaximumHistoryCount | Export-Clixml -Path (Join-Path $PSHistory -child history.xml)
 Exit
} #end function Exit-PsWithHistory

The Exit-PsWithHistory function relies on the $PSHistory variable. This is a variable that I define in my Windows PowerShell profile, and it points to the folder I use to store my history.xml file. Here is the command that creates the $PSHistory variable.

$PSHistory = Join-path -path (split-path $PROFILE) -ChildPath history

So, I use the Test-Path cmdlet to see if there is a folder named History in my Windows PowerShell profile folder. If it does not exist, I create it by using the New-Item cmdlet.

If(!(Test-Path $PSHistory))
  {New-Item -path $PSHistory -itemtype directory}

I then use the Get-History cmdlet to get all of the items in my command history. By default, the Get-History cmdlet returns only 32 items from the history. If I want to get all of the commands in my command history, I have to specify a value for the Count parameter. The most logical thing to do is to use the $MaximumHistoryCount variable to specify this number. In this way, if I increase the maximum history count from the default value of 64 to another number, my function will always export all of the commands. I use the Export-CliXML cmdlet to export my command history into an .xml file, and I use Join-Path to create the path to my file. This command is shown here.

Get-History -count $MaximumHistoryCount | Export-Clixml -Path (Join-Path $PSHistory -child history.xml)

The last thing I do is call the exit command to exit Windows PowerShell.

The function to import my saved history.xml file appears here.

Function Import-PsHistory
{
 If(Test-Path $PSHistory)
 {
  Import-Clixml -Path (Join-Path -path $PSHistory -child history.xml) |
  Add-History
 }
} #end function Import-PsHistory

Once again, I use the Test-Path cmdlet to ensure that the history folder exists. If it does, I assume that a history.xml file exists. This is not a major problem, because the only reason the History folder would exist is that I created it, and if I created it, that should have happened when I was exporting a history.xml file.

If(Test-Path $PSHistory)

The next thing I do is import the xml file, and pipeline it to the Add-History cmdlet. Here is that portion of the function.

Import-Clixml -Path (Join-Path -path $PSHistory -child history.xml) |
  Add-History

I bump up my maximum history by assigning a new value to the variable. Rather than typing a big, long, complicated number, I simply use the kb administrative constant to allow me to create 2048 history entries. This command is shown here.

$MaximumHistoryCount = 2kb

You might wonder, "How large can I create the $maximumHistoryCount variable?" To determine the maximum allowed value, I use the Get-Variable cmdlet. This command is shown here.

Get-Variable MaximumHistoryCount | select -ExpandProperty attributes

 

                 MinRange                                MaxRange TypeId

                 --------                                -------- ------

                     1024                                   32768 System.Management.Automation.Validat...

One thing to keep in mind: when you use the Get-Variable cmdlet, do not include the dollar sign prefix. If I do include the dollar sign prefix, I obtain a rather cryptic error that states Windows PowerShell cannot find a variable with the name of 2048. Because I recognize that number as the value to which I had increased $MaximumHistoryCount, the error makes sense: the dollar sign causes the variable to be expanded to its value before Get-Variable ever sees it. I then drop the dollar sign, and get back the PSVariable object. I send the variable to the Format-List cmdlet, and I choose all of the properties. The result reveals that there is an object hiding in the Attributes property. I then pipe the PSVariable object to the Select-Object cmdlet, and I use the ExpandProperty parameter to expand the object that is stored in the Attributes property. These commands are shown here.

get-variable maximumhistorycount

get-variable maximumhistorycount | fl *

get-variable maximumhistorycount | select -ExpandProperty attributes

The commands and their associated output are shown here.

Image of command output

In the following image, I show my profile with the two new functions, the two aliases, and the two variable assignments.

Image of script

To exit Windows PowerShell, I use the eps alias (my alias for the Export-PsWithHistory function). When I start Windows PowerShell, my profile runs, and it loads the functions, aliases, and variables into memory. I then type the Import-PsHistory command (I can also use the ips alias). After I do that, I populate my history with all of my previous commands. I use the h (alias for Get-History) command to see what commands I now have available to me in my command history. This sequence of commands is shown in the image that follows.

Image of command output

There is one downside to this technique: Imported commands (via Add-History) do not populate the up and down arrows. But, dude (or dudette), with 2048 potential commands in the command history, that would be a ridiculous amount of Up and Down arrowing; that is why there are single letter aliases for Get-History and for Invoke-History.

Join me tomorrow for the Weekend Scripter when I will explore more coolness related to Windows PowerShell.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Find Out Who has Permissions to a Share


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to determine who has permissions to a shared folder.

Microsoft Scripting Guy, Ed Wilson, is here. It is finally the weekend here in Charlotte, North Carolina in the United States. It has seemed like a rather long week, in part due to several meetings, plus I took time out to record a show for TechNet Radio. Ironically, considering that we had two days off this week, the time has seemed compressed, and therefore, longer. Anyway, as I said earlier, it’s the weekend!

As someone who has written several thousand VBScript scripts in my lifetime, I do not consider it bad form to recycle some of that content when it comes time to create a new Windows PowerShell script. After all, when I leave the neighborhood of Windows PowerShell modules, snap-ins, cmdlets, and associated technology, the answer to a problem may still involve Windows Management Instrumentation (WMI) or some other interface.

I was looking around to figure out a way to find out who has permissions to a particular shared folder on a remote server. Of course, I can target the Computer Management snap-in to a remote computer, but that is really slow. In fact, because the account I am logged on with does not have permissions to the remote server, it took me nearly 15 minutes to finally connect to the remote server (including several minutes of watching snap-ins initialize, the event log initialize, and a whole bunch of other hourglasses).

Surely, there has got to be a better way. Then it dawned on me…

I wrote a script to do this in the past…yes, long ago, I used to write VBScript code. Because I used Windows Search to index the full content of both VBS files and PS1 files, it was a simple matter to find the script I sought. The script I found is a VBScript file. I wrote the script on July 17, 2005 (as a matter of fact, I was in Montreal when I wrote the script).

Image of script

I have written several Hey, Scripting Guy! Blog posts that talk about migrating VBScript code to Windows PowerShell code. Here, I am not really migrating VBScript code to Windows PowerShell code; rather, I am taking the essential hard part of the VBScript code and using it in Windows PowerShell. In fact, the Associators Of WMI query is essentially the same (this is great news because there are numerous Hey, Scripting Guy! Blog posts that feature an Associators Of WMI query; in one article in particular, I talk specifically about issues involved in migrating WMI queries to Windows PowerShell). It is a bit difficult to translate the query because of the concatenation and the line continuation characters in the VBScript script.

The complete Get-ShareUsers.ps1 script is shown here. For ease of use and copying, I have uploaded this script to the Scripting Guys Script Repository.

Get-ShareUsers.ps1

$cred = Get-Credential -Credential iammred\administrator

$share = "data"

$cn = "hyperv1"

$query = "Associators of {win32_LogicalShareSecuritySetting='$share'}

 Where resultclass = win32_sid"

 Get-WmiObject -query $query -cn $cn -cred $cred |

 Select-Object -Property @{LABEL="User";EXPRESSION=

  {"{0}\{1}" -f $_.ReferencedDomainName, $_.AccountName}}, SID

The first thing I do is use the Get-Credential cmdlet to get the credentials to use to make the remote connection. I specify the user name and domain, but this is not a requirement in the script. You can have it prompt you and not supply any information to it by default. To do this, use Get-Credential with no parameters. The code for this is shown here.

$cred = Get-Credential

The credential dialog box is shown in the following image.

Image of dialog box

Next, I add two additional variables. The first one is the name of the share to retrieve security information about, and the second variable is the name of the remote computer. In the code that follows, I assign a value of data and a computer name of hyperv1 to the two variables. These two lines of code are shown here.

$share = "data"

$cn = "hyperv1"

Obviously, you will need to modify these two lines of code prior to using the script. An improvement to the script would be to prompt for the name of the share and the name of the computer. This would keep me from having to edit the script prior to running it.
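That prompting improvement can be sketched with Read-Host. This is my illustration, not part of the original script, and the Get-ShareTarget function name and prompts are assumptions. The Read-Host defaults fire only when the caller omits the corresponding parameter:

```powershell
# Hypothetical helper: prompt for the share and computer names instead
# of hard-coding them in the script.
Function Get-ShareTarget
{
  Param(
    [string]$share = (Read-Host -Prompt "Share name"),
    [string]$cn    = (Read-Host -Prompt "Computer name")
  )
  # Return both values so the rest of the script can use them
  New-Object PSObject -Property @{ Share = $share; ComputerName = $cn }
}
```

Calling Get-ShareTarget -share data -cn hyperv1 skips the prompts entirely, so the same helper works interactively and in scripts.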

Next, I have the Associators Of WMI query. In this query, I look for associations between the Win32_LogicalShareSecuritySetting WMI class and the Win32_Sid WMI class. Here is the code that performs this action.

$query = "Associators of {win32_LogicalShareSecuritySetting='$share'}

 Where resultclass = win32_sid"

It is now time to get the WMI information. To do this, I use the Get-WMIObject cmdlet. This command is shown here. (One thing to keep in mind is that alternate credentials cannot be supplied to a local WMI connection. This is a limitation of WMI, not Windows PowerShell.)

Get-WmiObject -query $query -cn $cn -cred $cred

I use the Select-Object cmdlet to work with three properties: ReferencedDomainName, AccountName, and SID. I use a hash table to create a custom property called User that displays the user name and domain name in the form domainname\username. This portion of the script is shown here.

Select-Object -Property @{LABEL="User";EXPRESSION=

  {"{0}\{1}" -f $_.ReferencedDomainName, $_.AccountName}}, SID
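The calculated-property pattern is not specific to WMI results. Here is a self-contained sketch; the account object below is made up for illustration, and its SID value is a placeholder:

```powershell
# A stand-in object with the same property names the WMI query returns;
# the SID here is a placeholder, not a real account SID.
$acct = New-Object PSObject -Property @{
  ReferencedDomainName = 'IAMMRED'
  AccountName          = 'administrator'
  SID                  = 'S-1-5-21-0-0-0-500'
}

# The hash table defines a calculated property named User, built with
# the -f format operator: {0} = domain name, {1} = account name.
$acct | Select-Object -Property @{LABEL="User";EXPRESSION=
  {"{0}\{1}" -f $_.ReferencedDomainName, $_.AccountName}}, SID
```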

When I run the script via the Windows PowerShell ISE, the user names and their associated SID appear in the output pane. This output is shown in the following image.

Image of Windows PowerShell ISE

To double check that the script works properly, I use the Computer Management tool, and I examine the share properties. This appears in the following image.

Image of properties

Well, that is about all there is to using Windows PowerShell to perform an Associators Of query to retrieve information about user’s access to a shared folder. Join me tomorrow as I modify this script a bit to make it more user friendly. Until then, have a great day.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Modify a PowerShell Script to Accept Piped Input


Summary: Learn how to modify a Windows PowerShell script and turn it into an advanced function that accepts piped input and has complete Help.

Microsoft Scripting Guy, Ed Wilson, is here. Some things end up being way more complicated than they might appear at first glance. Today’s script is a case in point. I decided I wanted to change the script from yesterday, and put it into a function to make it more portable.

I also decided to add a few other features. I ended up spending the entire day working on the script. Of course, there was a meeting (I had to record the TechNet Radio podcast; you can find it on the Scripting with Windows PowerShell site, just below the PowerShell Quiz). The complete script appears in the Script Center Script Repository.

The first thing I did was use the Function keyword—I specified a name and opened a pair of curly brackets (script block). I then added comment-based Help by using my Add-Help function from my way cool Windows PowerShell ISE profile. This portion of the script is shown here.

Function Get-ShareUsers

{

  <#

   .Synopsis

    This returns user name and sid of people with permission to a share

   .Description

    This function returns user name and sid of users with permission to

    a share. It works locally or remotely, and accepts alternate credentials

    for remote connections

   .Example

    "data" | Get-ShareUsers

    Returns information about who has access to the data share on the

    local computer

   .Example

    "data","shared" | Get-ShareUsers -cn hyperv1 -credential administrator

    Prompts for credentials of administrator on a remote server named hyperv1

    and returns users with permissions on the shared and the data shares

   .Example

    $shares = @()

    gwmi win32_share -cn hyperv1 -cred (Get-Credential) -Filter "type=0" |

    % { $shares += $_.name.tostring() }

    $shares | Get-ShareUsers -cn hyperv1 -cred administrator

    This example queries WMI to create an array of share names on a remote server

    It then pipelines that array to the Get-ShareUsers function where it connects

    to a remote server named hyperv1 using administrator credentials

   .Parameter Credential

    The user name to use for Get-Credential when connecting remotely

   .Parameter Share

    The name of the share to return information about

    .Parameter cn

     The name of the remote computer.

   .Notes

    NAME:  Get-ShareUsers

    AUTHOR: ed wilson, msft

    LASTEDIT: 11/22/2011 18:04:10

    KEYWORDS: Windows PowerShell, Scripting Guy!, Weekend Scripter, storage,

    shared folders and mapped drives, security

    HSG: WES-11-27-11

   .Link

     Http://www.ScriptingGuys.com

 #Requires -Version 2.0

 #>

Next, I specify that I want to use CmdletBinding. I also make the Share parameter mandatory, and I set it up to accept piped input. The CmdletBinding and the parameter section of the script are shown here.

[CmdletBinding()]

 Param (

  [string]$credential,

  [Parameter(Mandatory = $true,Position = 0,valueFromPipeline=$true)]

  [string]$share,

  [string]$cn = $env:COMPUTERNAME

 )
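Here is a minimal, generic function (the name is mine, not from the script) that shows what ValueFromPipeline buys you: each piped string binds to the parameter, and the Process block runs once per item.

```powershell
Function Test-PipedShare
{
  [CmdletBinding()]
  Param (
    [Parameter(Mandatory = $true, Position = 0, ValueFromPipeline = $true)]
    [string]$share
  )
  # The Process block executes once for every object in the pipeline
  PROCESS { "Processing share: $share" }
}

"data","shared" | Test-PipedShare
# Processing share: data
# Processing share: shared
```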

In the Begin portion of my function, I set up my splatting. I create an empty hash table named RemoteParam, and then I add the Credential and the ComputerName values to the hash table. This portion of the script is shown here.

BEGIN

 {

  $remoteParam = @{}

  if($credential) { $remoteParam.add( "Credential", (Get-Credential $credential))

                   $remoteParam.Add( "Computername", $cn) }

 }

I then create the Process portion of the function. I use the automatic variable $input to display the current item that is piped to the function. I then use the $query from yesterday’s script. This portion of the Process block is shown here.

PROCESS {

   $input

   $query = "Associators of {win32_LogicalShareSecuritySetting='$share'}

   Where resultclass = win32_sid"

I modify the Get-WmiObject command to use splatting to accept the ComputerName and Credential parameters. I only pass these if the Credential parameter appears on the command line. Here is the modified Get-WmiObject command.

Get-WmiObject -query $query @RemoteParam
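Splatting is worth a standalone illustration. In this sketch, Show-Connection is a stand-in function I made up in place of Get-WmiObject; the @ sigil expands the hash table into named parameters, so the optional values are passed only when the hash table contains them:

```powershell
# Stand-in for a cmdlet that takes optional connection parameters.
Function Show-Connection
{
  Param([string]$ComputerName = 'localhost', [string]$UserName = 'current user')
  "Connecting to {0} as {1}" -f $ComputerName, $UserName
}

# Build the hash table conditionally, mirroring the Begin block logic.
$remoteParam = @{}
$useAlternate = $true
if ($useAlternate)
{
  $remoteParam.Add("ComputerName", "hyperv1")
  $remoteParam.Add("UserName", "iammred\administrator")
}

Show-Connection @remoteParam
# Connecting to hyperv1 as iammred\administrator
```

If $useAlternate were $false, the hash table would stay empty and the function's defaults would apply, which is exactly why splatting suits optional credentials.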

The remainder of the function is the same as yesterday's script, and therefore I will not go over it. It is shown here for the sake of completeness.

Select-Object -Property @{LABEL="User";EXPRESSION=

    {"{0}\{1}" -f $_.ReferencedDomainName, $_.AccountName}}, SID

   }

} #end function Get-ShareUsers

To use the Get-ShareUsers function, I pipe a shared folder name to it. Here is an example command.

"Data" | Get-ShareUsers -credential administrator -cn hyperv1

The command and its associated output are shown in the following image.

Image of command output

I can pipe an array of share names to the function. This is shown here.

"shared","data" | Get-ShareUsers -credential administrator -cn hyperv1

The command and its associated output are shown in the following image.

Image of command output

One of the cool things to do with this function is to use WMI to create an array of share names. Here is some code that does that.

$shares = @()

gwmi win32_share -cn hyperv1 -cred $cred -Filter "type=0" |

% { $shares += $_.name.tostring() }

When I have the shares in an array, I can pipe the array to the Get-ShareUsers function. The syntax to do this is shown here.

$shares | Get-ShareUsers -credential administrator -cn hyperv1

The commands and the associated output are shown in the following image.

Image of command output

Well, this is about all for messing around with shares and who has permissions to them. Remember, the complete script is in the Scripting Guys Script Repository.

If you are going to be in Pittsburgh, Pennsylvania on December 13, 2011, you should check out the Pittsburgh PowerShell Users Group meeting. The Scripting Wife and I will be there, and I will be speaking about Windows PowerShell Best Practices. It will be awesome!

Join me tomorrow as I begin a new week on the Hey, Scripting Guy! Blog. Oh, by the way, have you noticed that now we have more Windows PowerShell articles on the Hey, Scripting Guy! Blog than VBScript articles? This is cool!

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Four Easy Ways to Import CSV Files to SQL Server with PowerShell


Summary: Learn four easy ways to use Windows PowerShell to import CSV files into SQL Server.

Microsoft Scripting Guy, Ed Wilson, is here.  I was chatting this week with Microsoft PowerShell MVP, Chad Miller, about the series of blogs I recently wrote about using CSV files. He thought a helpful addition to the posts would be to talk about importing CSV files into a SQL Server. I most heartily agreed. Welcome to Guest Blogger Week. We will start off the week with a bang-up article by Chad Miller. Chad has previously written guest blogs for the Hey, Scripting Guy! Blog. Here is a little information about Chad:

Chad Miller is a SQL Server database admin and the senior manager of database administration at Raymond James Financial. In his spare time, he is the project coordinator and developer of the CodePlex project SQL Server PowerShell Extensions (SQLPSX). Chad leads the Tampa Windows PowerShell User Group, and he is a frequent speaker at SQL Saturdays and Code Camps.

Contact information:
Blog: Sev17
Twitter: cmille19

Importing CSV files into SQL Server

Windows PowerShell has built-in support for creating CSV files by using the Export-CSV cmdlet. However, the creation of a CSV file is usually only a short stop in an overall process that includes loading the file into another system. In this post, we'll look at a few script-based approaches to import CSV data into SQL Server. Note: SQL Server includes a component specifically for data migration called SQL Server Integration Services (SSIS), which is beyond the scope of this article.

T-SQL BULK INSERT command

The T-SQL BULK INSERT command is one of the easiest ways to import CSV files into SQL Server. The BULK INSERT command requires a few arguments to describe the layout of the CSV file and the location of the file. Let's look at an example of creating a CSV file by using Export-CSV, and then importing the information into a SQL Server table by using BULK INSERT.

Requirements

  • Sysadmin, or insert and bulkadmin, permissions on SQL Server
  • Local access to SQL Server

Setup

1. Download the following script: Invoke-SqlCmd2.ps1

2. Create a disk space table by copying the following code into SQL Server Management Studio.

Note: The example uses a database named "hsg."

CREATE TABLE dbo.diskspace(

UsageDate datetime,

SystemName varchar(50),

Label varchar(50),

VolumeName varchar(50),

Size decimal(6,2),

Free decimal(6,2),

PercentFree decimal(5,2)

)

The following image shows the command in SQL Server Management Studio.

Image of query

3. Save the following script as Get-DiskSpaceUsage.ps1, which will be used as the demonstration script later in this post.

param($ComputerName=".")

Get-WmiObject -computername "$computername" Win32_Volume -filter "DriveType=3" | foreach {

new-object PSObject -property @{

UsageDate = $((Get-Date).ToString("yyyy-MM-dd"))

SystemName = $_.SystemName

Label = $_.Label

VolumeName = $_.Name

Size = $([math]::round(($_.Capacity/1GB),2))

Free = $([math]::round(($_.FreeSpace/1GB),2))

PercentFree = $([math]::round((([float]$_.FreeSpace/[float]$_.Capacity) * 100),2))

}

} | Select UsageDate, SystemName, Label, VolumeName, Size, Free, PercentFree

Now we will use the script Get-DiskSpaceUsage.ps1 that I presented earlier. It lists information about disk space, and it stores the information in a CSV file.

./get-diskspaceusage.ps1 | export-csv -Path "C:\Users\Public\diskspace.csv" -NoTypeInformation

The generated CSV file shows that Export-CSV includes a text delimiter of double quotes around each field:

"UsageDate","SystemName","Label","VolumeName","Size","Free","PercentFree"

"2011-11-20","WIN7BOOT","RUNCORE SSD","D:\","59.62","31.56","52.93"

"2011-11-20","WIN7BOOT","DATA","E:\","297.99","34.88","11.7"

"2011-11-20","WIN7BOOT",,"C:\","48","6.32","13.17"

"2011-11-20","WIN7BOOT","HP_TOOLS","F:\","0.1","0.09","96.55"

Although many programs handle CSV files with text delimiters (including SSIS, Excel, and Access), BULK INSERT does not. To use BULK INSERT without a lot of work, we'll need to remove the double quotes. We can use a quick-and-dirty approach of simply replacing all the quotes in the CSV file. In the blog post Remove Unwanted Quotation Marks from CSV Files by Using PowerShell, the Scripting Guys explain how to remove double quotes. This method can be used in circumstances where you know it won't cause problems. How do you know? Well, the data generated from our Get-DiskSpaceUsage script should never contain double quotes or commas. So here's the code to remove the double quotes:

(Get-Content C:\Users\Public\diskspace.csv) | foreach {$_ -replace '"'} | Set-Content C:\Users\Public\diskspace.csv

UsageDate,SystemName,Label,VolumeName,Size,Free,PercentFree

2011-11-20,WIN7BOOT,RUNCORE SSD,D:\,59.62,31.56,52.93

2011-11-20,WIN7BOOT,DATA,E:\,297.99,34.88,11.7

2011-11-20,WIN7BOOT,,C:\,48,6.32,13.17

2011-11-20,WIN7BOOT,HP_TOOLS,F:\,0.1,0.09,96.55
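The -replace operator deserves a quick standalone check: given a regex pattern and no replacement argument, it simply deletes every match. The sample line below is copied from the CSV output above:

```powershell
# With only a pattern argument, -replace substitutes the empty string,
# stripping every double quote from the line.
$line = '"2011-11-20","WIN7BOOT","DATA","E:\","297.99","34.88","11.7"'
$line -replace '"'
# 2011-11-20,WIN7BOOT,DATA,E:\,297.99,34.88,11.7
```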

Now we are ready to import the CSV file as follows:

. .\Invoke-SqlCmd2.ps1

 

$query = @"

BULK INSERT hsg.dbo.diskspace FROM 'C:\Users\Public\diskspace.csv'

WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

"@

 

Invoke-SqlCmd2 -ServerInstance "$env:computername\sql1" -Database hsg -Query $query

The following data shows that our CSV file was successfully imported.

UsageDate               SystemName  Label        VolumeName  Size    Free   PercentFree
---------               ----------  -----        ----------  ----    ----   -----------
11/20/2011 12:00:00 AM  WIN7BOOT    RUNCORE SSD  D:\         59.62   31.56  52.93
11/20/2011 12:00:00 AM  WIN7BOOT    DATA         E:\         297.99  34.88  11.70
11/20/2011 12:00:00 AM  WIN7BOOT                 C:\         48.00   6.32   13.17
11/20/2011 12:00:00 AM  WIN7BOOT    HP_TOOLS     F:\         0.10    0.09   96.55

BULK INSERT works reasonably well, and it is very simple. However, there are some drawbacks, including:

  • You need elevated permissions on SQL Server.
  • BULK INSERT doesn't easily understand text delimiters.
  • Using a UNC path to files requires additional setup, as documented under Permissions on the BULK INSERT site.

For these reasons, let's look at some alternate approaches.

Before there was Windows PowerShell, there was LogParser

LogParser is a command-line tool and scripting component that was originally released by Microsoft in the IIS 6.0 Resource Kit. LogParser provides query access to different text-based files and output capability to various data sources including SQL Server. Even though this little tool hasn't been updated since 2005, it has some nice features for loading CSV files into SQL Server.

Setup

Download and install LogParser 2.2.

LogParser can do a few things that we couldn't easily do by using BULK INSERT, including:

  • Automatically create a table based on the CSV layout
  • Handle the text delimiter of double quotes

Note: CSV files do not need to be local.

Using LogParser

You can use the LogParser command-line tool or a COM-based scripting interface. Let's look at examples of both.

LogParser command-line tool

LogParser requires some special handling, which is why we use Start-Process; some switches and arguments are difficult to work with when run directly in Windows PowerShell. Also, the Windows PowerShell ISE will not display output from LogParser commands that are run via the command-line tool. Here is the syntax for running a command to generate and load a CSV file:

./get-diskspaceusage.ps1 | export-csv -Path "C:\Users\Public\diskspace.csv" -NoTypeInformation -Force

#Uncomment/comment set-alias for x86 vs. x64 system

#set-alias logparser "C:\Program Files\Log Parser 2.2\LogParser.exe"

set-alias logparser "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe"

start-process -NoNewWindow -FilePath logparser -ArgumentList @"

"SELECT * INTO diskspaceLP FROM C:\Users\Public\diskspace.csv" -i:CSV -o:SQL -server:"Win7boot\sql1" -database:hsg -driver:"SQL Server" -createTable:ON

"@

Looking at SQL Server, we see that our newly created table contains the CSV file:

Filename                       RowNumber  UsageDate   SystemName  Label        VolumeName  Size    Free   PercentFree
--------                       ---------  ---------   ----------  -----        ----------  ----    ----   -----------
C:\Users\Public\diskspace.csv  2          2011-11-20  WIN7BOOT    RUNCORE SSD  D:\         59.62   31.56  52.93
C:\Users\Public\diskspace.csv  3          2011-11-20  WIN7BOOT    DATA         E:\         297.99  34.88  11.7
C:\Users\Public\diskspace.csv  4          2011-11-20  WIN7BOOT                 C:\         48      6.32   13.16
C:\Users\Public\diskspace.csv  5          2011-11-20  WIN7BOOT    HP_TOOLS     F:\         0.1     0.09   96.55

The CreateTable switch will create the table if it does not exist, and if it does exist, it will simply append the rows to the existing table. Also notice that we got two new columns, Filename and RowNumber, which could come in handy if we are loading a lot of CSV files. You can eliminate the Filename and RowNumber columns by specifying the column list in the Select statement, as we'll see in a moment.

LogParser COM scripting

Using the COM-based approach to LogParser is an alternative to the command line. Although the COM-based approach is a little more verbose, you don't have to worry about wrapping the execution in the Start-Process cmdlet. The COM-based approach also handles the output issue with the Windows PowerShell ISE. Here is code to work with the COM object:

$logQuery = new-object -ComObject "MSUtil.LogQuery"

$inputFormat = new-object -comobject "MSUtil.LogQuery.CSVInputFormat"

$outputFormat = new-object -comobject "MSUtil.LogQuery.SQLOutputFormat"

$outputFormat.server = "Win7boot\sql1"

$outputFormat.database = "hsg"

$outputFormat.driver = "SQL Server"

$outputFormat.createTable = $true

$query = "SELECT UsageDate, SystemName, Label, VolumeName, Size, Free, PercentFree INTO diskspaceLPCOM FROM C:\Users\Public\diskspace.csv"

$null = $logQuery.ExecuteBatch($query,$inputFormat,$outputFormat)

The main drawback to using LogParser is that it requires, well…installing LogParser. For this reason, let's look at one more approach.

Use Windows PowerShell to collect server data and write to SQL Server

In my previous Hey, Scripting Guy! post, Use PowerShell to Collect Server Data and Write to SQL, I demonstrated some utility functions for loading any Windows PowerShell data into SQL Server. Let's revisit this solution using the CSV file example:

Setup

Download the following scripts:

Run the following code to create a CSV file, convert to a data table, create a table in SQL Server, and load the data:

. .\out-datatable.ps1

. .\Add-SqlTable.ps1

. .\write-datatable.ps1

. .\Invoke-SqlCmd2.ps1

$dt = .\Get-DiskSpaceUsage.ps1 | Out-DataTable

Add-SqlTable -ServerInstance "Win7boot\Sql1" -Database "hsg" -TableName diskspaceFunc -DataTable $dt

Write-DataTable -ServerInstance "Win7boot\Sql1" -Database "hsg" -TableName "diskspaceFunc" -Data $dt

invoke-sqlcmd2 -ServerInstance "Win7boot\Sql1" -Database "hsg" -Query "SELECT * FROM diskspaceFunc" | Out-GridView

The following image shows the resulting table in Grid view.

Image of table

The observant reader will notice that I didn't write the information to a CSV file. Instead, I created an in-memory data table that is stored in my $dt variable. By using this approach, there was no need to create a CSV file. But for completeness, let's apply the solution to our CSV-loading use case:

. .\out-datatable.ps1

. .\Add-SqlTable.ps1

. .\write-datatable.ps1

. .\Invoke-SqlCmd2.ps1

./get-diskspaceusage.ps1 | export-csv -Path "C:\Users\Public\diskspace.csv" -NoTypeInformation -Force

$dt = Import-Csv -Path "C:\Users\Public\diskspace.csv" | Out-DataTable

Add-SqlTable -ServerInstance "Win7boot\Sql1" -Database "hsg" -TableName diskspaceFunc -DataTable $dt

Write-DataTable -ServerInstance "Win7boot\Sql1" -Database "hsg" -TableName "diskspaceFunc" -Data $dt

This post demonstrated three approaches to loading CSV files into tables in SQL Server by using a scripted approach. The approaches range from using the very simple T-SQL BULK INSERT command, to using LogParser, to using a Windows PowerShell function-based approach.

Thank you, Chad, for sharing this information with us. It looks like your last four scripts have the makings of an awesome NetAdminCSV module.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

Use a Free PowerShell Snap-in to Easily Manage App-V Server


Summary: Windows PowerShell Microsoft MVP, Sherif Talaat, teaches how to manage App-V Server with a free Windows PowerShell snap-in.

Microsoft Scripting Guy, Ed Wilson, is here. Guest Blogger Week continues with a special treat. We have Windows PowerShell Microsoft MVP, Sherif Talaat, with us today. Here is a little bit about him.

Photo of Sherif Talaat

Sherif Talaat is an IT pro with more than six years of experience in various Microsoft technologies. He specializes in virtualization, from desktops to datacenters. Sherif holds several Microsoft certifications, and he has been a Microsoft Most Valuable Professional (MVP) for Windows PowerShell since 2009. Sherif used to write about Windows PowerShell in Arabic on his blog, The Arabian PowerShell. He is currently developing Windows PowerShell scripts for the Microsoft App-V Server SnapIn.

Contact information:
Twitter: @SherifTalaat

 

I totally believe that the faster you adopt and learn Windows PowerShell, the faster you adopt the new technologies and become a guru in your job. Believe it or not, we are almost in 2012, and Windows PowerShell is booming and becoming a MUST in Microsoft and non-Microsoft products. Honestly, I’m not surprised at all, not only because Windows PowerShell is part of the Microsoft Common Engineering Criteria (CEC), but also because it’s a very powerful automation engine and it is easy to use.

I have to admit that I wasn’t able to deal with SharePoint before SharePoint 2010. Do you know why?! Yes, you’re right…because there was no Windows PowerShell in the earlier versions. Today, Windows PowerShell is a master key for most technology doors, and it is the tool behind a successful system administrator.

Unfortunately, there are a few products that still have no Windows PowerShell cmdlets, which is a problem for Windows PowerShell lovers. We as a community are trying to fill this gap by developing custom cmdlets, modules, and snap-ins to provide you with the Windows PowerShell commands that you want for those products.

Microsoft Application Virtualization (App-V) Server is one of the products that shipped without Windows PowerShell (or even an SDK or APIs) to provide a way to automate the complex administrative tasks.

What is App-V?

For those of you who do not know what App-V is…

App-V is another type of virtualization technology at the application level. Simply, you host the application on the server and then publish it to client computer desktops. It is used for faster application deployment and maintenance and also to solve application conflict issues.

Getting started with the Windows PowerShell snap-in for App-V

A couple of problems with App-V Server are that it requires too many steps to do very simple tasks, and the error messages are very generic. So you can spend a lot of time repeating steps until you fix your problems, without knowing what the root cause was.

The Microsoft App-V Server SnapIn is a CodePlex project that provides a set of Windows PowerShell cmdlets that enable IT admins to easily manage and automate complex tasks in App-V Server. The current release of this snap-in contains more than 20 cmdlets. These cmdlets cover around 80% of GUI wizards and tasks. The added plus is that you get more details on what is happening in the background.

App-V snap-in cmdlets

App-V cmdlets are categorized to be similar (as much as possible) to the App-V Console. Each cmdlet category is shown in the left pane, and each cmdlet is reflected as an action in the right pane.

Image of cmdlets

Here is a table of cmdlets that illustrates this arrangement.

Category            Cmdlets
--------            -------
System Options      Get-AppVSystemOptions, Set-AppVSystemOptions
Packages            Get-AppVPackages, New-AppVPackage, Remove-AppVPackage
Application Groups  Get-AppVApplicationGroup, New-AppVApplicationGroup, Remove-AppVApplicationGroup
Applications        Get-AppVApplications, New-AppVApplication, Remove-AppVApplication, Set-AppVApplicationPublishingSettings
Administrators      Get-AppVAdministrators, New-AppVAdministrator, Remove-AppVAdministrator
Server Groups       Get-AppVServerGroup, New-AppVServerGroup, Remove-AppVServerGroup
Servers             Get-AppVServers, New-AppVServer, Remove-AppVServer
Providers           Get-AppVProviders, Remove-AppVProvider

Now let’s look at two examples that use the App-V cmdlets.

Example 1

To use the App-V snap-in to configure the Default Content Path shared folder:

Set-AppVSystemOptions -DefaultContentPath \\AppVServer\ContentFolder\

Example 2

To use the App-V snap-in to publish Adobe Reader X to a specific user group by using App-V Server, you can follow these steps:

  • Create App-V package
  • Create App-V application
  • Configure App-V application access
  • Configure App-V application publishing settings

This script is shown here.

New-AppVPackage -UNCpath "\\APPVSERVERNAME\Content\Adobe Reader X"

$PackageID = (Get-AppVPackages -Name *Reader*).Package_ID

$ServerGroupID = (Get-AppVServerGroup -Name *Default*).ID

New-AppVApplication -UNCpath "\\APPVSERVERNAME\Content\Adobe Reader X" -OSDfile "Adobe Reader X 10.1.0.534.osd" -Package_ID $PackageID -ServerGroupID $ServerGroupID -AccessGroups "$env:USERDOMAIN\AppVUsers"

Set-AppVApplicationPublishingSettings -AppName "Adobe Reader X" -Desktop $true -StartMenu $true
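As a quick sanity check, the Get- cmdlets from the table above can confirm that the package and application were created. (The -Name parameter on Get-AppVApplications is an assumption here, mirroring the Get-AppVPackages call used earlier.)

```powershell
# Confirm the package exists (same wildcard lookup used earlier in this post).
Get-AppVPackages -Name *Reader*

# Confirm the application was created; -Name is assumed to work like Get-AppVPackages.
Get-AppVApplications -Name "Adobe Reader X"
```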

Thank you, Sherif. This is great information and an excellent introduction to a very powerful add-on to App-V. I can’t wait to download it and to start playing with the snap-in. Most excellent!

Guest Blogger Week will continue tomorrow when we will have Ken McFerron, who is the president of the Pittsburgh, Pennsylvania PowerShell Users Group. The Pittsburgh PowerShell Users Group will have their first meeting on December 13, 2011, and the Scripting Wife and I will be there. I will be making a presentation. It will be lots of fun, and I hope to see you there.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

Use PowerShell to Find and Remove Inactive Active Directory Users


Summary: Guest blogger, Ken McFerron, discusses how to use Windows PowerShell to find and to disable or remove inactive Active Directory users.

Microsoft Scripting Guy, Ed Wilson, is here. One of the highlights of our trip to Canada—well, there were lots of highlights—was coming through Pittsburgh and having dinner with Ken and his wife. When the Scripting Wife and I first met Ken in person (at the Windows PowerShell deep dive in Vegas), we were impressed with Ken's knowledge and enthusiasm (although the Scripting Wife already knew Ken from the PowerScripting Podcast chat room, this was the first time I had met him). We later had a chance to see him at Atlanta TechStravaganza 2011. He is the founder of the Pittsburgh PowerShell Users Group (I am speaking in person at their first meeting on December 13, 2011), and he is extremely passionate about Windows PowerShell. Here is what Ken has to say about himself.

My name is Ken McFerron. I currently work as a senior system administrator, and I focus on Active Directory. I have been in the IT field since 1999, and I started using VBScript and Batch scripting shortly after. I have always enjoyed trying to automate as much as I can with my scripts. I was introduced to Windows PowerShell around 2008, and I have been trying to learn as much as I can about it since then. I use Windows PowerShell on a daily basis now, and I dread going back to troubleshoot or update old VBScript scripts—these usually end up getting converted to Windows PowerShell. I have been working on getting a Windows PowerShell users group started in the Pittsburgh area. On December 13, we will be having our first meeting. I cannot wait to get the group started and start sharing and learning more about Windows PowerShell with others in the area.

One big problem for companies that do not utilize an identity management system (such as Forefront Identity Manager 2010) is stale user accounts. I have seen companies that have thousands of accounts for users who have not logged into the domain in years, or at all. With Windows PowerShell and the Microsoft Active Directory (AD) module, the task of identifying and deleting these accounts is an easy one.

First we need to determine what we need to look for. Beginning with Active Directory in Windows Server 2003, there is an attribute called LastLogonTimeStamp, which is replicated between domain controllers every 9 to 14 days. The AD module also displays this attribute in an easy-to-read format called LastLogonDate. There are some instances when this attribute is not updated, so I also like to look at PasswordLastSet.
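As a side note, the raw lastLogonTimestamp attribute is stored as a 64-bit FILETIME value; the LastLogonDate property that the AD module surfaces is simply that value converted to a DateTime. A minimal sketch (the user name is hypothetical):

```powershell
Import-Module ActiveDirectory

# Retrieve both the raw attribute and the module's friendly property.
$user = Get-ADUser -Identity kenm -Properties lastLogonTimestamp, LastLogonDate

# Convert the 64-bit FILETIME to a readable local DateTime; this is the
# same value the module exposes as LastLogonDate.
[DateTime]::FromFileTime($user.lastLogonTimestamp)
$user.LastLogonDate
```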

So the first step is to query AD to find all the enabled accounts that have the attributes LastLogonTimeStamp and PasswordLastSet that are over 90 days old. Any users that have not logged on will not have a value for LastLogonDate. One way to do this is to use the Get-ADUser cmdlet, and then pipe the results to Where-Object to do the filtering as follows:

$90days = (get-date).adddays(-90)

get-aduser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -filter * -Properties lastlogondate, passwordlastset | Where-Object {($_.lastlogondate -le $90days -or $_.lastlogondate -notlike "*")-AND ($_.passwordlastset -le $90days) -AND ($_.Enabled -eq $True)} | Select-Object name, lastlogondate, passwordlastset

Doing it this way will work, but it is not the most efficient. By running Measure-Command on my virtual machine, you can see how long this took to complete for about 10,000 users.
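The timing mentioned here can be reproduced with a sketch like the following (the OU path matches this post's examples; the absolute numbers will vary with your directory size):

```powershell
$90days = (Get-Date).AddDays(-90)

# Time the Where-Object approach: every user object in the OU is retrieved
# from the domain controller and then filtered on the client.
Measure-Command {
    Get-ADUser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -Filter * -Properties lastlogondate, passwordlastset |
        Where-Object { ($_.lastlogondate -le $90days -or $_.lastlogondate -notlike "*") -and
                       ($_.passwordlastset -le $90days) -and ($_.Enabled -eq $True) }
}
```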

A better way to filter the users would be to remove the pipe to Where-Object, and use the following filter:

$90Days = (get-date).adddays(-90)

Get-ADUser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -filter {(lastlogondate -notlike "*" -OR lastlogondate -le $90days) -AND (passwordlastset -le $90days) -AND (enabled -eq $True)} -Properties lastlogondate, passwordlastset | Select-Object name, lastlogondate, passwordlastset

If we run Measure-Command again, we can see that the time has really decreased.

Now that we have a list of all the user accounts, we need to determine what to do with them. I like to disable the accounts first before I delete them. If you find that one of these accounts is needed, it is much easier to enable the account than to restore it. Some administrators like to move all of these user accounts to a separate OU, and disable all the accounts for X number of days before they delete them. This will work most of the time. But I do not like doing it because you can run into some issues. For example, you could run into people who have the same name. You cannot have identical distinguished names in AD, so if you try to move one, you will get an error message like this:

So I like to leave the accounts in place and update an attribute with the date that they were disabled. To keep it simple, I will use the Description attribute. When we determine how long to keep these accounts disabled, we can read this attribute and then delete any accounts that have been disabled for X number of days. To update the description attribute we would use the Set-ADUser cmdlet as follows:

Get-ADUser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -filter {lastlogondate -le $90days -AND passwordlastset -le $90days} -Properties lastlogondate, passwordlastset | set-aduser -Description ((get-date).toshortdatestring())

This will update the description, but not disable the account. So we need to disable the account as well. We can use the PassThru switch to update the description and disable each account.

Get-ADUser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -filter {lastlogondate -le $90days -AND passwordlastset -le $90days} -Properties lastlogondate, passwordlastset | set-aduser -Description ((get-date).toshortdatestring()) -PassThru | Disable-ADAccount

Now that we have all the accounts disabled, we need to delete them. We can use the Remove-ADObject cmdlet to delete the account, and then use Get-ADUser to read the Description attribute. To compare the date that the account was disabled to the current date, we can use Where-Object, as shown here:

$14days = (get-date).adddays(-14)

Get-Aduser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -Filter {enabled -eq $False} -properties description | Where { (get-date $_.Description) -le $14Days} | remove-adobject

Be very careful with this. The command that I have provided will prompt for every user before deleting the accounts. To get a list, you can use -WhatIf, or if you do not want to be prompted, you can use -Confirm:$False, as shown here:

Get-Aduser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -Filter {enabled -eq $False} -properties description | Where { (get-date $_.Description) -le $14Days} | remove-adobject -WhatIf

Get-Aduser -SearchBase "OU=User_Accounts,DC=DEVLAB,DC=LOCAL" -Filter {enabled -eq $False} -properties description | Where { (get-date $_.Description) -le $14Days} | remove-adobject -Confirm:$False

In summary, we opened this post with a couple of one-liners that can disable accounts for users who have not logged on or changed their passwords in the last 90 days. We then created a couple of additional one-liners to delete disabled accounts after 14 days. Now we can put everything together into a single script. I added a bit of code to handle common error conditions and to log accounts that are deleted and disabled, but the essential script is the four one-liners that we examined earlier. Here is the complete script:

#import the ActiveDirectory Module

Import-Module ActiveDirectory

#Create a variable for the date stamp in the log file

$LogDate = get-date -f yyyyMMddhhmm

#Sets the OU to do the base search for all user accounts, change for your env.

$SearchBase = "OU=User_Accounts,DC=DEVLAB,DC=LOCAL"

#Create an empty array for the log file

$LogArray = @()

#Sets the number of days to delete user accounts based on value in description field

$Disabledage = (get-date).adddays(-14)

#Sets the number of days to disable user accounts based on lastlogontimestamp and pwdlastset.

$PasswordAge = (Get-Date).adddays(-90)

#RegEx pattern to verify date format in user description field.

$RegEx = '^(0?[1-9]|1[012])[- /.](0?[1-9]|[12][0-9]|3[01])[- /.](20)\d\d$'

#Use ForEach to loop through all users with description date older than date set. Deletes the accounts and adds to log array.

ForEach ($DeletedUser in (Get-Aduser -searchbase $SearchBase -Filter {enabled -eq $False} -properties description ) ){

  #Verifies the description field is in the correct date format by matching the regular expression from above to prevent errors with other disabled users.

  If ($DeletedUser.Description -match $Regex){

    #Compares date in the description field to the DisabledAge set.

    If((get-date $DeletedUser.Description) -le $Disabledage){

      #Deletes the user object. This will prompt for each user. To suppress the prompt add "-confirm:$False". To log only add "-whatif".

      Remove-ADObject $DeletedUser

        #Create new object for logging

        $obj = New-Object PSObject

        $obj | Add-Member -MemberType NoteProperty -Name "Name" -Value $DeletedUser.name

        $obj | Add-Member -MemberType NoteProperty -Name "samAccountName" -Value $DeletedUser.samaccountname

        $obj | Add-Member -MemberType NoteProperty -Name "DistinguishedName" -Value $DeletedUser.DistinguishedName

        $obj | Add-Member -MemberType NoteProperty -Name "Status" -Value 'Deleted'

        #Adds object to the log array

        $LogArray += $obj

    }

  }

}

#Use ForEach to loop through all users with pwdlastset and lastlogontimestamp greater than date set. Also added users with no lastlogon date set. Disables the accounts and adds to log array.

ForEach ($DisabledUser in (Get-ADUser -searchbase $SearchBase -filter {((lastlogondate -notlike "*") -OR (lastlogondate -le $Passwordage)) -AND (passwordlastset -le $Passwordage) -AND (enabled -eq $True)} )) {

  #Sets the user objects description attribute to a date stamp. Example "11/13/2011"

  set-aduser $DisabledUser -Description ((get-date).toshortdatestring())

  #Disabled user object. To log only add "-whatif"

  Disable-ADAccount $DisabledUser

    #Create new object for logging

    $obj = New-Object PSObject

    $obj | Add-Member -MemberType NoteProperty -Name "Name" -Value $DisabledUser.name

    $obj | Add-Member -MemberType NoteProperty -Name "samAccountName" -Value $DisabledUser.samaccountname

    $obj | Add-Member -MemberType NoteProperty -Name "DistinguishedName" -Value $DisabledUser.DistinguishedName

    $obj | Add-Member -MemberType NoteProperty -Name "Status" -Value 'Disabled'

    #Adds object to the log array

    $LogArray += $obj

}

#Exports log array to CSV file in the temp directory with a date and time stamp in the file name.

$logArray | Export-Csv "C:\Temp\User_Report_$logDate.csv" -NoTypeInformation

Guest Blogger Week will continue tomorrow when Josh Gavant will talk about using SharePoint Web Services with Windows PowerShell to query for search results.

Thank you, Ken, and see you in a couple of weeks for the Pittsburgh PowerShell Users Group meeting.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Use SharePoint Web Services and PowerShell to Work with Search


Summary: Learn how to use the web services in SharePoint with Windows PowerShell to query for search results.

Microsoft Scripting Guy, Ed Wilson, is here. Today our Guest Blogger Week continues with an excellent post about using Windows PowerShell cmdlets with SharePoint. The post is written by a return guest, Josh Gavant. For those of you who might not recall from his previous posts, here is a little bit about Josh:

Josh is a premier field engineer (PFE) with Microsoft Services, and he specializes in SharePoint and Windows PowerShell. When he is not playing with computers, he enjoys music and running with his wonderful wife in beautiful Chicago.

Contact information:
Blog: Beside the Point
Twitter: @joshugav

Note: The script for today’s blog is posted in the Scripting Guys Script Repository.

Take it away Josh!

Many SharePoint services can be accessed directly by using web services calls. An advantage of utilizing web services instead of the SharePoint object model is that service calls work neatly and predictably from clients, with no need for any additional DLLs or installations. Customers who are building custom applications on top of SharePoint often take advantage of the web services in SharePoint to layer custom interfaces and services on top of SharePoint. We can also take advantage of these services via Windows PowerShell as an aid in testing and as a component of proper scripts.

To access the web services in SharePoint, we will utilize the New-WebServiceProxy cmdlet in Windows PowerShell, which automatically retrieves the Web Service Description Language (WSDL) document for a web service and dynamically builds a proxy that is able to connect to that service. Like other object frameworks (such as COM, .NET, and WMI), Windows PowerShell does us the favor of abstracting the differences between different types of objects. It then surfaces a web service in the same manner as other Windows PowerShell objects, with members like any other Windows PowerShell member.

Of the several SharePoint front-end web services that are available, we will focus on retrieving search results through QueryService. First, we’ll complete the simple task of building a proxy object. Then we’ll build up the necessary elements to send off a query through our proxy. Finally, we’ll wrap things up in a function to make life easier.

The only trick to remember when calling a SharePoint web service is that you need to authenticate yourself. So use the following paradigm to create a web service proxy for the Search service (substitute your value for $WebApplicationPath):

$WebApplicationPath = "<Path_To_WebApplication>"

$SearchPath = "/_vti_bin/Search.asmx"

$SearchWS = New-WebServiceProxy -Uri ($WebApplicationPath + $SearchPath) -UseDefaultCredential

Part of the purpose of creating the proxy in Windows PowerShell is to explore the interfaces that it offers. In the spirit of exploration, run the following commands:

$SearchWS | Get-Member

$SearchWS.Query

Note that the second command returns a MethodInfo object with information about the method to be called. Two methods of interest are returned by Get-Member from the Search service proxy: Query and QueryEx. Both take an XML document describing the query, but they differ in the results they return. Query returns results in XML form, and QueryEx returns results as an ADO.NET dataset. Both can be treated as first class objects in Windows PowerShell, but I like using ADO.NET better in Windows PowerShell, so we will use QueryEx.

Now we must build our QueryXML. When we have completed building our Query XML, we will call the QueryEx method. You can find the Microsoft.Search.Query schema on MSDN. We will use this form, relying on defaults for some excluded nodes as shown here:

$KeywordQuery = "Test SharePoint"

$Count = 10

$QueryXml = @"

<QueryPacket xmlns="urn:Microsoft.Search.Query" >

    <Query>

        <Context>

            <QueryText type="STRING">$KeywordQuery</QueryText>

        </Context>

        <Range>

            <Count>$Count</Count>

        </Range>

        <IncludeSpecialTermResults>false</IncludeSpecialTermResults>

        <PreQuerySuggestions>false</PreQuerySuggestions>

        <HighlightQuerySuggestions>false</HighlightQuerySuggestions>

        <IncludeRelevantResults>true</IncludeRelevantResults>

        <IncludeHighConfidenceResults>false</IncludeHighConfidenceResults>

    </Query>

</QueryPacket>

"@

Note the use of an expandable here-string. This gives me the best of both worlds—between the opening and closing lines, I need not worry about quotation marks, and yet I can use variables to specify values in the text. Originally, I wrote this script by using [xml] objects, but then I decided that using a string of XML would be easier.
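The difference is easy to see in isolation: an expandable here-string (@"…"@) substitutes variable values, while a literal here-string (@'…'@) keeps the text verbatim.

```powershell
$Count = 10

# Expandable here-string: $Count is replaced with its value.
$expandable = @"
<Count>$Count</Count>
"@

# Literal here-string: $Count stays as literal text.
$literal = @'
<Count>$Count</Count>
'@

$expandable   # → <Count>10</Count>
$literal      # → <Count>$Count</Count>
```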

In this case, I declared values for $KeywordQuery and $Count right before setting $QueryXml. When I wrap this into a function, $KeywordQuery and $Count will be the function’s parameters.

With my setup work out of the way, I’m ready to call the proxy’s method with my XML. The return from this method is an ADO.NET dataset, which is a collection of data tables. I have written the Query XML in such a way that there is only one table (RelevantResults) in this dataset; if it is written in other ways, there could be two or three tables. To be on the safe side, I retrieve the RelevantResults table from the set. Windows PowerShell will automatically enumerate through each data row in the data table and create an object that is based on the values of columns in the row. Here are the relevant commands:

$Results = $SearchWS.QueryEx( $QueryXml )

$Results.Tables["RelevantResults"]

The default output contains a number of properties, some of which may not be relevant to you. A nice set of properties to start with is returned by the following command:

$Results.Tables["RelevantResults"] | Format-Table Title, Author, ContentClass, Path

If you wanted to retrieve different properties from the search engine, you could modify the XML to specify any managed property in SharePoint Search (use Get-SPEnterpriseSearchMetadataManagedProperty to get a list). I will leave that as an exercise for you.
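For reference, listing the managed properties on the server side looks something like this sketch (run on a SharePoint server; the cmdlet names follow the SharePoint 2010 management shell):

```powershell
# Load the SharePoint snap-in if it is not already present in the session.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Enumerate the managed properties known to the Search service application.
$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $ssa |
    Sort-Object Name | Select-Object Name, ManagedType
```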

We have now presented all the steps to query SharePoint search via Windows PowerShell. Let us wrap it into a nice, easy-to-use function. The complete function is shown here. (I have also uploaded it to the Scripting Guys Script Repository for ease of copying.)

function Query-SPSearch {

    param(

        [Parameter(Mandatory=$true)]

        [String]

        $WebApplicationPath,

        [Parameter(Mandatory=$true)]

        [String]

        $KeywordQuery,

        [Parameter()]

        [Int32]

        $Count = 10

    )

$QueryXml = @"

<QueryPacket xmlns="urn:Microsoft.Search.Query" >

    <Query>

        <Context>

            <QueryText type="STRING">$KeywordQuery</QueryText>

        </Context>

        <Range>

            <Count>$Count</Count>

        </Range>

        <IncludeSpecialTermResults>false</IncludeSpecialTermResults>

        <PreQuerySuggestions>false</PreQuerySuggestions>

        <HighlightQuerySuggestions>false</HighlightQuerySuggestions>

        <IncludeRelevantResults>true</IncludeRelevantResults>

        <IncludeHighConfidenceResults>false</IncludeHighConfidenceResults>

    </Query>

</QueryPacket>

"@

    $ServicePath = "/_vti_bin/search.asmx"

 

    $SearchWS = New-WebServiceProxy -Uri ($WebApplicationPath + $ServicePath) -UseDefaultCredential

    $Results = $SearchWS.QueryEx( $QueryXml )

    # we excluded all other result sets, but just in case get the one we want:

    $Results.Tables["RelevantResults"]

}

Typical usage for this function would be as follows:

Query-SPSearch -WebApplicationPath "http://sharepoint10" -KeywordQuery "SharePoint test" -Count 20 | Format-Table Title, Author, Path

I hope this helps you get started down the road to discovering and utilizing the web services in SharePoint. Be sure to check out my SharePoint and Windows PowerShell posts on my Beside the Point blog on MSDN. Thanks!

~Josh

Thanks, Josh, for sharing your time and knowledge. I had heard about the web services in SharePoint, but I have never played around with them, and I have since completely forgotten about them. This gives me something with which to experiment. Sweet! Join me tomorrow for a great guest post by Jan Egil Ring about working with Microsoft Exchange Web Services—seems to be more than one theme at work here. See you!

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Learn to Use the Exchange Web Services with PowerShell


Summary: In this guest blog article written by Microsoft MVP, Jan Egil Ring, you will learn how to use Exchange Web Services (EWS) with Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. Today, we begin Guest Blogger Weekend. We are really fortunate today to have a great blog post by Microsoft Windows PowerShell MVP, Jan Egil Ring. Here is what Jan says about himself.

Photo of Jan Egil Ring

Jan Egil Ring works as a Senior Consultant on the Infrastructure Team at Crayon, Norway. He mainly works with Microsoft server-products, and has a strong passion for Windows PowerShell. In addition to being a consultant, he is a Microsoft Certified Trainer. He has obtained several certifications such as MCITP: Enterprise Administrator and MCITP: Enterprise Messaging Administrator. In January 2011, he was awarded the Microsoft Most Valuable Professional Award for his contributions in the Windows PowerShell technical community.
Contact information:
Website: blog.powerhell.no
Twitter: http://twitter.com/janegilring
LinkedIn: Jan Egil Ring

Without further ado, take it away Jan…

With the release of Microsoft Exchange Server 2007, we were introduced to Exchange Web Services (EWS), which is continued and further improved in Exchange Server 2010. EWS provides the functionality to enable client applications to communicate with the Exchange Server. It provides access to much of the same data that is made available through Microsoft Office Outlook. EWS clients can integrate Outlook data into line-of-business applications.

Exchange Web Services provides the following types of operations:

  • Availability
  • Bulk Transfer (new in Exchange 2010)
  • Conversations (new in Exchange 2010)
  • Delegate Management
  • Exchange Store Search
  • Exchange Search (new in Exchange 2010)
  • Federated Sharing (new in Exchange 2010)
  • Folder
  • Inbox Rules (new in Exchange 2010)
  • Item
  • Mail Tips (new in Exchange 2010)
  • Messaging Records Management
  • Message Tracking (new in Exchange 2010)
  • Notification
  • Service Configuration (new in Exchange 2010)
  • Synchronization
  • Unified Messaging (new in Exchange 2010)
  • User Configuration (new in Exchange 2010)
  • Utility

For more information about these features, see Exchange Web Services on MSDN.

Exchange Web Services and Windows PowerShell

As stated earlier, EWS can integrate into line-of-business applications, which typically means that working with EWS is a developer task. However, Exchange administrators without any developer background can also leverage EWS by using Windows PowerShell.

The following example demonstrates how I needed to leverage EWS to perform a specific task.

A customer that was migrating from Exchange Server 2003 to Exchange Server 2010 had previously used a non-Microsoft application for room bookings in Exchange. This application required them to change the default IPM.Appointment form in all mailboxes. In practice, this means that the option in the calendar folders was changed from IPM.Appointment to the name of the non-Microsoft application in the field shown here:

Image of folder information

The functionality that was provided by the non-Microsoft application was now integrated into Exchange Server 2010 and Outlook 2010, so they decided to remove the non-Microsoft application. The problem is that the custom form wasn't automatically removed from the mailboxes, and there isn't any way to change this option by using the standard Exchange management tools.

However, the property that we need to change can be changed by using EWS. The first thing to do before leveraging EWS from Windows PowerShell is to download and install the Exchange Web Services Managed API. Then the DLL that is available after the Exchange Web Services Managed API is installed can be imported into Windows PowerShell 2.0 by using Import-Module, as shown here:

Import-Module -Name "C:\Program Files\Microsoft\Exchange\Web Services\1.1\Microsoft.Exchange.WebServices.dll"

When the DLL is loaded, we have access to the Microsoft.Exchange.WebServices.Data.ExchangeService namespace, which we can use to create an ExchangeService object to connect to EWS. We use New-Object to set up an instance (object) in the namespace. We then specify the credentials to use. In the following example, we use the credentials of the currently logged on Windows user. We could also use alternate credentials, which is demonstrated in the full example at the end of this post. Finally, we set the Autodiscover URL that the service will use to locate the EWS endpoints that are configured in Exchange. This is shown here:

$exchService = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$exchService.UseDefaultCredentials = $true
$exchService.AutodiscoverUrl("user01@domain.com")

Note that the EWS Managed API 1.1 defaults to Exchange2010_SP1 as the Exchange version that it is connecting to. If you are running another version of Exchange or another service pack, you must specify the correct version by passing the ArgumentList parameter to New-Object.
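For example, targeting an earlier version looks like this (the enum value shown is an example; pick the one that matches your environment):

```powershell
# Override the default (Exchange2010_SP1) by passing the ExchangeVersion enum
# value to the ExchangeService constructor via -ArgumentList.
$version = [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2007_SP1
$exchService = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService -ArgumentList $version
$exchService.UseDefaultCredentials = $true
```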

Next, we use the Folder.Bind method of the Microsoft.Exchange.WebServices.Data namespace. We specify the ExchangeService object that we created earlier as the first argument, and then we add an instance of the Microsoft.Exchange.WebServices.Data.WellKnownFolderName namespace that points to the calendar folder as the second argument. To see the details of what arguments the Folder.Bind method accepts, we can use the Get-Member cmdlet in Windows PowerShell, or we can look at the MSDN documentation for this specific class.

$Calendar = [Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchservice,[Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Calendar)

Here we can see how the Calendar object looks in Windows PowerShell:

Image of command output

The property that we want to set is an extended property, which isn't available by default. For this specific task, I'm not interested in viewing the existing property value, so I'll use the SetExtendedProperty method on the Calendar object, which we can see on MSDN or by using the Get-Member cmdlet as shown here:

Image of command output

Before we can use the SetExtendedProperty method, we need to determine which arguments to pass to it. These are the properties we want to change. The way I found the property names was by looking up the property pages for a calendar folder by using the Exchange 2010 SP1 ExFolders tool:

Image of Property page

What we need is the MAPI property tags, which we can see in the PropTag column above. As we can see from the following MSDN documentation, the first part is the property identifier. This is the value we need when we create ExtendedPropertyDefinition objects. For the two properties that we want to change, the values are 0x36E5 and 0x36E6.

Now that we know the MAPI Property identifiers, we can define the properties to be changed as Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition objects, as shown here:

$PR_DEF_POST_MSGCLASS_W = new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x36E5,[Microsoft.Exchange.WebServices.Data.MapiPropertyType]::String)

$PR_DEF_POST_DISPLAYNAME_W = new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x36E6,[Microsoft.Exchange.WebServices.Data.MapiPropertyType]::String)

We're now ready to use the SetExtendedProperty method, which takes two arguments according to the MSDN documentation. The first one is the ExtendedPropertyDefinition, which we defined in the previous step, and the second argument is the value that we want to set. In this case, there are two properties that we need to change; thus, we need to call the SetExtendedProperty method twice:

$calendar.SetExtendedProperty($PR_DEF_POST_DISPLAYNAME_W,"Appointment" )
$calendar.SetExtendedProperty($PR_DEF_POST_MSGCLASS_W,"IPM.Appointment")

At last, we need to call the Update method to perform the actual update of the Calendar object:

$calendar.Update()

To perform this operation on all the mailboxes, you would first need to configure Exchange Impersonation (which is very easy to do in Exchange Server 2010) for the user name that will be used to run the script. Then you need to retrieve all mailboxes and perform a foreach loop.

Of course, you will also want to add error handling and logging when you run this in production, but here is an example to get you started:

Import-Module -Name "C:\Program Files\Microsoft\Exchange\Web Services\1.1\Microsoft.Exchange.WebServices.dll"

$Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials("username","password","domain")
$exchService = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$exchService.Credentials = $Credentials

$mailboxes = Get-Mailbox

foreach ($mailbox in $mailboxes) {
$exchService.AutodiscoverUrl($mailbox.PrimarySmtpAddress)

$Calendar = [Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchservice,[Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Calendar)

$PR_DEF_POST_MSGCLASS_W = new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x36E5,[Microsoft.Exchange.WebServices.Data.MapiPropertyType]::String)

$PR_DEF_POST_DISPLAYNAME_W = new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x36E6,[Microsoft.Exchange.WebServices.Data.MapiPropertyType]::String)

$calendar.SetExtendedProperty($PR_DEF_POST_DISPLAYNAME_W,"Appointment" )
$calendar.SetExtendedProperty($PR_DEF_POST_MSGCLASS_W,"IPM.Appointment")
$calendar.Update()
}

When you start to explore EWS, there is a tool called EWS Editor that is available on CodePlex, which can help you familiarize yourself in depth with items, folders, and their properties. If you need any assistance related to working with Exchange Web Services, I recommend that you use the Exchange Server Development Forum on the Exchange Server TechCenter. I would like to thank Glen Scales for his assistance with extended properties in EWS.

Additional resources

Introduction to Exchange Web Services
Using Exchange Web Services Managed API 1.0 from PowerShell 2.0
EWS Editor
Send Email from Exchange Online by Using PowerShell
Glen's Exchange Dev Blog
Mike Pfeiffer's EWS blog posts

Thank you, Jan. This is really some good stuff. Guest Blogger Weekend will continue tomorrow when Eric Wright will talk about how to use Windows PowerShell to move Active Directory computers, based on IP address. 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Move Computers Based on IP Addresses: Part 1


Summary: In this guest blog post written by Eric Wright, you will learn how to use the Windows PowerShell snap-in, Quest ActiveRoles, to move computers that are organized in Active Directory, based on their IP addresses.

Microsoft Scripting Guy, Ed Wilson, is here. Guest Blogger Weekend continues. Today we have a guest blog written by Eric Wright. Here is what Eric has to say about himself.

Photo of Eric Wright

I am a systems architect and blogger, and I work with Microsoft tools, Windows PowerShell, virtualization, and various web technologies. I’m a big fan of automation and scripting to simplify and enhance systems administration.
Contact information:
Website: DiscoPosse—Using the chicken to measure IT
Twitter: http://www.twitter.com/discoposse
LinkedIn: http://ca.linkedin.com/pub/eric-wright/3/7b4/bb6

Note: This script uses the Windows PowerShell snap-in, Quest ActiveRoles. This process will work with any version of Windows Server, but you must use Quest ActiveRoles if you are running a version earlier than Windows Server 2008 R2 on your domain controllers. Tomorrow in Part 2 of this blog, I document the same process by using the native Active Directory module, which requires Active Directory domain controllers running Windows Server 2008 R2 or the Active Directory Management Gateway Service.

In my organization, I have chosen to organize my Active Directory (AD) organizational unit (OU) structure based on physical locations. A common challenge is that our technical support team does not always move computer accounts into the proper structure in Active Directory. Another issue is that computers may not be deleted from the domain when they are decommissioned. This confuses other processes that use Active Directory as their authoritative source for computer object information.

To tackle this issue, I created a Windows PowerShell script that runs as a batch process and will move the computer objects into OUs based on their IP addresses.

In my example, I am looking for only Windows 7 computers, but this can be flavored to match any selection criteria you need. The structure of the script is to do the following:

  1. Check the operating system for Windows 7 (any version).
  2. Check to see if the computer has been off the domain.
  3. If the computer has been off the network for 60 days, move it to a “Disabled” OU.
  4. If the computer has been off the network for 90 days, delete it.
  5. Check for the last DNS registration of the computer, and move it to an OU based on its IP information.

For our script to work, we need to have the Windows PowerShell snap-in, Quest ActiveRoles, installed on the computer that will be running the script for us. 

If you are running Active Directory Domain Controllers with Windows Server 2008 R2, you can use the native ActiveDirectory Windows PowerShell module. I will post the script for using the Windows Server 2008 R2 module tomorrow in part 2 of this series.

We will also need to define the IP subnets and the OU structure so that we can match the computer object’s IP information and move it to its correct location in AD.

First we load the Quest ActiveRoles snap-in as shown here:

Add-PsSnapIn Quest.ActiveRoles.ADManagement -ErrorAction SilentlyContinue

Next, we want to define two parameters for the age of the computers. I call these $old and $veryold, and for my example, I have set them as 60 days and 90 days respectively. You can adjust these easily to suit your needs.

$old = (Get-Date).AddDays(-60) # Modify the -60 to match your threshold

$veryold = (Get-Date).AddDays(-90) # Modify the -90 to match your threshold 

Now the fun part! Because we will capture the IP information as a string and not as an integer, it is a bit more challenging to figure out which subnet we are in. This example has three subnets: 192.168.1.0/24, 192.168.2.0/24, and 192.168.3.0/24. I have chosen class C subnets for this script to match my structure, but you may have to get more creative if you have a more complex network configuration.

We will define our IP range variables as regular expressions (or Regex as they are commonly known), so that we can match the characters appropriately. Sorry kids, but it is goodbye GUI and hello Regex for this stuff.

$Site1IPRange = "\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:1)\.)" + "\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))" # 192.168.1.0/24

$Site2IPRange = "\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:2)\.)" + "\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))" # 192.168.2.0/24

$Site3IPRange = "\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:3)\.)" + "\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))" # 192.168.3.0/24

I know you are probably thinking it is time to just retrain the support staff to do this, right? Do not be frightened away just yet. Regex is easier than you may think once you use it more and can break it down into sensible chunks. It is as simple as reading a map (OK, that is not always simple).

Here Be Regex Dragons!

 Image of map

The key information we see here is pretty readable. Because we know the first three octets are static, we define them easily, as follows:

"\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:1)\.)"

This shows us matching as 192.168.1., which takes care of the first three octets. Because it is a class C IP range, we want to capture from 0-255 in the fourth octet, which is done like this:

"\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))"

We read the string and look for a match in one of three distinct ranges: 250-255, 200-249, or 0-199. The OR is the important part of the phrasing, and it is represented by the | symbol, which is known as the "pipe" symbol to most.

Here is the breakdown of those three ranges:

250-255: 25[0-5]

200-249: 2[0-4][0-9]

0-199: [01]?[0-9][0-9]?
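These octet ranges are ordinary regular-expression syntax, so you can sanity-check them in any regex engine. Here is a quick sketch in Python (used here only because it is convenient for testing; the pattern itself is exactly the fourth-octet pattern from the script):

```python
import re

# The fourth-octet pattern from the script: should match 0-255 and nothing else
octet = re.compile(r"\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))")

# fullmatch forces the whole string to be consumed by the pattern
assert all(octet.fullmatch(str(n)) for n in range(256))  # 0-255 all match
assert octet.fullmatch("256") is None                    # out of range
assert octet.fullmatch("1234") is None                   # too many digits
print("octet pattern verified for 0-255")
```

Running a check like this before you point the script at Active Directory is a cheap way to catch a typo in the alternation.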

 OK, we are through the tough part. Now we define the OU structure for the Disabled and the three locations:

$DisabledDN = "OU=Disabled,DC=yourdomain,DC=com"

$Site1DN = "OU=Site1,DC=yourdomain,DC=com"

$Site2DN = "OU=Site2,DC=yourdomain,DC=com"

$Site3DN = "OU=Site3,DC=yourdomain,DC=com"

This is where we begin the object query. Because we want to find out how “old” the computer account is, we will bring in the pwdLastSet property from Active Directory, in addition to the default values. (Note that the Quest ActiveRoles parameter uses pwdLastSet, whereas the native Microsoft parameter in tomorrow’s blog uses PasswordLastSet.) This will tell us the last time the hidden password that is negotiated between the computer account and Active Directory has been reset. The default maximum duration is 30 days, so as long as a computer is connecting to the domain regularly, it should always be less than 30 days old.

If you want to modify the operating systems that get captured, you simply change the selection parameters of the Get-QADComputer query as shown here:

Get-QADComputer -ComputerRole member -IncludedProperties pwdLastSet -SizeLimit 0 -OSName 'Windows 7*' | ForEach-Object { THE REST OF OUR SCRIPT GOES IN HERE }

Our script will query AD for each Computer object, and we will run the next bunch of processes against each object in the ForEach-Object loop. All of the following content is stored inside the curly brackets.

Let’s ignore any failure messages from the IP lookup with this:

trap [System.Net.Sockets.SocketException] { continue; }

We need to use the Computer name, DN, and pwdLastSet, so let’s set those as variables from the query result. We also want to capture the current container, so we use a simple Replace command to derive the current OU location:

$ComputerName = $_.Name

$ComputerDN = $_.DN

$ComputerPasswordLastSet = $_.pwdLastSet

$ComputerContainer = $ComputerDN.Replace( "CN=$ComputerName," , "")
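To see what that Replace call accomplishes, consider a made-up distinguished name (the computer name and domain below are purely illustrative). Stripping the leading CN=<name>, component leaves the parent container. The same string operation, sketched in Python:

```python
# Hypothetical example values, not taken from a real environment
computer_name = "PC001"
computer_dn = "CN=PC001,OU=Site1,DC=yourdomain,DC=com"

# Mirrors $ComputerDN.Replace("CN=$ComputerName,", "") in the script
computer_container = computer_dn.replace(f"CN={computer_name},", "")
print(computer_container)  # OU=Site1,DC=yourdomain,DC=com
```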

 Now we can work with the Computer account age and delete or move them as necessary:

# If the computer is more than 90 days off the network, remove the computer object

if ($ComputerPasswordLastSet -le $veryold) {

            Remove-QADObject -Identity $ComputerDN

}

# Check to see if it is an "old" computer account and move it to the Disabled\Computers OU

if ($ComputerPasswordLastSet -le $old) {

$DestinationDN = $DisabledDN

Move-QADObject -Identity $ComputerDN -NewParentContainer $DestinationDN

}

Next, we query DNS for the IP address of the computer. We will set the $IP value as $NULL first, so that if the query fails, it will be dealt with correctly later in the process. If we don’t set the NULL value, it retains the IP from the last lookup, and it will move the computer incorrectly.

$IP = $NULL # Reset first so a failed lookup never reuses the previous IP

$IP = [System.Net.Dns]::GetHostAddresses("$ComputerName")
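The reset-to-null-then-look-up pattern (together with the trap statement shown earlier) translates to most languages. As a rough Python equivalent of the same idea, assuming the host names are placeholders:

```python
import socket

def lookup(host):
    """Return the host's first IP address, or None if the lookup fails."""
    ip = None  # reset first, so a failed lookup never reuses a stale value
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        pass  # swallow the resolver error, mirroring trap { continue }
    return ip

print(lookup("localhost"))        # e.g. 127.0.0.1
print(lookup("no-such.invalid"))  # None (the .invalid TLD never resolves)
```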

Now it is time to check for the IP range to set the destination DN accordingly. If you have a majority of systems in some network ranges, you may want to move those up to the top of the If statement so that they are processed early, which will save some time.

if ($IP -match $Site1IPRange) {

            $DestinationDN = $Site1DN

}

ElseIf ($IP -match $Site2IPRange) {

            $DestinationDN = $Site2DN

}

ElseIf ($IP -match $Site3IPRange) {

            $DestinationDN = $Site3DN

}

Else {

            # If the subnet does not match we should not move the computer so we do nothing

            $DestinationDN = $ComputerContainer

}

Let’s do a health check on our IP selection:

 Image of command output

And here is the last step to actually move the object to the new destination OU. This is where our NULL IP comes into play because we have assumed that if the IP is NULL, it is “off network” and the aged account process has already dealt with it:

if ($IP -ne $NULL) {

            Move-QADObject -Identity $ComputerDN -NewParentContainer $DestinationDN

}

And we made it! Another exciting tip with this script is that you can run all of the QADObject cmdlets with the WhatIf parameter, which will output the result to the screen rather than perform the move or delete, so you can test drive the script before you implement it.

You can download the script in its full form from the TechNet Resources Gallery.

~Eric

Thank you, Eric. This is a really cool solution to a rather common problem. I am looking forward to part 2 tomorrow from Eric.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Move Computers Based on IP Addresses: Part 2


Summary: In this blog, Eric Wright revises his script by using Active Directory cmdlets to move computers that are organized in Active Directory, based on their IP addresses.

Microsoft Scripting Guy, Ed Wilson, is here. In today’s post, guest blogger, Eric Wright, reprises yesterday’s post to use the Windows Active Directory module cmdlets. Here is a bit about Eric.

Photo of Eric Wright

I am a systems architect and blogger, and I work with Microsoft tools, Windows PowerShell, virtualization, and various web technologies. I’m a big fan of automation and scripting to simplify and enhance systems administration.
Contact information:
Website: DiscoPosse—Using the chicken to measure IT
Twitter: http://www.twitter.com/discoposse
LinkedIn: http://ca.linkedin.com/pub/eric-wright/3/7b4/bb6

Note: This script requires Active Directory domain controllers with Windows Server 2008 R2 or the Active Directory Management Gateway Service. For more information about installing the gateway service, see the Hey, Scripting Guy! Blog Install Active Directory Management Service for Easy PowerShell Access. With Windows Server 2008 R2, you can use the native Active Directory PowerShell module. If you are running an earlier version of Windows Server on your domain controllers, or if you do not have the Active Directory Management Gateway Service installed, you can use the process that I documented yesterday in part 1, which uses the Windows PowerShell snap-in, Quest ActiveRoles.

In my organization, I have chosen to organize my Active Directory (AD) organizational unit (OU) structure based on physical locations. A common challenge is that our technical support team does not always move computer accounts into the proper structure in Active Directory. Another issue is that computers may not be deleted from the domain when they are decommissioned. This confuses other processes that use Active Directory as their authoritative source for computer object information.

To tackle this issue, I created a Windows PowerShell script that runs as a batch process and will move the computer objects into OUs based on their IP addresses.

In my example, I am looking for only Windows 7 computers, but this can be flavored to match any selection criteria you need. The structure of the script is to do the following:

  1. Check the operating system for Windows 7 (any version).
  2. Check to see if the computer has been off the domain.
  3. If the computer has been off the network for 60 days, move it to a “Disabled” OU.
  4. If the computer has been off the network for 90 days, delete it.
  5. Check for the last DNS registration of the computer, and move it to an OU based on its IP information.

We also need to define the IP subnets and the OU structure so that we can match the computer object’s IP information and move it to its correct location in AD.

First, we load the ActiveDirectory module as follows:

Import-Module ActiveDirectory

Next, we want to define two parameters for the age of the computers. I call these $old and $veryold, and for my example, I have set them as 60 days and 90 days respectively. You can adjust these easily to suit your needs.

$old = (Get-Date).AddDays(-60) # Modify the -60 to match your threshold

$veryold = (Get-Date).AddDays(-90) # Modify the -90 to match your threshold 

Now the fun part! Because we will capture the IP information as a string and not an integer, this makes it a bit more challenging to figure out what subnet we are in. This example has three subnets, which are 192.168.1.0/24, 192.168.2.0/24 and 192.168.3.0/24. I have chosen class C subnets for this script to match my structure, but you may have to get more creative if you have a more complex network configuration.

We will define our IP range variables as Regular Expressions (or Regex as they are commonly known) so that we can match the characters appropriately. Sorry kids, but it is goodbye GUI and hello Regex for this stuff.

$Site1IPRange = "\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:1)\.)" + "\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))" # 192.168.1.0/24

$Site2IPRange = "\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:2)\.)" + "\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))" # 192.168.2.0/24

$Site3IPRange = "\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:3)\.)" + "\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))" # 192.168.3.0/24

I know you are probably thinking it is time to just retrain the support staff to do this, right? Do not be frightened away just yet. Regex is easier than you may think once you use it more and can break it down into sensible chunks. It is as simple as reading a map (OK, that is not always simple).

Here Be Regex Dragons!

 Image of map

The key information we see is pretty readable. Because we know the first three octets are static, we define them easily, as follows:

"\b(?:(?:192)\.)" + "\b(?:(?:168)\.)" + "\b(?:(?:1)\.)"

This shows us matching as 192.168.1., which takes care of the first three octets. Because it is a class C IP range, we want to capture from 0-255 in the fourth octet, which is done like this:

"\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))"

We read the string and look for a match in one of three distinct ranges: 250-255, 200-249, or 0-199. The OR is the important part of the phrasing, and it is represented by the | symbol, which is known as the "pipe" symbol to most.

Here is the breakdown of those three ranges:

250-255: 25[0-5]

200-249: 2[0-4][0-9]

0-199: [01]?[0-9][0-9]?

OK, we are through the tough part. Now we define the OU structure for the Disabled and the three locations:

$DisabledDN = "OU=Disabled,DC=yourdomain,DC=com"

$Site1DN = "OU=Site1,DC=yourdomain,DC=com"

$Site2DN = "OU=Site2,DC=yourdomain,DC=com"

$Site3DN = "OU=Site3,DC=yourdomain,DC=com"

This is where we begin the object query. Because we want to find out how “old” the computer account is, we will bring in the PasswordLastSet property from Active Directory, in addition to the default values. (Note that the native Microsoft parameter uses PasswordLastSet, whereas the Quest ActiveRoles parameter in yesterday's blog uses pwdLastSet.) This will tell us the last time the hidden password that is negotiated between the computer account and Active Directory has been reset. The default maximum duration is 30 days, so as long as a computer is connecting to the domain regularly, it should always be less than 30 days old.

If you want to modify the operating systems that get captured, you simply change the selection parameters of the Get-ADComputer query as shown here:

Get-ADComputer -Filter { OperatingSystem -like "Windows 7*" } -Properties PasswordLastSet | ForEach-Object {THE REST OF OUR SCRIPT GOES IN HERE }

Our script will query AD for each Computer object, and we will run the next bunch of processes against each object in the ForEach-Object loop. All of the following content is stored inside the curly brackets.

Let’s ignore any failure messages from the IP lookup with this:

trap [System.Net.Sockets.SocketException] { continue; }

We need to use the Computer name, DN, and PasswordLastSet, so let’s set those as variables from the query result. We also want to capture the current container, so we use a simple Replace command to derive the current OU location:

$ComputerName = $_.Name

$ComputerDN = $_.DistinguishedName # Get-ADComputer exposes DistinguishedName (DN is a Quest property)

$ComputerPasswordLastSet = $_.PasswordLastSet

$ComputerContainer = $ComputerDN.Replace( "CN=$ComputerName," , "")

Now we can work with the Computer account age and delete or move them as necessary:

# If the computer is more than 90 days off the network, remove the computer object

if ($ComputerPasswordLastSet -le $veryold) {

            Remove-ADObject -Identity $ComputerDN

}

# Check to see if it is an "old" computer account and move it to the Disabled\Computers OU

if ($ComputerPasswordLastSet -le $old) {

            $DestinationDN = $DisabledDN

            Move-ADObject -Identity $ComputerDN -TargetPath $DestinationDN

}

Next, we query DNS for the IP address of the computer. We will set the $IP value as $NULL first, so that if the query fails it will be dealt with correctly later in the process. If we don’t set the NULL value, it retains the IP from the last lookup, and it will move the computer incorrectly.

$IP = $NULL

$IP = [System.Net.Dns]::GetHostAddresses("$ComputerName")

Now it is time to check for the IP range to set the destination DN accordingly. If you have a majority of systems in some network ranges, you may want to move those up to the top of the If statement so that they are processed early, which will save some time:

if ($IP -match $Site1IPRange) {

            $DestinationDN = $Site1DN

}

ElseIf ($IP -match $Site2IPRange) {

            $DestinationDN = $Site2DN

}

ElseIf ($IP -match $Site3IPRange) {

            $DestinationDN = $Site3DN

}

Else {

            # If the subnet does not match we should not move the computer so we do nothing

            $DestinationDN = $ComputerContainer

}

And here is the last step to actually move the object to the new destination OU. This is where our NULL IP comes into play because we have assumed that if the IP is NULL, it is “off network” and the aged account process has already dealt with it:

if ($IP -ne $NULL) {

            Move-ADObject -Identity $ComputerDN -TargetPath $DestinationDN

}

And we made it! Another exciting tip with this script is that you can run all of the ADObject cmdlets with the WhatIf parameter, which will output the result to the screen rather than perform the move or delete, so you can test drive the script before you implement it.

You can download this script in its full form from the TechNet Resources Gallery.

Thank you, Eric. This has been a great series. I appreciate you taking the time to share your knowledge with us.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Learn Simple Ways to Handle Windows PowerShell Arrays


Summary: Microsoft Scripting Guy, Ed Wilson, teaches you how to handle arrays in Windows PowerShell.

Hey, Scripting Guy! Question Hey, Scripting Guy! One of the things I do not understand is arrays. I mean, I really do not get it at all. In VBScript, it seemed like I always tripped up on arrays. In Windows PowerShell, it seems that no one ever talks about arrays. I am not claiming that I was ever a great VBScript person, but at least I got to the point where I could sort of read and understand a script that I was copying from the Script Center. In fact, at work, everyone will tell you that I am actually the scripting guy (no offense). In VBScript there were tools that I could use (like IsArray) that would tell me if I had an array—but in Windows PowerShell, I never see anything of the sort. What gives?

—JF

Hey, Scripting Guy! Answer Hello JF,

Microsoft Scripting Guy, Ed Wilson, is here. I do not mind if you are considered the scripting guy at your work—in fact, I am flattered. I think scripting guy should be a job title just like network administrator or system analyst. After getting your email at scripter@microsoft.com, I went back through the blogs I have written over the last three years or so, and sure enough, I have not written a lot about arrays. I have one blog called Using PowerShell Get-Member to Explore the .NET Framework, and I have a few other posts that are a bit more advanced, but it seems that I have not gotten down to basics and focused on using Windows PowerShell to work with arrays. Thank you for calling this to my attention, and I intend to rectify this issue immediately with Array Week.

What is an array?

An array is a way of storing data that permits more than one item to be stored in a variable or a field. For example, if I want to store a single number in a variable, all I need to do is use a straightforward value assignment. This command appears here.

$a = 1

But what if I need to store two numbers in the same variable? In this case, I use a comma to separate the values that I want to store. This is illustrated here.

$b = 2,3

There is no need to have a space between the comma and the next number when storing values in an array. In fact, Windows PowerShell is extremely flexible when it comes to spaces around the comma.  All of the following commands work and create an array containing two elements.

$c = 4 , 5

$d = 6, 7

$e = 8 ,9

These commands and their associated output are shown in the following image. 

Image of command output

Elements, indexes, and values, Oh My!

Each item that is stored in an array is an element. When working with an array, we need a way to address each item that is contained in the array. To do this, we use index numbers. The index numbers reference each element that is stored in an array. The thing that gets a bit confusing is that in Windows PowerShell, arrays always begin with zero. Therefore, the first element in an array is indexed by 0. You can also refer to that as element zero. The table that follows illustrates these concepts.

Element number:       1    2    3    4
Index number:         0    1    2    3
Value in the array:   A    B    C    D

To create an array with four elements, I assign a value to each element by separating each value with a comma. In the following code, I assign the letters A, B, C, and D to an array named ARRAY. Windows PowerShell stores the array in a variable, and therefore, referencing the array requires a dollar sign in front of the variable. To see the data that is stored in an array, I can simply call the variable, and each element of the array appears on its own line in the Windows PowerShell console. The commands to create an array and view its contents are shown here.

PS C:\> $ARRAY = "A","B","C","D"

PS C:\> $ARRAY

A

B

C

D

In other languages, it is possible to create an array that is zero-based or one-based. Having a one-based array avoids the confusion of having element 0 addressed by index 1, but it introduces another type of confusion—is the array I am working on zero-based or one-based? In any language where the capability to create an array that is zero-based or one-based exists, it is essential to have the capability to discover the lower boundary of the array. To discover the lower boundary of an array, use the GetLowerBound method. The use of the GetLowerBound method is shown here.

PS C:\> $ARRAY.GetLowerBound(0)

0

Of course, in Windows PowerShell the lower boundary of an array is always zero; therefore, the command is not useful. What is useful is the GetUpperBound method, because it returns the highest index in the array (one less than the number of items the array contains). The use of the GetUpperBound method is shown here.

PS C:\> $ARRAY.GetUpperBound(0)

3

When I have a good idea of the dimensions (the lower boundary and the upper boundary) of my array, I can use square brackets to retrieve individual elements from the array. The technique of retrieving individual elements from an array is called indexing. Therefore, I use square brackets to index into my array and retrieve the individual elements. To obtain the first element in the array, I use index zero as shown here.

PS C:\> $ARRAY[0]

A

If I want to obtain the third item (element) in my array, I use index two (because the array is zero-based, the index is always one less than the element number). This command is shown here.

PS C:\> $ARRAY[2]

C

The commands to create an array, obtain its boundaries, and index it into the first and third elements of the array are shown here with the associated output.

Image of command output

In other languages, it is common to use the for statement to walk through an array. This technique also works in Windows PowerShell. The steps to do this are:

  1. Use the for statement.
  2. Use the GetLowerBound method to obtain the lower boundary of the array.
  3. Use the GetUpperBound method to obtain the upper boundary of the array.
  4. Use a counter variable to keep track of the element numbers.
  5. Use the ++ operator to increment the counter variable.
  6. Use the counter variable to index directly into the array.

The code to use the for statement to walk through the $ARRAY array is shown here.

for($i = $ARRAY.GetLowerBound(0); $i -le $array.GetUpperBound(0); $i++) {$ARRAY[$i]}

Because the lower boundary of a Windows PowerShell array is always zero, the command can be shortened a bit by using 0 in place of the GetLowerBound command. The simplified version of the command is shown here.

for($i = 0; $i -le $array.GetUpperBound(0); $i++) {$ARRAY[$i]}

Two properties describe how many elements an array contains: the Length property and the Count property. The thing to keep in mind is that both the Length and the Count properties return a count that starts at 1, whereas index numbers start at 0. Therefore, if you are using the for technique to walk through an array, it is necessary to subtract 1 from the Length property or the Count property. The following two commands illustrate these techniques.

for($i = 0; $i -le $array.count - 1; $i++) {$ARRAY[$i]}

for($i = 0; $i -le $array.length - 1; $i++) {$ARRAY[$i]}

The command to use the for statement with the GetLowerBound and the GetUpperBound methods, in addition to the other versions of the command, are shown in the following image.

Image of command output

Send the array through the pipeline

One of the really powerful aspects of Windows PowerShell is that it automatically handles arrays; therefore, things like lower boundaries, upper boundaries, elements, and index numbers are avoidable. For example, by using the pipeline and the Foreach-Object cmdlet, all the complexity disappears. The steps to use the Foreach-Object cmdlet to address elements in an array are:

  1. Pipe the array to the Foreach-Object cmdlet.
  2. Inside the script block that is associated with the Foreach-Object cmdlet, use the $_ automatic variable to reference each element of the array.

The following command illustrates the pipeline technique to access elements in an array.

$ARRAY | foreach-object { $_ }

If I decide to use the % alias for the Foreach-Object cmdlet, the command becomes even shorter. The following command illustrates this technique.

$ARRAY | % { $_ }

The following image illustrates using the pipeline technique to access elements in an array, along with the associated output from the commands.

 Image of command output

JF, that is all there is for part one. Array Week will continue tomorrow when I continue talking about creating and manipulating arrays.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy
