Channel: Hey, Scripting Guy! Blog

Learn the Easy Way to Use PowerShell to Get File Hashes


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using the Windows PowerShell PSCX Get-Hash cmdlet to get file hashes for files in a directory.

Hey, Scripting Guy! Question

Hey, Scripting Guy! I have a question that I hope will not require a lot of work on your part. I need to find the MD5 hash of files and folders. I use this information to determine if something has changed on a system. The problem is that everything I have seen appears to make this really complicated. Is there anything you can do that will make this easier?

—MO

Hey, Scripting Guy! Answer

 Hello MO,

Microsoft Scripting Guy, Ed Wilson, is here. It is official. I am going to be doing a book signing at Microsoft TechEd 2012. I will be autographing copies of my Windows PowerShell 2.0 Best Practices book that was published by Microsoft Press. Incidentally, the autograph session will take place at the O’Reilly booth on Tuesday at 10:30 (refer to the following blog for a complete schedule: The Scripting Guys Reveal Their TechEd 2012 Schedule). You will want to get to the booth early because we will be giving away autographed copies to the first 25 people in line. If you have a copy that you want to ensure gets signed, bring it to the book signing. Or just bring it along to the Scripting Guys booth, where I will also be glad to sign books, T-shirts, hats…whatever you happen to have (not blank checks, however).

Note   This is the third in a series of four Hey, Scripting Guy! blogs about using Windows PowerShell to facilitate security forensic analysis of a compromised computer system. The intent of the series is not to teach security forensics, but rather to illustrate how Windows PowerShell could be utilized to assist in such an inquiry. The first blog discussed using Windows PowerShell to capture and to analyze process and service information.  The second blog talked about using Windows PowerShell to save event logs in XML format and perform offline analysis.

MD5 hashing of files

MO, actually I do not think that computing an MD5 hash is all that complicated—a bit tedious, but not overly complicated. The MD5 class is documented on MSDN, and it is not too bad to use directly. However, there is no real reason for IT pros to mess with the .NET Framework classes if they do not want to. The reason? The Windows PowerShell Community Extensions include a function that will get the MD5 hash for you. It works just like any other Windows PowerShell function or cmdlet, and it is extremely easy to use.

Note   I have written several blogs about the PowerShell Community Extensions (PSCX). One especially well received blog, Tell Me About PowerShell Community Extensions, was written by one of the developers of the project, Windows PowerShell MVP, Keith Hill. To obtain the PSCX, download them from CodePlex, and follow the installation instructions in Keith’s blog.

After you install the PSCX, import the module by using the following command:

Import-Module pscx

The cmdlet you want to use is the Get-Hash cmdlet. It accepts piped input for the path to the file to hash, and it returns an object with the path to the file and the hash value. You can specify the type of hash to use (MD5, SHA1, SHA256, SHA384, SHA512, or RIPEMD160), but this is not a requirement because it selects an MD5 hash by default. The Get-Hash cmdlet does not hash directories, only files. Therefore, an error (Access is denied) is returned when the Get-Hash cmdlet encounters a directory. There are several approaches to dealing with this issue:

  1. Ignore the error.
  2. Tell the cmdlet to ignore the error by specifying an error action.
  3. Develop a filter (by using the Where-Object) that returns only files.

Ignore the error

The following command generates an MD5 hash for every file in the c:\fso directory:

dir c:\fso -Recurse | Get-Hash

The command and its associated output are shown here (errors appear in the output due to the presence of child directories).

Image of command output

Tell the cmdlet to ignore the error

One way to deal with expected errors is to tell the cmdlet to ignore the error for you. Windows PowerShell cmdlets implement a common parameter named ErrorAction. This ubiquitous parameter permits you to control the error action preference on a command-by-command basis. You can achieve the behavior of the VBScript On Error Resume Next functionality if you so desire, but the real power is that it is doable on a cmdlet-by-cmdlet basis. There is also a global variable named $ErrorActionPreference that enables you to set the behavior on a global basis. There are four allowable values for the ActionPreference. These values are shown here (retrieved by using the GetValues static method from the System.Enum .NET Framework class).

PS C:\> [enum]::GetValues("System.Management.Automation.ActionPreference")

SilentlyContinue

Stop

Continue

Inquire

The SilentlyContinue value tells Windows PowerShell not to report any errors, but to continue to attempt to process the next command. This is the On Error Resume Next type of setting. Stop means that Windows PowerShell will halt execution when it reaches an error. Continue means that Windows PowerShell will inform you of the error, and will then continue processing if possible (this is the default behavior). Inquire means that Windows PowerShell will let you know an error occurred, and ask whether you want to continue execution or halt the command. There are numerical equivalents to the values; SilentlyContinue, for example, is equal to 0. The ErrorAction common parameter also has a parameter alias of EA. Putting this information together yields the following command.
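The numerical mapping is easy to verify by casting the enumeration values to integers (in later versions of Windows PowerShell, additional values such as Ignore may also appear):

```powershell
# List each ActionPreference value alongside its numeric equivalent
[enum]::GetValues([System.Management.Automation.ActionPreference]) |
    ForEach-Object { '{0} = {1}' -f [int]$_, $_ }

# Because SilentlyContinue is 0, -ErrorAction 0 and -EA SilentlyContinue are equivalent
```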

dir c:\fso -Recurse | Get-Hash -ea 0

The command to retrieve an MD5 hash value for each file in the c:\fso directory and to suppress any errors that may arise is shown here, along with the output associated with the command.

Image of command output

Develop a filter

To create a filter that only returns files, use the psiscontainer property with Where-Object. Because I do not want to retrieve any ps container type of objects, I use the not operator (!) to tell Windows PowerShell that I want no containers. This approach is shown here.

dir c:\fso -Recurse | Where-Object {!$_.psiscontainer } | get-hash

The command and the output from the command are shown here.

Image of command output

From a performance standpoint, filtering out only files was a bit faster—not much, but a little: 3.06 total seconds as opposed to 3.28 total seconds. Due to file system caching, I rebooted my computer, ran the Measure-Command cmdlet, rebooted the computer, and ran the Measure-Command cmdlet. This eliminated caching from the equation. The commands and the associated output from the commands are shown here.

PS C:\> Measure-Command { dir c:\fso -Recurse | Where-Object {!$_.psiscontainer } | get-hash }

 

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 3

Milliseconds      : 61

Ticks             : 30610419

TotalDays         : 3.54287256944444E-05

TotalHours        : 0.000850289416666667

TotalMinutes      : 0.051017365

TotalSeconds      : 3.0610419

TotalMilliseconds : 3061.0419

 

PS C:\> Measure-Command { dir c:\fso -Recurse | Get-Hash -ea 0 }

 

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 3

Milliseconds      : 280

Ticks             : 32803013

TotalDays         : 3.79664502314815E-05

TotalHours        : 0.000911194805555555

TotalMinutes      : 0.0546716883333333

TotalSeconds      : 3.2803013

TotalMilliseconds : 3280.3013

MO, that is all there is to using the Get-Hash cmdlet from the PSCX to obtain hash values from files in a folder. Security Week will continue tomorrow when I will talk about storing and comparing hash values to detect changes to files.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Use PowerShell to Compute MD5 Hashes and Find Changed Files


Summary: Learn how to use Windows PowerShell to compute MD5 hashes and find files changed in a folder.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have a folder and I would like to detect if files within it have changed. I do not want to write a script to parse file sizes and dates modified because that seems to be a lot of work. Is there a way I can use an MD5 hash to do this? Oh, by the way, I do have a reference folder that I can use.

—RS

Hey, Scripting Guy! Answer Hello RS,

Microsoft Scripting Guy, Ed Wilson, is here. Things are certainly beginning to get crazy. In addition to all of the normal end-of-the-year things going on around here, in addition to the middle of a product ship cycle, we are entering “conference season.” This morning, the Scripting Wife and I (along with a hitchhiker from the Charlotte Windows PowerShell User Group) load up the car and head to Atlanta, Georgia for TechStravaganza. We have the speaker’s dinner this evening, and tomorrow we will be flat out all day as the event kicks off. It will be a great day with one entire track devoted to Windows PowerShell. The following week, we head to Florida for a SQL Saturday, Microsoft TechEd, and IT Pro Camp. In fact, our Florida road trip begins with the monthly meeting of the Charlotte Windows PowerShell User Group (we actually leave for our trip from the group meeting). If you find all this a bit confusing, I do too. That is why I am glad we have the Scripting Community page, so I can keep track of everything.

Note   This is the fourth in a series of four Hey, Scripting Guy! blogs about using Windows PowerShell to facilitate security forensic analysis of a compromised computer system. The intent of the series is not to teach security forensics, but rather to illustrate how Windows PowerShell could be utilized to assist in such an inquiry. The first blog discussed using Windows PowerShell to capture and to analyze process and service information.  The second blog talked about using Windows PowerShell to save event logs in XML format and perform offline analysis. The third blog talked about computing MD5 hashes for files in a folder.

The easy way to spot a change

It is extremely easy to spot a changed file in a folder by making a simple addition to the technique discussed yesterday. In fact, it does not require writing a script. The trick is to use the Compare-Object cmdlet. In the image that follows, two folders reside beside one another. The Ref folder contains all original files and folders. The Changed folder contains the same content, with a minor addition made to the a.txt file.

Image of menus

After you import the PSCX, use the Compare-Object cmdlet to compare the hashes of the c:\ref folder with the hashes of the c:\changed folder. The basic command to compute the hashes of the files in each folder was discussed in yesterday’s blog. The chief difference here is the addition of the Compare-Object cmdlet. The command (a single logical command) is shown here.

PS C:\> Compare-Object -ReferenceObject (dir c:\ref -Recurse | Where-Object {!$_.psiscontainer } | get-hash) -DifferenceObject (dir c:\changed -Recurse | Where-Object {!$_.psiscontainer } | get-hash)

The command and the associated output are shown here.

Image of command output

The command works because the Compare-Object cmdlet knows how to compare objects, and because the two Get-Hash commands return objects. The arrows indicate which object contains the changed objects. The first one exists only in the Difference object, and the second one only exists in the Reference object.

Find the changed file

Using the information from the previous command, I create a simple filter to return more information about the changed file. The easy way to do this is to highlight the hash, and place it in a Where-Object command (the ? is an alias for Where-Object). I know from yesterday’s blog, that the property containing the MD5 hash is called hashstring, and therefore, that is the property I look for. The command is shown here.

PS C:\> dir c:\changed -Recurse | Where-Object {!$_.psiscontainer } | get-hash | ? { $_.hashstring -match 'DE1278022BF9A1A6CB6AAC0E5BEE1C5B'}

The command and the output from the command are shown in the image that follows.

Image of command output

Finding the differences in the files

I use essentially the same commands to find the differences between the two files. First, I make sure that I know the reference file that changed. Here is the command that I use for that:

PS C:\> dir c:\ref -Recurse | Where-Object {!$_.psiscontainer } | get-hash | ? { $_.hashstring -match '32B72AF6C2FF057E7C63C715449BFB6A'}

When I have ensured that it is, in fact, the a.txt file that has changed between the reference folder and the changed folder, I again use the Compare-Object cmdlet to compare the content of the two files. Here is the command I use to compare the two files:

PS C:\> Compare-Object -ReferenceObject (Get-Content C:\Ref\a.txt) -DifferenceObject (Get-Content C:\Changed\a.txt)

The image that follows illustrates the commands and the output associated with these commands.

Image of command output

RS, that is all there is to finding modifications to files in folders when you have a reference folder. Join me tomorrow for more cool stuff in the world of Windows PowerShell.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Modify File Access Time Stamps


Summary: Learn how to use Windows PowerShell to modify file creation, modification, and access time stamps.

Hey, Scripting Guy! Question Hey, Scripting Guy! There are times that I would love to be able to manipulate file time stamps. I am talking about when they are created, changed, and accessed. I used to have a utility that did this for me on another operating system, but I have not been able to find something that will work with Windows PowerShell. Do you know of anything?

—TK

Hey, Scripting Guy! Answer Hello TK,

Microsoft Scripting Guy, Ed Wilson, is here. When I was writing all of the scripts for the Windows 7 Resource Kit, just before I uploaded everything to Mitch Tulloch, I would change the time stamps on all of the scripts. In this way, they all had the same time stamp, and it made for a simple type of version control. It was a simple command, and therefore, I did not write a script or function to do this.

Changing file attributes

The key to changing file attributes is two-fold. First, you must have permissions, and second, you need to realize that the attributes themselves are Read/Write. This second part is easy. Use the Get-Member cmdlet as shown here.

PS C:\> Get-Item C:\Changed\a.ps1 | gm -Name *time

   TypeName: System.IO.FileInfo

 

Name           MemberType Definition

----           ---------- ----------

CreationTime   Property   System.DateTime CreationTime {get;set;}

LastAccessTime Property   System.DateTime LastAccessTime {get;set;}

LastWriteTime  Property   System.DateTime LastWriteTime {get;set;}

As shown here, three properties end with the word Time. In addition, all three properties appear as get;set, meaning that the values are both retrievable and settable. To assign a new value to an attribute, you only need to make a straightforward value assignment. In the code that is shown here, I use the Get-Item cmdlet to retrieve basic information about a text file named a.txt.

PS C:\> Get-Item C:\fso\a.txt

    Directory: C:\fso

 

Mode                LastWriteTime     Length Name

----                -------------     ------ ----

-a---         6/12/2007   1:55 PM       3502 a.txt

One way to change the LastWriteTime property is to store the FileInfo object in a variable, and then use the assignment operator to assign a new value to the property. In the code that follows, the Get-Item cmdlet retrieves the FileInfo object for the a.txt text file. Then I assign a new value to the LastWriteTime property. The new value is the current date and time retrieved via the Get-Date cmdlet. Finally, the basic properties of the file display.

PS C:\> $a = Get-Item C:\fso\a.txt

PS C:\> $a.LastWriteTime = (get-date)

PS C:\> Get-Item c:\fso\a.txt

    Directory: C:\fso

Mode                LastWriteTime     Length Name

----                -------------     ------ ----

-a---         5/31/2012  10:14 AM       3502 a.txt

Creating a function

To simplify the process of setting file time stamps, I created the following function. It accepts an array of file paths, and uses the current date and time for the new values. The Path parameter is a mandatory parameter. This portion of the function is shown here:

Param (

    [Parameter(mandatory=$true)]

    [string[]]$path,

    [datetime]$date = (Get-Date))

The main portion of the function uses the Get-ChildItem cmdlet to retrieve all files and folders in the current path. It does not use the Recurse switch parameter, but you could add it if you want. For my purposes, I do not want recursion, so I left the switch off. Next, the objects pass to the Foreach-Object cmdlet, and the three time stamp properties change to the new value. Because the new date uses the [datetime] constraint, any value that Windows PowerShell interprets as a date/time value is acceptable. For example, the following command works on my system because Windows PowerShell is able to create a date from 7/1/11.

Set-FileTimeStamps -path C:\Ref -date 7/1/11

The complete Set-FileTimeStamps function is shown here:

Set-FileTimeStamps function

Function Set-FileTimeStamps

{

 Param (

    [Parameter(mandatory=$true)]

    [string[]]$path,

    [datetime]$date = (Get-Date))

    Get-ChildItem -Path $path |

    ForEach-Object {

     $_.CreationTime = $date

     $_.LastAccessTime = $date

     $_.LastWriteTime = $date }

} #end function Set-FileTimeStamps

TK, that is all there is to using Windows PowerShell to modify time stamps on files.  Join me tomorrow when we have a guest blog from Mike Robbins that talks about using Windows PowerShell with backups. It is a cool blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Weekend Scripter: Managing Symantec Backup Exec 2012 with PowerShell


Summary: Guest blogger, Mike F. Robbins, shows how to use Windows PowerShell to work with Backup Exec.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a new guest blogger, Mike F. Robbins. In this blog, Mike illustrates Windows PowerShell techniques for working with Symantec’s Backup Exec product.

Photo of Mike F Robbins

Mike F. Robbins is an MCITP | Windows PowerShell enthusiast | IT Pro | senior systems engineer who has worked on Windows Server, Hyper-V, SQL Server, Exchange Server, SharePoint, Active Directory, and EqualLogic storage area networks. He has over eighteen years of professional experience providing enterprise computing solutions for educational, financial, healthcare, and manufacturing customers.

Blog: http://mikefrobbins.com

Twitter: @mikefrobbins

For those of us who use Symantec Backup Exec in our datacenters, there has recently been a revolutionary breakthrough: the 2012 version adds Windows PowerShell support via a BEMCLI PowerShell module.

This blog is not meant to be a deep dive into Windows PowerShell or Backup Exec. I am going to walk you through how to perform some basic Backup Exec tasks with Windows PowerShell to give you an idea about how easy it is to manage without a GUI. You will see me pipe the output of several commands to the Select-Object cmdlet and others to the Out-Null cmdlet to reduce the number of items that are returned or to eliminate the output altogether. By default, many of these cmdlets return a lot of items, which makes them output a list instead of a table.

Run an inventory to discover what backup tapes are in the tape drives:

Get-BETapeDriveDevice |

Submit-BEInventoryJob |

select Name, JobType, Schedule, Storage |

ft -auto

Image of command output

When the inventory completes, use the Get-BETapeDriveDevice cmdlet to retrieve the name of the backup tape in each tape drive. This cmdlet doesn’t return the media (tape) name by default.

Get-BETapeDriveDevice |

select Name, Media |

ft -auto

Image of command output

Perform a quick erase on the backup tape in each of the tape drives:

Get-BETapeDriveDevice |

Submit-BEEraseMediaJob |

select Name, JobType, Status, Schedule |

ft -auto

Image of command output

The following command starts both of the overwrite jobs that I’ve defined, which overwrites the backup tape in each tape drive. The Start-BEJob cmdlet doesn’t support wildcard characters, but you can use them with the Get-BEJob cmdlet and then pipe that cmdlet to Start-BEJob.

Get-BEJob -Name "o*" |

Start-BEJob |

Out-Null

Get a list of the backup jobs that failed with a status of error in the past 12 hours:

Get-BEJobHistory -JobStatus Error -FromStartTime (Get-Date).AddHours(-12) |

ft -auto

Image of command output

The Help that is provided with the cmdlets in this module is very thorough. All of the valid values for parameters such as the JobStatus parameter that I used in the previous command are listed in the Help:

help Get-BEJobHistory -Parameter JobStatus

Image of command output

Re-run the backup jobs that failed with a status of error in the past 12 hours:

Get-BEJob -Name (Get-BEJobHistory -JobStatus Error -FromStartTime (Get-Date).AddHours(-12) |

select -expand name) |

Start-BEJob |

Out-Null

Image of command output

The Get-BEActiveJobDetail cmdlet returns a list of the backup jobs that are currently active (running), but I prefer to use the Get-BEJob cmdlet for this. I’ve included the jobs that have a status of “Ready”, which are waiting for a storage (backup) device to become available. If these backup jobs were being backed up to the tape drives, the storage column would contain the tape drive name.

Get-BEJob -Status "Active", "Ready" |

select Storage, Name, JobType, Status |

ft -auto

Image of command output

Cancel all of the active backup jobs:

Get-BEJob -Status "Active" |

Stop-BEJob |

ft –auto

Image of command output

Eject the backup tape from each of the tape drives:

Get-BETapeDriveDevice |

Submit-BEEjectMediaJob |

Out-Null

Image of command output

If you experience any issues getting the BEMCLI PowerShell module up and running, see the following blog that I wrote about a month ago; it covers a few issues I ran into:

Symantec Backup Exec 2012 Adds PowerShell Support!

Want to learn more about the Symantec Backup Exec BEMCLI PowerShell module? Download the Help file:

Backup Exec 2012 Management Command Line Interface (BEMCLI) Documentation

Windows PowerShell is quickly becoming an essential skill for IT Pros and a required product feature for IT vendors. It’s something that can be used to manage almost everything in your datacenter from a backup product (as shown in this blog) to a storage area network.

The script for this blog can be seen and downloaded from the Script Repository.

~Mike

Thank you, Mike, for sharing your blog and your time. It is cool to see how to use basic Windows PowerShell techniques while working with other products.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Lessons Learned from the 2012 Advanced Scripting Games


Summary: The winner of the advanced division of the 2012 Windows PowerShell Scripting Games gives tips for success.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a guest blogger, Rohn Edwards, the winner of the Advanced category in the 2012 Scripting Games. Rohn and Lido Paglia, winner of the Beginner category, will be joining us in Orlando next week. As part of winning the 2012 Scripting Games, they won passes to Microsoft TechEd 2012. Today Rohn is going to talk about lessons that he learned in the 2012 Scripting Games.

Rohn Edwards has been a system administrator since 2006. He primarily works on Windows operating systems. A lot of his work involves automated operating system and software deployment via Microsoft System Center Configuration Manager. He started learning Windows PowerShell about a year ago when he realized that it can do things that are not even possible in VBScript, and he has not looked back since.

Take it away, Rohn...

The 2012 Scripting Games have come and gone, and I’m very happy that I participated in them this year. I am not going to even try to mention every single thing that I picked up, but I do want to talk about a few of the biggest lessons that I learned.

Participation is worth it

I think one of the biggest things I learned is that the Scripting Games are definitely worth entering, no matter your current scripting level. As this year’s games came to a close, one thing that I repeatedly found myself thinking about was how I should have participated last year. At that point, I did not know nearly as much about Windows PowerShell as I did before the games this year. I was able to talk myself into thinking that I would not do very well in the rankings, and between work and family obligations, I wouldn’t have enough time to participate anyway.

After this year’s games, I now know two things about the games that I did not know last year:

  • The real reason for participating is to learn, and, even if you think you do not have time to participate, you should try anyway. Regardless of where you place in the final rankings, if you take the time to write and submit entries to at least some of the events, you will gain invaluable practice and experience with one of the most powerful tools the Windows operating systems have ever seen.
  • If you truly don’t have time to submit an entry for each event, you haven’t lost out on anything because the games are free to enter.

By not participating last year, I feel like I robbed myself of an incredible learning opportunity. The ten events, along with the expert commentaries and expert judges providing feedback on your scripts, cram months’ worth of scripting experience into just a few weeks’ time. So, if you’re reading this and wondering if the games are worth your time, the answer is an unequivocal, “Yes”!

Comment-based Help is ridiculously simple

Before starting the games, I knew that I could define Help by adding specially formatted comments at the beginning or end of functions. I had not, however, used that feature of Windows PowerShell. Before starting the games, I read the tips that Ed and some of the guest bloggers published on the blog. I kept seeing comment-based Help mentioned, so I assumed that every one of my functions in my entries needed to have it added. I am glad that I did add Help to each of my functions because it helped me realize how incredibly simple it is to do. Everyone knows that good commenting is a must if you want to reuse any of the code that you write. Before the games this year, I provided a block comment at the beginning of a function or script that gave a brief overview of its purpose, what it took as input, and what it gave as output. That helped when someone was reading the code, but it didn’t provide any way for a user to understand how to use it. Instead of the simple block comment, now I simply provide comment-based Help. If someone comes across the comment in the code, they can read it there, and anyone can read the same content by using Get-Help. If not for the games, I probably would have continued to ignore the comment-based Help system for months.
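As a sketch of how little is required (the function name and body here are hypothetical examples, not taken from any contest entry), comment-based Help is nothing more than a specially formatted comment block placed inside the function:

```powershell
function Get-FileCount {
    <#
    .SYNOPSIS
        Returns the number of files (not folders) in a folder.
    .PARAMETER Path
        The folder to examine.
    .EXAMPLE
        Get-FileCount -Path C:\fso
    #>
    param([Parameter(Mandatory=$true)][string]$Path)
    # Count only files by filtering out containers (directories)
    @(Get-ChildItem -Path $Path | Where-Object { -not $_.PSIsContainer }).Count
}
```

After the function is loaded, Get-Help Get-FileCount -Full renders the synopsis, parameter description, and example just as it would for a compiled cmdlet.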

Splatting

This may be one of my favorite Windows PowerShell features. A few weeks ago, I would have said that it was still the newness of it affecting my judgment, but I still feel the same about it weeks after learning it and using it all the time. What is splatting? It is a way to pass parameters to another function or cmdlet by using a hash table. Here is an example of using Get-Service to get the number of services with a display name that starts with “Microsoft”:

Image of command output
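The screenshot is not reproduced here; reconstructed from the description that follows, the command it showed would look roughly like this (the ComputerName value is an assumption, and Get-Service supports -ComputerName only in Windows PowerShell, not in PowerShell 7 and later):

```powershell
# Hash table of parameters, as described in the surrounding text
$Parameters = @{
    ComputerName = '.'                 # assumed; substitute your own computer name
    DisplayName  = 'Microsoft*'
    ErrorAction  = 'SilentlyContinue'
}

# Splatting: pass the whole hash table by using @ instead of $
Get-Service @Parameters | Measure-Object

# The equivalent call without splatting
Get-Service -ComputerName '.' -DisplayName 'Microsoft*' -ErrorAction SilentlyContinue |
    Measure-Object
```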

In the previous example, I created a hash table called Parameters with three entries: ComputerName, DisplayName, and ErrorAction. I then “splatted” that hash table to the cmdlet and measured the number of objects returned. Next, I made the same call to Get-Service without splatting. Now, why is the first call better than the second? There are several reasons, but I’ll only list a few:

  1. It makes your code easier to read, especially if you are passing lots of parameters.
  2. It makes it easier to make a single call to a function or cmdlet, even when special logic is needed to avoid certain parameters being passed. For example, advanced event 2 required a function that would get service information for a specified computer or set of computers, and you had to support using alternate credentials. Because Get-Service does not support alternate credentials, most entrants chose to use the WMI class Win32_Service. Get-WMIObject (gwmi) supports using alternate credentials, but only when you use them against a remote machine; it doesn’t support alternate credentials on the local computer. 
    With splatting, you can create a hash table that contains the alternate credentials and any other parameters you want to pass, and before calling gwmi, you can check to see if you are going to run it against the local machine. If you are, you can remove the alternate credentials from the hash table. This lets you make a single call to gwmi inside of your code instead of having logic that will call it with a different set of hard-coded parameters, depending on what you need to do. I did not know about splatting when I wrote the bulk of my entry for Advanced Event 2, so my method was not nearly as elegant as it could have been if I had used it.
  3. It makes creating wrapper functions very easy. That is actually how I found out about this feature. I had finished my main function for Advanced Event 2, but I wanted a wrapper function that I could create that would call my main function so that the output could be written to a file. I needed a way to forward all the parameters that were sent to my wrapper function to the underlying function that it was calling. I did not want to write code to check for each parameter and send it along if it was present. After a few minutes searching online, I came across a blog post that demonstrated splatting. If you’re wondering, all you have to do to create a wrapper function is make a function that takes the same parameters as the function that you’re wrapping, and then call it and “splat” the auto-generated $PsBoundParameters hash table. If your wrapper function has parameters that the function it is wrapping does not have, all you have to do is make a copy of $PsBoundParameters and remove the unneeded parameters from the hash table.
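A minimal sketch of that wrapper pattern (the function names and parameters here are invented for illustration, not taken from the contest entry):

```powershell
function Get-Report {
    param($Name, $Count)
    '{0} appears {1} times' -f $Name, $Count
}

function Export-Report {
    # Wrapper: same parameters as Get-Report, plus a Path that Get-Report lacks
    param($Name, $Count, $Path)
    # Copy the auto-generated $PSBoundParameters so the original stays untouched
    $forward = @{}
    foreach ($entry in $PSBoundParameters.GetEnumerator()) {
        $forward[$entry.Key] = $entry.Value
    }
    $forward.Remove('Path')            # drop the parameter the inner function lacks
    Get-Report @forward | Set-Content -Path $Path
}
```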

RunSpacePools

This one blew my mind. I did not learn this myself while writing for any of the events; it was demonstrated in Expert Commentary: 2012 Scripting Games Advanced Event 2 by Boe Prox. His commentary pointed to a webcast by Dr. Tobias Weltner, Speeding up Windows PowerShell: Multithreading. So what is it, and why do I like it so much? Multithreading within Windows PowerShell gives you the ability to, among other things, run a command locally on your machine against lots of remote machines in a fraction of the time it would take to run that same command against the machines one at a time.

As long as you take thread safety into account, you should be able to use this feature to speed up any data parsing/processing that requires reading several files. PSJobs can do this to some extent, but RunSpacePools seem to offer more power, speed, and flexibility. The one area where I think PSJobs might beat RunSpacePools is simplicity. As long as you do not try to implement a limit to the number of threads you can run concurrently, it is much easier to follow what is going on when using PSJobs. After checking out the examples by Dr. Tobias Weltner and Boe Prox, and understanding what is going on, I think that RunSpacePools has earned a special place in my arsenal of Windows PowerShell tools.
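A minimal runspace pool, in the spirit of the examples by Dr. Weltner and Boe Prox, might look like the following sketch; the computer names are placeholders:

```powershell
$computers = 'SERVER1','SERVER2','SERVER3'   # placeholder names
$work = { param($name) Test-Connection -ComputerName $name -Count 1 -Quiet }

# Open a pool capped at five concurrent runspaces (threads).
$pool = [runspacefactory]::CreateRunspacePool(1, 5)
$pool.Open()

# Queue one PowerShell pipeline per computer; BeginInvoke returns at once.
$jobs = foreach ($c in $computers) {
    $ps = [powershell]::Create().AddScript($work).AddArgument($c)
    $ps.RunspacePool = $pool
    New-Object -TypeName PSObject -Property @{
        Pipe = $ps; Handle = $ps.BeginInvoke()
    }
}

# Harvest results as each pipeline completes, then clean up.
foreach ($j in $jobs) {
    $j.Pipe.EndInvoke($j.Handle)
    $j.Pipe.Dispose()
}
$pool.Close()
```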

So, was participating in the 2012 Scripting Games worth it? Yes! Would I still be glad that I participated if I had not won any prizes? Absolutely! I would recommend participating in the Scripting Games to anyone, whether they are completely new to Windows PowerShell or an advanced pro. The Games provide a chance to learn new techniques and to reinforce existing Windows PowerShell knowledge. The daily prize drawings and the grand prizes are just icing on the cake!

~Rohn

Thanks, Rohn! I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerShell in Depth: Part 1


Summary: In today’s blog, we have an excerpt from an upcoming book by Don Jones, Richard Siddaway, and Jeffery Hicks.

Microsoft Scripting Guy, Ed Wilson, is here. Today and tomorrow we have a special treat for you. Candace Gillhoolley, from Manning Publications, sent me an excerpt from the upcoming book, PowerShell in Depth, which is written by Don Jones, Richard Siddaway, and Jeffery Hicks. 

Image of book cover

Here is Part 1...

There’s definitely a trick to creating reports with Windows PowerShell. Windows PowerShell isn’t at its best when it’s forced to work with text; objects are where it excels. This blog, based on Chapter 33 in PowerShell in Depth, focuses on a technique that can produce a nicely formatted HTML report, suitable for emailing to a boss or colleague.

Working with HTML Fragments and Files

Let us begin this blog with an example of what we think is a poor report-generating technique. We see code like this—sadly, more often than we would like. Most of the time, the IT Pro does not know any better and is simply perpetuating techniques from other languages such as VBScript. List 1, which we devoutly hope you will never run yourself, is a very common approach that you will see less-informed administrators take.

List 1: A poorly designed inventory report

param ($computername)

Write-Host '------- COMPUTER INFORMATION -------'

Write-Host "Computer Name: $computername"

 

$os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computername

Write-Host "   OS Version: $($os.version)"

Write-Host "     OS Build: $($os.buildnumber)"

Write-Host " Service Pack: $($os.servicepackmajorversion)"

 

$cs = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $computername

Write-Host "          RAM: $($cs.totalphysicalmemory)"

Write-Host " Manufacturer: $($cs.manufacturer)"

Write-Host "        Model: $($cs.model)"

Write-Host "   Processors: $($cs.numberofprocessors)"

 

$bios = Get-WmiObject -Class Win32_BIOS -ComputerName $computername

Write-Host "BIOS Serial: $($bios.serialnumber)"

 

Write-Host ''

Write-Host '------- DISK INFORMATION -------'

Get-WmiObject -Class Win32_LogicalDisk -Comp $computername -Filt 'drivetype=3' |

Select-Object @{n='Drive';e={$_.DeviceID}},

              @{n='Size(GB)';e={$_.Size / 1GB -as [int]}},

              @{n='FreeSpace(GB)';e={$_.freespace / 1GB -as [int]}} |

Format-Table -AutoSize

This produces a text-based inventory report something like the one shown here:

Image of command output

It does the job, we suppose, but Don has a saying that involves angry deities and puppies, which he utters whenever he sees a script that outputs pure text like this. First of all, this script can only produce output on the screen because it’s using Write-Host. In most cases, if you find yourself using only Write-Host, you are probably doing it wrong. Wouldn’t it be nice to have the option of putting this information into a file or creating an HTML page? Of course, you could achieve that by changing all of the Write-Host commands to Write-Output, but you still wouldn’t be doing things the right way.

There are a lot of better ways that you could produce such a report and that’s what this blog is all about. First, we would suggest building a function for each block of output that you want to produce, and having that function produce a single object that contains all of the information you need. The more you can modularize, the more you can reuse those blocks of code. Doing so would make that data available for other purposes, not only for your report. In our example of a poorly written report, the first section, Computer Information, would be implemented by some function that you would write. The Disk Information section is only sharing information from one source, so it’s actually not that bad off, but all of those Write commands simply have to go.

The trick to our technique lies in the fact that the ConvertTo-HTML cmdlet in Windows PowerShell can be used in two ways, which you’ll see if you examine its Help file. The first way produces a complete HTML page, and the second only produces an HTML fragment. That fragment is a table with whatever data you have fed the cmdlet. We’re going to produce each section of our report as a fragment, and then use the cmdlet to produce a complete HTML page that contains all of those fragments.

Getting the information

We will start by ensuring that we can get whatever data we need formed into an object. We will need one kind of object for each section of our report, so if we’re sticking with Computer Information and Disk Information, that’s two objects.

Note   For brevity and clarity, we are going to omit error handling and other niceties in this example. We would add those in a real-world environment.

Get-WmiObject by itself is capable of producing a single object that has all of the disk information we want, so we simply need to create a function to assemble the computer information. Here it is:

function Get-CSInfo {

  param($computername)

  $os = Get-WmiObject -Class Win32_OperatingSystem `

  -ComputerName $computername

 

  $cs = Get-WmiObject -Class Win32_ComputerSystem `

  -ComputerName $computername

 

  $bios = Get-WmiObject -Class Win32_BIOS `

  -ComputerName $computername

 

  $props = @{'ComputerName'=$computername

             'OS Version'=$os.version

             'OS Build'=$os.buildnumber

             'Service Pack'=$os.servicepackmajorversion

             'RAM'=$cs.totalphysicalmemory

             'Processors'=$cs.numberofprocessors

             'BIOS Serial'=$bios.serialnumber}

 

  $obj = New-Object -TypeName PSObject -Property $props

  Write-Output $obj

}

The function uses the Get-WMIObject cmdlet to retrieve information from three WMI classes on the specified computer. We always want to write objects to the pipeline, so we’re using New-Object to write a custom object to the pipeline by using a hash table of properties culled from the three WMI classes. Normally, we prefer property names to not have any spaces, but, because we’re going to be using this in a larger reporting context, we’ll bend the rules a bit.
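One caveat worth knowing: a plain hash table does not guarantee property order, so the report's rows can come out shuffled. On PowerShell 3.0 and later (newer than the version this excerpt targets), the [ordered] accelerator fixes that:

```powershell
# [ordered] preserves the order in which the keys are typed, so the
# HTML list renders the properties in a predictable sequence.
$props = [ordered]@{
    'ComputerName' = $env:COMPUTERNAME
    'OS Version'   = [System.Environment]::OSVersion.VersionString
}
New-Object -TypeName PSObject -Property $props
```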

Producing an HTML fragment

Now we can use our newly-created Get-CSInfo function to create an HTML fragment.

$frag1 = Get-CSInfo –computername SERVER2 |

ConvertTo-Html -As LIST -Fragment -PreContent '<h2>Computer Info</h2>' |

Out-String

This little trick took us a while to figure out, so it is worth examining.

  • We are saving the final HTML fragment into a variable named $frag1. That will let us capture the HTML content and later insert it into the final file.
  • We are running Get-CSInfo and giving it the computer name we want to inventory. For right now, we are hardcoding the SERVER2 computer name. We will change that to a parameter a bit later.
  • We are asking ConvertTo-HTML to display this information in a vertical list, rather than in a horizontal table, which is what it would do by default. The list will mimic the layout from the old, bad-way-of-doing-things report.
  • We used the PreContent switch to add a heading to this section of the report. We added the <h2> HTML tags so that the heading will stand out a bit.

The whole thing—and this was the tricky part—is piped to Out-String. You see, ConvertTo-HTML puts a bunch of things into the pipeline—strings, collections of strings, all kinds of wacky stuff. All of that will cause problems later when we try to assemble the final HTML page, so we’re using Out-String to resolve everything into plain old strings.
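You can see the difference for yourself; this quick check (using the current process as sample data) shows the types before and after Out-String:

```powershell
# ConvertTo-Html emits a collection of strings, one per line of markup...
$fragment = Get-Process -Id $PID | ConvertTo-Html -Fragment
$fragment.GetType().FullName

# ...while Out-String flattens the collection into a single string.
$flat = Get-Process -Id $PID | ConvertTo-Html -Fragment | Out-String
$flat.GetType().FullName
```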

We can also produce the second fragment now. This is a bit easier because we don’t need to write our function first, but the HTML part will look substantially the same. In fact, the only real difference is that we are assembling our data in a table rather than as a list.

$frag2 = Get-WmiObject -Class Win32_LogicalDisk -Filter 'DriveType=3' `

         -ComputerName SERVER2 |

         Select-Object @{name='Drive';expression={$_.DeviceID}},

              @{name='Size(GB)';expression={$_.Size / 1GB -as [int]}},

              @{name='FreeSpace(GB)';expression={

              $_.freespace / 1GB -as [int]}} |

ConvertTo-Html -Fragment -PreContent '<h2>Disk Info</h2>' |

Out-String

We now have two HTML fragments, $frag1 and $frag2, so we are ready to assemble the final page. Join us tomorrow to see how this is accomplished.

~Don, Richard, and Jeffery

Thank you, Candace, Don, Richard, and Jeffery.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerShell in Depth: Part 2


Summary: In today’s blog, we continue our excerpt from the upcoming book by Don Jones, Richard Siddaway, and Jeffery Hicks.

Microsoft Scripting Guy, Ed Wilson, is here. Yesterday, we posted PowerShell in Depth: Part 1, an excerpt provided by Candace Gillhoolley from Manning Publications. The book, PowerShell in Depth, is written by Don Jones, Richard Siddaway, and Jeffery Hicks.  Today we have the conclusion to that blog.

Image of book cover

There’s definitely a trick to creating reports with Windows PowerShell. Windows PowerShell isn’t at its best when it’s forced to work with text; objects are where it excels. This blog, based on Chapter 33 from PowerShell in Depth, focuses on a technique that can produce a nicely formatted HTML report, suitable for emailing to a boss or colleague.

Assembling the final HTML page

Assembling the final page simply involves adding our two existing fragments, although we are also going to embed a style sheet. Using cascading style sheet (CSS) language is a bit beyond the scope of this blog, but this example will give you a basic idea of what it can do. This embedded style sheet lets us control the formatting of the HTML page, so that it looks a little nicer. If you would like a good tutorial and reference to CSS, check out this CSS Tutorial on the w3schools.com site.

$head = @'

<style>

body { background-color:#dddddd;

       font-family:Tahoma;

       font-size:12pt; }

td, th { border:1px solid black;

         border-collapse:collapse; }

th { color:white;

     background-color:black; }

table, tr, td, th { padding: 2px; margin: 0px }

table { margin-left:50px; }

</style>

'@

 

ConvertTo-HTML -head $head -PostContent $frag1,$frag2 `

-PreContent "<h1>Hardware Inventory for SERVER2</h1>"

We have put that style sheet into the $head variable by using a here string to type out the entire CSS syntax that we wanted. That gets passed to the Head parameter, our HTML fragments to the PostContent parameter, and we couldn’t resist adding a header for the whole page, where we’ve again hardcoded a computer name (SERVER2).

We saved the entire script as C:\Good.ps1, and ran it like this:

./good > Report.htm

That directs the output HTML to Report.htm. This HTML report consists of multiple HTML fragments, which is incredibly beautiful as shown here:

Image of report

Okay, maybe it is no work of art, but it is highly functional, and frankly, it looks better than the on-screen-only report we started with in yesterday’s blog. List 2 shows the completed script, where we’ve swapped out the hard-coded computer name for a script-wide parameter that defaults to the local host. Notice too that we’ve included the [CmdletBinding()] declaration at the top of the script, which enables the Verbose parameter. We have used Write-Verbose to document what each step of the script is doing.

List 2: An HTML inventory report script

<#

.DESCRIPTION

Retrieves inventory information and produces HTML

.EXAMPLE

./Good > Report.htm

.PARAMETER

The name of a computer to query. The default is the local computer.

#>

 

[CmdletBinding()]

param([string]$computername=$env:computername)

 

# function to get computer system info

function Get-CSInfo {

  param($computername)

  $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computername

  $cs = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $computername

  $bios = Get-WmiObject -Class Win32_BIOS -ComputerName $computername

  $props = @{'ComputerName'=$computername

             'OS Version'=$os.version

             'OS Build'=$os.buildnumber

             'Service Pack'=$os.servicepackmajorversion

             'RAM'=$cs.totalphysicalmemory

             'Processors'=$cs.numberofprocessors

             'BIOS Serial'=$bios.serialnumber}

 

  $obj = New-Object -TypeName PSObject -Property $props

  Write-Output $obj

}

 

Write-Verbose 'Producing computer system info fragment'

$frag1 = Get-CSInfo -computername $computername |

ConvertTo-Html -As LIST -Fragment -PreContent '<h2>Computer Info</h2>' |

Out-String

 

Write-Verbose 'Producing disk info fragment'

$frag2 = Get-WmiObject -Class Win32_LogicalDisk -Filter 'DriveType=3' `

         -ComputerName $computername |

Select-Object @{name='Drive';expression={$_.DeviceID}},

              @{name='Size(GB)';expression={$_.Size / 1GB -as [int]}},

        @{name='FreeSpace(GB)';expression={$_.freespace / 1GB -as [int]}} |

ConvertTo-Html -Fragment -PreContent '<h2>Disk Info</h2>' |

Out-String

 

Write-Verbose 'Defining CSS'

$head = @'

<style>

body { background-color:#dddddd;

       font-family:Tahoma;

       font-size:12pt; }

td, th { border:1px solid black;

         border-collapse:collapse; }

th { color:white;

     background-color:black; }

table, tr, td, th { padding: 2px; margin: 0px }

table { margin-left:50px; }

</style>

'@

 

Write-Verbose 'Producing final HTML'

Write-Verbose 'Pipe this output to a file to save it'

ConvertTo-HTML -head $head -PostContent $frag1,$frag2 `

-PreContent "<h1>Hardware Inventory for $ComputerName</h1>"

 

Now that’s a script you can build upon! Using the script is very easy.

 

PS C:\> $computer = 'SERVER01'

PS C:\> C:\Scripts\good.ps1 -computername $computer |

>> Out-File "$computer.html"

>> 

PS C:\> Invoke-Item "$computer.html"

The script runs, produces an output file for future reference, and displays the report. Keep in mind that our work to build the Get-CSInfo function is reusable. Because that function outputs an object, not only pure text, you could repurpose it in a variety of places where you might need the same information.
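For example, because Get-CSInfo emits an object rather than text, the very same function could feed a CSV file or a quick comparison table with no changes; the server names and output path below are placeholders:

```powershell
# Dot-source the script (or paste the function) first, then:

# CSV for a spreadsheet...
Get-CSInfo -computername $env:COMPUTERNAME |
    Export-Csv -Path .\inventory.csv -NoTypeInformation

# ...or a side-by-side table of several machines.
'SERVER01','SERVER02' |
    ForEach-Object { Get-CSInfo -computername $_ } |
    Format-Table ComputerName,'OS Version','Service Pack' -AutoSize
```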

To add to this report, you would:

  1. Write a command or function that generates a single kind of object, which contains all the information you need for a new report section.
  2. Use that object to produce an HTML fragment, and store it in a variable.
  3. Add that new variable to the list of variables in the script’s last command, thus adding the new HTML fragment to the final report.
  4. Sit back and relax.
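Following those steps to add, say, a running-services section would take only a few lines; the Win32_Service filter here is just an example:

```powershell
# Steps 1 and 2: build the new fragment and store it in a variable.
Write-Verbose 'Producing service info fragment'
$frag3 = Get-WmiObject -Class Win32_Service -Filter "State='Running'" `
         -ComputerName $computername |
    Select-Object -Property Name, StartMode, StartName |
    ConvertTo-Html -Fragment -PreContent '<h2>Running Services</h2>' |
    Out-String

# Step 3: add the new variable to the script's final command.
ConvertTo-Html -Head $head -PostContent $frag1,$frag2,$frag3 `
    -PreContent "<h1>Hardware Inventory for $ComputerName</h1>"
```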

Yes, this report is text. Ultimately, every report will be, because text is what humans read. The point of this one is that everything stays as Windows PowerShell-friendly objects until the last possible moment. We let Windows PowerShell, rather than our own fingers, format everything for us. The actual working bits of this script, which retrieve the information we need, could easily be copied, pasted, and used elsewhere for other purposes. That was not as easy to do with our original pure-text report because the actual working code was embedded with all of that formatted text.

Building reports is certainly a common need for administrators, and Windows PowerShell is well suited to the task. The trick, we feel, is to produce reports in a way that makes the reports’ functional code (the bits that retrieve information and so forth) somewhat distinct from the formatting- and the output-creation code. In fact, Windows PowerShell is generally capable of delivering great formatting with very little work on your part, as long as you work it the way it needs you to.

~Don, Richard, and Jeffery

Thank you, Candace, Don, Richard, and Jeffery.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Find Files Modified by Month and Year


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to find files modified by month and year.

Microsoft Scripting Guy, Ed Wilson, is here. I will admit it. I am not the best computer user in the world. In fact, when it comes to finding things on the Internet, the Scripting Wife often is faster than I am. In my office, often the Scripting Wife sits beside me and does whatever she does. At times, I will ask her to find something that I have wasted 15 minutes seeking, and she can find it almost immediately.

A case in point is Windows Search. I modified the indexing to include the full contents of my Windows PowerShell script files, and it works great. However, one thing I have not figured out is something that used to be very easy back in the Windows 95 days by using the old-fashioned Find utility: finding a file by date.

Here is the situation…

When the Scripting Wife and I were in Montreal, Canada last October, I remember writing some additional Windows PowerShell labs for the class. In fact, I spent each day teaching the class, the evenings with friends, and the nights writing new labs for the coming day’s class. After spending 15 minutes attempting to browse and search for these files, I finally gave up. I was unable to find my new lab files.

Use Windows PowerShell to find files by date

OK, so maybe I am not the world’s greatest computer user, but I know Windows PowerShell really well. The neat thing is that because I do know Windows PowerShell so well, I can compensate. For example, to find my missing lab files, I use the Get-ChildItem cmdlet and search my Data directory. In addition, although I cannot remember if the files have a .doc or the newer .docx file extension, it really does not matter. Because my Data folder is deeply nested, I need to do a recursive search. The following command returns all .doc and .docx files from the Data directory on my computer.

Get-ChildItem -Path C:\data -Recurse -Include *.doc,*.docx

The next thing is that I know I modified the file during the month of October in the year of 2011. The cool thing is that the LastWriteTime property is an instance of a System.DateTime object. The following script illustrates this.

PS C:\> Get-Item C:\fso\a.txt | gm -Name lastwritetime

   TypeName: System.IO.FileInfo

 

Name          MemberType Definition

----          ---------- ----------

LastWriteTime Property   System.DateTime LastWriteTime {get;set;}

Because the LastWriteTime property is an object, it means that it has a number of different properties. These properties are shown here.

PS C:\> (Get-Item C:\fso\a.txt).lastwritetime | gm -MemberType property

  

   TypeName: System.DateTime

Name        MemberType Definition

----        ---------- ----------

Date        Property   System.DateTime Date {get;}

Day         Property   System.Int32 Day {get;}

DayOfWeek   Property   System.DayOfWeek DayOfWeek {get;}

DayOfYear   Property   System.Int32 DayOfYear {get;}

Hour        Property   System.Int32 Hour {get;}

Kind        Property   System.DateTimeKind Kind {get;}

Millisecond Property   System.Int32 Millisecond {get;}

Minute      Property   System.Int32 Minute {get;}

Month       Property   System.Int32 Month {get;}

Second      Property   System.Int32 Second {get;}

Ticks       Property   System.Int64 Ticks {get;}

TimeOfDay   Property   System.TimeSpan TimeOfDay {get;}

Year        Property   System.Int32 Year {get;}

Based on the information detailed above, I can easily look at the month and the year that file modifications took place. Therefore, I need to use the Where-Object to examine the month and year properties of each file. This is a perfect place to use the “double-dotted” notation. The first dot returns a System.DateTime object. The second dot returns a specific property from that DateTime object. The following code first returns the month the file was last written, and the second example returns the year that the file was modified.

PS C:\> (Get-Item C:\fso\a.txt).lastwritetime.month

5

PS C:\> (Get-Item C:\fso\a.txt).lastwritetime.year

2012

Extract the specific properties to examine

By using the “double-dotted” technique that I discussed in the previous section, I can easily return only the files modified during October 2011. The trick is to use the $_ automatic variable to reference the current item in the Windows PowerShell pipeline as I pipe FileInfo objects from the Get-ChildItem cmdlet to the Where-Object cmdlet. In addition, because I need to find files modified in October, but I also need to find files that were modified in 2011, I need to use a compound Where-Object, and I need to use the –AND operator. This is because I want files that were modified in the month of October AND in the year of 2011. I could look for files that were either modified in the month of October OR were modified in the year of 2011, but that would not give me what I need. So here is the Where-Object I need to use:

Where-Object { $_.lastwritetime.month -eq 10 -AND $_.lastwritetime.year -eq 2011 }

The complete command is shown here:

Get-ChildItem -Path C:\data -Recurse -Include *.doc,*.docx |

Where-Object  { $_.lastwritetime.month -eq 10 -AND $_.lastwritetime.year -eq 2011 }

The command and the output associated with the command are shown in the image that follows.

Image of command output

The command is actually pretty fast. On my laptop, it takes a little more than six seconds to return the files I need. I used the following command to determine that bit of information:

PS C:\> measure-command {Get-ChildItem -Path C:\data -Recurse -Include *.doc,*.docx |

 ? { $_.lastwritetime.month -eq 10 -AND $_.lastwritetime.year -eq 2011 }}

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 6

Milliseconds      : 147

Ticks             : 61477502

TotalDays         : 7.11545162037037E-05

TotalHours        : 0.00170770838888889

TotalMinutes      : 0.102462503333333

TotalSeconds      : 6.1477502

TotalMilliseconds : 6147.7502

So how many files is that? Well, it is more than 46,000 files. The following command tells me that:

PS C:\> Get-ChildItem -Path C:\data -Recurse | Measure-Object

Count    : 46425

Average  :

Sum      :

Maximum  :

Minimum  :

Property :

So what is my point?

If you get really good with Windows PowerShell, you improve more than just your system admin skills. Personally, I use Windows PowerShell every day; and these days, I do not do that much actual system administration. I love the Windows Search tool, and I am certain it could help me find files that I modified in October of last year. But it actually took me less than a minute to whip out the command, and it worked the very first time I ran it. Not only that, but the command is pretty fast as well. So given that I can find stuff in less than a minute, it does not really pay for me to spend hours trying to find out how to search by “date modified.” Windows PowerShell makes it easy. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


An Insider’s Guide to PowerShell Module Information


Summary: Microsoft Scripting Guy, Ed Wilson, discusses some of the great resources for learning about creating and using Windows PowerShell modules.

Microsoft Scripting Guy, Ed Wilson, is here. The Florida trip is nearly upon us. The Scripting Wife and I are busy packing and looking forward to a great week. We kick off the week in grand style—a Script Club meeting of the Charlotte Windows PowerShell User Group. We will be meeting in AP2 on the Charlotte Microsoft campus at 6:00 PM. If you are in the area we would love to see you. We then head to Pensacola for SQL Saturday, to Orlando for Microsoft TechEd, and then to Jacksonville for the Jacksonville IT Pro Camp. There are still seats available for the Jacksonville IT Pro Camp, so you should definitely check it out. I will be presenting in the afternoon, and there are some most excellent speakers who I am looking forward to hearing! I lived in Jacksonville for a couple of years, and there are some great things to do in the area. It is a great way to cap off an awesome week of learning following TechEd.

When I was speaking at the sold-out Atlanta TechStravaganza last weekend, during the question-and-answer portions of the presentations, I often answered a question with, “I have written an entire week of Hey, Scripting Guy! Blog posts about that subject.” At one point, Mark Schill interjected, “Ed’s written an entire week of blogs about nearly everything.”

Although it is not exactly true, there are 1,308 Hey, Scripting Guy! Blog posts about Windows PowerShell topics. One problem with having that much content is that people often do not know where to start. It used to be that you could use the Windows PowerShell tag, then select the Getting Started tag and begin reading. But now, even that filter returns nearly 300 Hey, Scripting Guy! Blog posts. At an average word count of 1500 words per blog, that is 450,000 words (the equivalent of two books the size of my Windows PowerShell 2.0 Best Practices). The results of the Windows PowerShell + Getting Started tags are shown in the image that follows.

It occurred to me that I need to create a guide to assist you to quickly find the Hey, Scripting Guy! Blog content you need. At a very high level, you should use the Study Guides from the Scripting Games, because they can help you find summary blogs about specific topics:

All about modules

So, without further ado, here is a guide to Hey, Scripting Guy! Blog posts that talk about modules.

Beginning with modules

I began 2010 with Module Week. This week incorporated some text from my book, Windows PowerShell 2.0 Best Practices (published by Microsoft Press).

Tell Me About Modules in Windows PowerShell 2.0   Provides an introduction about using the Get-Module and the Import-Module cmdlets. The blog also discusses how to locate and load modules.

How Can I Install Windows PowerShell Modules on Multiple Users' Computers?   Talks about what is required to install a Windows PowerShell module. It includes the Copy-Modules script that automates this process. The blog also talks about the psModulePath environmental variable.

How Do I Work with Windows PowerShell Module Paths?   Talks about creating a $modulepath variable and creating a Windows PowerShell drive to ease module management.

Can You Give All the Steps for Creating, Installing, and Using Windows PowerShell Modules?   (After you have read the three previous blogs, you will want to review this one.) This blog explores creating a function, adding it to a module, and then importing the module. This is a great summary of the process. In fact, at the end of the blog, the seven steps to create and use modules are detailed.

Simplify Desktop Configuration by Using a Shared PowerShell Module   Discusses placing a specific module in a shared location to simplify access. It uses the ConversionModuleV6 module as an example.

Don’t Write Scripts, Write PowerShell Modules   Discusses how a Windows PowerShell module is more flexible and more powerful than a simple Windows PowerShell script. I provide an example of combining the functionality of several Windows PowerShell scripts into a single module. Specifically, I am discussing deploying and configuring Windows 7. 

Combine PowerShell Modules to Avoid Writing Scripts   Explores using Windows PowerShell modules to save time and to reduce scripting overhead. Specifically, I talk about using the Windows 7 Library module from CodePlex as an example.

Don’t Write WMI Scripts, Use a PowerShell Module   Discusses using a module that is written by Windows PowerShell MVP, Richard Siddaway, to reduce the need to write WMI scripts. There is no need to continue writing the same old scripts over and over again, when a perfectly good WMI module has already been written.
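As a small taste of what those blogs cover, the basic create, install, and import cycle can be sketched like this; the module name is an invention for the example:

```powershell
# A module is a folder on $env:PSModulePath containing a .psm1 file of
# the same name. The first path entry is usually the per-user location.
$name   = 'DemoModule'
$folder = Join-Path -Path (($env:PSModulePath -split ';')[0]) -ChildPath $name
New-Item -Path $folder -ItemType Directory -Force | Out-Null

# Write one function into the module file.
@'
function Get-Greeting { param($Who = 'world') "Hello, $Who" }
'@ | Out-File -FilePath (Join-Path -Path $folder -ChildPath "$name.psm1")

# Import by name (no path needed, because it is on the module path).
Import-Module -Name $name
Get-Greeting -Who 'Scripting Guy'
```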

Creating a module: Practical examples

One of the best ways to learn about anything is to watch an expert perform the task. This is true whether one is learning how to create hand-cut dovetails or learning to create Windows PowerShell modules. The following sections provide links to several module projects. Begin with the unit conversion module, then proceed to the local user management module, and follow up with the WMI module. Even if you are not fascinated by the technology used in a specific example, the techniques are valid. It is just as if you were specifically interested in cutting dovetails in cherry wood: you can still learn a lot about dovetails by watching someone cut them in oak.

The unit conversion module

In February 2010, I moved the Hey, Scripting Guy! Blog from five days a week to seven days a week. In honor of that occasion, I created a six-part series that begins with a Windows PowerShell script containing a collection of functions, and ends with an extremely useful module that performs dozens of unit conversions. I continue to use the Scripting Guys conversion module to this day. After you understand the basics of modules, you will want to go over this six-part module tutorial.

Conversion Module, Part 1   I take a Windows PowerShell script that contains a number of functions, and add them to a newly created module. I make basic changes to the functions, and then I install and test the newly created module. This blog is a great overview of the process of creating a module from an existing function.

Conversion Module, Part 2   I begin by adding descriptions to each of the functions in the module. I also add input and output parameters to the comment-based Help.

Conversion Module, Part 3   I create a new function called ConvertTo-Pounds. I discuss the design considerations for creating a function that accepts multiple inputs and multiple parameters.

Conversion Module, Part 4   I add a new function called ConvertTo-Liters. The blog discusses static methods from the System.Math class, in addition to the problem of working with multiple types of input.

Conversion Module, Part 5   I convert the output from the various functions so that they return a custom object instead of simply returning text. This change permits further work with the output without the requirement to parse text.

Conversion Module, Part 6   I talk about adding custom aliases for each of the functions in the module. In addition, I discuss using the Export-ModuleMember cmdlet.

The local user management module

The local user management module was developed and discussed in a series of four Hey, Scripting Guy! Blog posts:

The WMI module

The HSG WMI module was developed over six Hey, Scripting Guy! Blog posts. In each blog, I add additional functionality, and refinements.

A PowerShell WMI Helper Module Described   Discusses the creation and the use of a WMI helper module, including several functions such as Get-WmiClasses with qualifiers, Get-WmiClassMethods, and Get-WmiClassProperties.

Use PowerShell to Easily Find the Key Property of a WMI Class    Adds a function called Get-WmiKey that is useful when one needs to use the WMI type accelerator.

Use a PowerShell Function to Get WMI Key Property Values   Adds a function called Get-WmiKeyValue, which returns the values of property keys from a specified WMI class.

Easily Filter Empty WMI Properties in PowerShell   Adds a filter called HasWmiValue that is useful because it filters out property values that are empty or null. This makes exploring detailed WMI data very easy.

Query WMI Classes by Using PowerShell and Wildcard Patterns   The fifth version of the HSG WMI module adds a very cool function that allows you to use wild card characters to find a WMI class and to automatically query the classes that match that wild card pattern. This is an exceptionally powerful way to explore WMI data and to find the appropriate WMI class to solve a specific need.

Modify PowerShell Comment-Based Help to Display Function Use   Adds aliases and other clean-up details to the WMI module.
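To give a flavor of these functions, the idea behind the HasWmiValue filter can be sketched in a few lines. This is a simplified illustration only, not the actual code from the HSG WMI module:

```powershell
# Keep only the properties of a WMI instance that actually contain a
# value, which makes exploring verbose WMI classes much easier.
filter HasWmiValue
{
    $_.Properties |
        Where-Object { $_.Value } |
        Select-Object -Property Name, Value
}

Get-WmiObject -Class Win32_Bios | HasWmiValue
```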

Module details

Checking for Module Dependencies in Windows PowerShell   When using modules in a script, or even from the Windows PowerShell console, you need to know if the module exists. If the module is not available, you need to notify the user in an appropriate manner. The Get-MyModule function in this blog is a useful approach for dealing with this situation.
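The basic pattern of such a dependency check is easy to sketch. The following is a simplified illustration only; the function name and module name are placeholders, and the Get-MyModule function in the blog is more complete:

```powershell
# Import a module only if it is available; otherwise warn the user.
Function Test-MyModule
{
    Param([string]$name)
    if (Get-Module -ListAvailable | Where-Object { $_.Name -eq $name })
    {
        Import-Module -Name $name -PassThru
    }
    else
    {
        Write-Warning "The $name module is not available on this system."
    }
}

Test-MyModule -name "BitsTransfer"
```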

Learn How to Load and Use PowerShell Snap-ins   Windows PowerShell MVP, Tome Tanasovski, discusses the differences between Import-Module and Add-PSSnapin. He also covers the importance of a module manifest.

Specific modules

Use a Free PowerShell Module to Ease DNS Administration   Chris Dent talks about his Windows PowerShell DNS module called DNS Shell. This powerful module can perform a number of tasks that include getting DNS zone information and creating DNS zones and records, in addition to a number of advanced tasks.

Clean Up Your PowerShell ISE Profile by Using a Module   This module includes a number of useful functions including Backup-Profile and Add-Help. It also discusses how to use a module to clean up a complex Windows PowerShell profile.

Use a Module to Simplify Your PowerShell Profile   This blog talks about storing your Windows PowerShell profile in a module, or even in a series of modules. It offers specific recommendations, including grouping similar functionality into a module.

The WMI permanent event consumer module is discussed by Trevor Sullivan in two blog posts. He illustrates the use of a WMI module to ease the task of creating permanent event consumers.

Use the PowerShell DHCP Module to Simplify DHCP Management   Jeremy Engel discusses creating the DHCP module, in addition to providing tips and tricks for its use.

Use PowerShell to Create VHDs for Hyper-V   Windows PowerShell MVP, Marco Shaw, discusses using the Hyper-V module.

The BSONPosh module was written by Windows PowerShell MVP Brandon Shell, and it consists of a number of extremely useful functions. I talked about this module in two Hey, Scripting Guy! Blog posts:

Use a PowerShell Module to Easily Export Excel Data to CSV   Jeremy Engel returns to the Hey, Scripting Guy! Blog to discuss his Microsoft Excel module. In this blog, he talks about two functions: Import-Excel and Export-Excel. This is a useful module that can simplify working with Windows PowerShell data and Microsoft Excel data.

Using Windows PowerShell to manage WSUS   Boe Prox wrote an entire series of blogs about this topic. In the following blogs, he discusses using the PoshWSUS module:

As you can see, there is a lot of help on the Hey, Scripting Guy! Blog for creating and using Windows PowerShell modules. To be honest, I had forgotten about some of these posts. I hope you save this page and come back to it again and again. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

An Insider’s Guide to Using WMI Events and PowerShell


Summary: Microsoft Scripting Guy, Ed Wilson, reviews and discusses Hey, Scripting Guy! Blog posts about WMI events and Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. Tickets for the Jacksonville IT Pro Camp are rapidly disappearing. If you are anywhere near Jacksonville, Florida on June 16, 2012, you should definitely check it out. There are sessions about Windows PowerShell Best Practices, Windows PowerShell Remoting, and general Windows PowerShell administration. Not to mention sessions about Hyper-V, SharePoint, SQL Server, Team Foundation Server, and more. It will be an awesome opportunity to learn from some of the best people in the field. The presenters are Microsoft MVPs, Microsoft PFEs, community leaders, and of course, the Microsoft Scripting Guy.

Today, I want to look at the Hey, Scripting Guy! Blog posts that discuss WMI eventing.

All about WMI eventing

There are two types of WMI event consumers: temporary and permanent. A temporary event consumer lasts only until you exit the script that set it up. A good way to use a temporary event consumer is to start a script that watches something, make a change to whatever is being watched, and then, when the event fires, capture it and take whatever action you intended. Permanent event consumers are written to the WMI repository, survive reboots, and run inside the WMI processes on your computer. There is no script to close, so they run as if they were services. They are used by SCOM and other applications. They are fairly complex, but they are not beyond the reach of people who are experienced in scripting and the use of WMI.
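To make the temporary case concrete, here is a minimal sketch in Windows PowerShell 2.0 syntax. The class and source identifier are illustrative, and Win32_ProcessStartTrace requires administrative rights:

```powershell
# Register a temporary consumer; the action runs each time the event
# fires, and the subscription disappears when the session ends.
Register-WmiEvent -Class Win32_ProcessStartTrace -SourceIdentifier "NewProcess" -Action {
    Write-Host "Process started:" $event.SourceEventArgs.NewEvent.ProcessName
}

# ...make the change you are watching for, then clean up explicitly.
Unregister-Event -SourceIdentifier "NewProcess"
```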

I have been writing about WMI eventing since back in the VBScript days. (In fact, I have an entire chapter about it in my WMI book). I have posts on the Hey, Scripting Guy! Blog from Windows PowerShell 1.0 days. These blogs are not necessarily obsolete because they talk about the underlying eventing .NET Framework classes. In addition, they illustrate the use of generic WMI event classes (the query is the same in Windows PowerShell 2.0 and Windows PowerShell 3.0).

There are two types of WMI event classes: intrinsic and generic. An intrinsic event class is really easy to use in Windows PowerShell 2.0 because the class already knows how to do events. A generic WMI event class is more difficult to use because the query is more complex. The nice thing is that the query is basically the same in Windows PowerShell and in VBScript, so you have tons of resources for these types of queries.
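For example, a registration that uses a generic event class looks like this sketch; the WQL query itself is the same one you would use in VBScript (the polling interval and source identifier are arbitrary):

```powershell
# Fire an event whenever a new Win32_Process instance appears,
# polling every two seconds.
$query = "SELECT * FROM __InstanceCreationEvent WITHIN 2 " +
         "WHERE TargetInstance ISA 'Win32_Process'"
Register-WmiEvent -Query $query -SourceIdentifier "ProcessWatch"

# Block until the event fires, then read the new process name.
$e = Wait-Event -SourceIdentifier "ProcessWatch"
$e.SourceEventArgs.NewEvent.TargetInstance.Name
Unregister-Event -SourceIdentifier "ProcessWatch"
```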

Temporary WMI event consumers

All of the following blogs talk about working with temporary WMI event consumers. These are typically set up to monitor a specific item, or items, for only a short period of time.

Using generic WMI classes

How Can I Be Notified When a Process Begins?   In this blog, I talk about using a generic WMI class, __InstanceCreationEvent, to monitor for a new process to begin. The blog was written in Windows PowerShell 1.0. It is valuable because I discuss the objects that are involved and show how to create a query by using a generic WMI event class. Please note that there is a Win32_ProcessStartTrace WMI class that makes it easier to monitor for a process to begin. The real value of the blog is in showing how to query a generic class.

How Do I Display a Message and the Time a Process Was Terminated?   This blog also uses a generic WMI class, __InstanceDeletionEvent, to monitor when a process goes away. Again, this blog was written for Windows PowerShell 1.0, so it shows how to manually create the event watcher (in Windows PowerShell 2.0, the Register-WmiEvent cmdlet handles this for you). In addition, there is a Win32_ProcessStopTrace WMI class that is an intrinsic event class. The value is in working with the generic WMI class.

How Can I Back Up a Database’s Data Folder While the Database Is Running?   This is an interesting post. It also uses a generic WMI event class, __InstanceModificationEvent. The query is very useful, as is the discussion of the technique involved. The technique illustrates doing something that is not easily accomplished via other methods. The script could be simplified by updating it to Windows PowerShell 2.0, but this blog is still good.

Overview of writing an event driven script

How Can I Write an Event-Driven Script?   This post uses the Register-WmiEvent cmdlet that was introduced in Windows PowerShell 2.0. This blog illustrates the steps involved in writing an event-driven script, and it is a great introduction to the topic.

How Can I Be Notified When a USB Drive Is Plugged into My Computer?   This short overview talks about the difference between intrinsic and generic WMI event classes. It illustrates using the Register-WmiEvent and the Get-Event Windows PowerShell cmdlets. 

How Can I Retrieve Information About Laptops Changing from Full Power to Minimal Power Usage?   This blog follows the prior blog in a series, and it goes into more information about WMI events. The background information presented in the blog is useful for understanding the application of the techniques illustrated in the blog. The use of WMI eventing to monitor changes in power states is also a very good application of the technology.

Specific application useful examples

There are several Hey, Scripting Guy! Blog posts that illustrate using temporary event consumers in a variety of ways. These are intended as “food for thought” types of blogs, and not as specific monitoring solutions.

Can I Be Informed When a Portable Drive Is Added by My Computer?   This blog illustrates using the Register-WmiEvent and the Get-Event cmdlets. A temporary event consumer is created that monitors for plugging a USB drive into a computer. The script could be the basis of a more involved script, and the action you decide to take when the drive is inserted or removed is up to you. There are a number of great tips and tricks in this blog.

Can I Format a Portable Drive When It Is Inserted Into a Computer?   This blog builds on the previous blog, and it contains a number of extremely useful functions, such as the Test-IsAdministrator function that I wrote for the Windows 7 Resource Kit (admin rights are required to format a drive). The script associated with this blog is quite extensive, and this blog illustrates a number of tips and tricks for working with temporary WMI event consumers.

Can I Start an Event Based on When a Registry Value Is Changed?   This blog uses the RegistryValueChangeEvent WMI class to monitor a specific registry key and generate an event when a change takes place. The script also uses the Register-WmiEvent and the Wait-Event cmdlets that were introduced in Windows PowerShell 2.0. This is a great technique when you want to take an action when something in the registry changes. There is also a discussion about the other RegistryEvent WMI classes, and it includes a helpful table to let you know which class you might need.
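The shape of such a registry watch can be sketched as follows. The hive, key path, and value name here are placeholders; note that RegistryValueChangeEvent lives in the root\default namespace and that backslashes in the key path must be doubled in WQL:

```powershell
# Fire an event when a specific registry value changes.
$query = "SELECT * FROM RegistryValueChangeEvent WHERE " +
         "Hive='HKEY_LOCAL_MACHINE' AND " +
         "KeyPath='SOFTWARE\\Contoso' AND ValueName='Setting'"
Register-WmiEvent -Namespace root\default -Query $query -SourceIdentifier "RegWatch"
Wait-Event -SourceIdentifier "RegWatch"
Unregister-Event -SourceIdentifier "RegWatch"
```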

Can I Use WMI to Determine When Someone Logs Off a User or Shuts Down a Server?   This blog uses the Register-WmiEvent cmdlet and the Get-Event cmdlet to monitor the Win32_ShutdownEvent intrinsic WMI class to provide notification. This is an interesting idea that can provide food for thought for some very useful applications.

WMI permanent event consumers

I have five blogs that discuss working with WMI permanent event consumers. The first two are foundational in that I discuss the technology and the parts that are involved in working with event consumers. The last three blogs were written by Trevor Sullivan, and he discusses a module he wrote to make working with permanent event consumers a bit easier.

Learn How to Use VBScript to Create Permanent WMI Events   This blog began the Permanent Event Consumer Week. It is foundational, and should be read because it explains the different pieces: the consumer, the event filter, and the filter to consumer binding. You should review this blog carefully if you want to work with WMI permanent event consumers.

Use PowerShell to Monitor and Respond to Events on Your Server   The series continues by discussing the different consumer classes. The blog has two extremely powerful scripts: the first creates a permanent event consumer; the second reports on permanent event consumers. The blog also illustrates how to remove the event consumers and event filters. The cmdlets used in this blog are Remove-WmiObject and Set-WmiInstance.
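The three pieces fit together as in this sketch, which uses Set-WmiInstance to create each one in the root\subscription namespace. The names, the query, and the log file path are illustrative only; the scripts in the blog are far more complete:

```powershell
# 1. The event filter: which events to watch.
$filter = Set-WmiInstance -Namespace root\subscription -Class __EventFilter -Arguments @{
    Name           = "DemoFilter"
    Query          = "SELECT * FROM __InstanceCreationEvent WITHIN 10 WHERE TargetInstance ISA 'Win32_Process'"
    QueryLanguage  = "WQL"
    EventNamespace = "root\cimv2" }

# 2. The consumer: what to do when an event arrives.
$consumer = Set-WmiInstance -Namespace root\subscription -Class LogFileEventConsumer -Arguments @{
    Name     = "DemoConsumer"
    FileName = "C:\fso\events.log"
    Text     = "A new process was created." }

# 3. The binding that ties the filter to the consumer.
Set-WmiInstance -Namespace root\subscription -Class __FilterToConsumerBinding -Arguments @{
    Filter   = $filter
    Consumer = $consumer }
```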

Use a PowerShell Module to Work with WMI Permanent Events   This blog is written by Trevor Sullivan, and it discusses using his PowerEvents module.

Use the PowerShell WMI Module to Quickly Monitor Events   This blog is the second part of using the PowerEvents module.

Monitor and Respond to Windows Power Events with PowerShell   In this blog, Trevor talks about using the Win32_PowerManagementEvent intrinsic eventing class and creating a permanent event consumer by using the PowerEvents module. If you do not have the module, you could modify the script in Use PowerShell to Monitor and Respond to Events on Your Server, but it will be more work, and the PowerEvents module works great.

These are the main Hey, Scripting Guy! Blog posts that discuss working with WMI events. I did not review a few others (for example, entries from the Scripting Games) here. To see all the blogs that come up about WMI events, simply click this tag cloud. Join me tomorrow when I will talk about more Windows PowerShell coolness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Use PowerShell to Change Computer Icon Caption to Computer Name


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to change the caption of the computer icon to local computer name. 

Changing desktop icon captions

Microsoft Scripting Guy, Ed Wilson, is here. The Charlotte Windows PowerShell User Group meeting this past Thursday was a lot of fun. We decided to hold a mini Scripting Games. Brian thought up the event, and he also served as one of the judges (I was the other judge). The attendees had 45 minutes to develop their answers, and then they stood in front of the group and explained their script. Brian and I then provided our grades and our critiques. The critiques included what we liked and suggestions for further improvement. The event seemed more like “Scripting with the Stars” than the actual Scripting Games; but it was so much fun that we decided to do it again for our meeting in July.

The Scripting Wife and I left directly from the Windows PowerShell User Group meeting and headed to Pensacola, Florida for SQL Saturday. I decided to let the Scripting Wife drive so that I could spend time playing around with Windows PowerShell on my laptop. Sometimes when I am bored, I like to explore the Shell.Application object. This COM object is really cool, and it contains a ton of functionality. I decided it would be fun to change the caption of the Computer icon on my desktop to display the name of my laptop instead of Computer (or My Computer).

Use the Shell.Application object to access computer icon

Almost exactly two years ago, on June 8, 2010, I wrote a Hey, Scripting Guy! Blog in which I obtained a list of all the shell namespaces, the numeric value associated with that namespace, and the path to that namespace. The script takes a while to run; and therefore, it writes to a text file. The file contains over a thousand lines of data, and it is shown here.

Image of command output

This file tells me that the computer namespace is 17. I use a variable named $My_Computer to hold the number 17. This is not a requirement, but I like to do this because it avoids supplying a completely meaningless number to the Namespace method from the Shell.Application object. This value assignment is shown here.

$My_Computer = 17

I then create an instance of the Shell.Application COM object. To do this, I use the New-Object cmdlet, and use the ComObject parameter to supply the name of the object to create. One thing I like is that I do not need to place the object name inside a pair of quotation marks. I store the returning Shell.Application object in a variable that I call $shell. The following line of code accomplishes this task.

$Shell = new-object -comobject shell.application

Note   The other day, I had a discussion with a person who used to write a lot of VBScript code. This person is learning Windows PowerShell and was still using the Hungarian Notation for naming variables. In Hungarian Notation, you use a prefix like obj when the variable will contain an object. I pointed out (in a slightly facetious manner) that because everything is an object in Windows PowerShell, he would be using the obj prefix an awful lot. In general, most Windows PowerShell scripters tend to avoid Hungarian Notation for their variable naming convention.

I now use the Shell.Application object that I stored in the $shell variable, and I call the Namespace method to obtain the computer object. I store the computer namespace in a variable that I call $NSComputer. This task is shown in the following line of code.

$NSComputer = $Shell.Namespace($My_Computer)

The last task is to set the Name property on the computer icon. To do this, I use the Self property from the computer namespace object that is stored in the $NSComputer variable. When I have the object returned by the Self property, I assign the new value to the Name property of that object. To obtain the name of my computer, I use the ComputerName environmental variable. The line of code that does this is shown here.

$NSComputer.self.name = $env:COMPUTERNAME

I called the four-line script, ChangeComputerIconName.ps1. The complete script is shown here.

ChangeComputerIconName.ps1

$My_Computer = 17

$Shell = new-object -comobject shell.application

$NSComputer = $Shell.Namespace($My_Computer)

$NSComputer.self.name = $env:COMPUTERNAME

The newly renamed computer icon now bears the name of my laptop in the caption under the icon, as shown here.

Image of icon

Well, it looks like traffic is becoming a bit heavy, and I am going to shut my laptop down for a while and help the Scripting Wife be on the lookout for squirrels.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Use PowerShell to Manage Windows Network Locations


Summary: Guest blogger, Microsoft PFE, Chris Wu, shows how to use Windows PowerShell to manage Windows network locations.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have another guest blog by my good friend Microsoft premier field engineer (PFE), Chris Wu.

Chris started his career at Microsoft in 2002, first as a support engineer in the Microsoft Global Technical Support Center in China to support the various components of the base operating system. Now he works as a premier field engineer (PFE) in Canada, and he specializes in platform and development support. During the course of troubleshooting, performance tuning, and debugging, he has created many utilities to ease and facilitate the process by leveraging various programming languages, such as C, C++, and C#. Windows PowerShell has become his new favorite.

Photo of Chris Wu

OK Chris, the keyboard is all yours…

Understanding Windows network locations

One feature that Windows 7 inherits from Windows Vista is network location awareness, which enables applications to sense changes to the network that the computer is connected to and then behave accordingly. Two applications of this feature are the network location-aware host firewall and location-aware printing.

Two services play a key role in implementing the feature: the Network Location Awareness service and the Network List Service (which depends on the former). Collaboratively they keep a record of all the network locations that the computer has ever connected to, remember the associated location category (domain, or public vs. private per a user’s selection), apply Windows Firewall and default printer settings, and send network location awareness notifications.

In my opinion, network location awareness is a neat feature that is currently underutilized. Maybe that is partly because the management interface is a little hard to reach.

It’s not difficult to open the Network and Sharing Center console in Control Panel to figure out the current network location name and location category (highlighted in yellow in the following image). Users can also click the category text (Public network in this case) to switch between public and private. (The domain location category is automatically applied.)

Image of menu

However, some users may fail to realize that the network icon (highlighted in red in the previous image) is also clickable, which leads to a Set Network Properties console. This console is shown in the image that follows.

Image of menu

This console gives users an option to change the network location’s name. In addition, it allows users to manage location history through a console, which is accessible through the Merge or delete network locations link. The Manage Location History console is shown in the following image.

Image of console 

Unfortunately, the Set Network Properties and Manage Location History consoles are not visible in Control Panel, which may leave some users out in the cold.

Using Windows PowerShell to manage network locations

Now, with the background knowledge about Windows network locations covered, it’s time to explore how to manage network locations by using Windows PowerShell.

The Network List Manager (NLM) infrastructure is exposed through the Network List Manager API, which is not easy to use directly from within Windows PowerShell scripts. Fortunately, a managed module is available on MSDN as part of Microsoft Windows Network List Manager Samples. Simply download the zip file and extract the Interop.NETWORKLIST.dll file from the obj\Debug folder, and we will be ready to roll.

Now that the prep work is complete, we see how we can do some common tasks quite easily. A few of the more common tasks are illustrated here.

Use the following script to get a list of network locations that the computer has ever connected to:

  Add-Type -Path .\Interop.NETWORKLIST.dll

  $nlm = new-object NETWORKLIST.NetworkListManagerClass

  $nlm.GetNetworks("NLM_ENUM_NETWORK_ALL") | select @{n="Name";e={$_.GetName()}},@{n="Category";e={$_.GetCategory()}},IsConnected,IsConnectedToInternet

Image of command output

The category property indicates whether the network location is a Public, Private, or Domain network. Its value is one of the NLM_NETWORK_CATEGORY enumeration values (0 = Public, 1 = Private, 2 = Domain).

Use the following script to change the name of a disconnected network:

 $nlm.GetNetworks(2) | % { if ($_.GetName() -eq "EIA_FREE_WIFI") { $_.SetName("EIA") } }

Image of command output

In this example, we use an integer value instead of a string representation of the NLM_ENUM_NETWORK enumeration expected by the GetNetworks() method (here, 2 corresponds to NLM_ENUM_NETWORK_DISCONNECTED).

Use the following script to change a currently connected location’s category from public to private:

  $net = $nlm.GetNetworks("NLM_ENUM_NETWORK_CONNECTED") | select -first 1

  $net.SetCategory(1)

Image of command output

A computer may connect to multiple networks, so in the previous example, we use the Select-Object cmdlet to choose the first connected network location, and then change its category to 1 = Private.

Use the following script to list active connections:

  $nlm.GetNetworkConnections() | select @{n="Connectivity";e={$_.GetConnectivity()}}, @{n="DomainType";e={$_.GetDomainType()}}, @{n="Network";e={$_.GetNetwork().GetName()}}, IsConnectedToInternet,IsConnected

Image of command output

The previous command can list all active connections, which may be useful for applications to find out through which network the computer gets Internet connectivity.

Use the following script to register a ConnectivityChanged event:

  $nlm | gm -MemberType Event

  Register-ObjectEvent -InputObject $nlm -EventName ConnectivityChanged -Action { write-host "Connectivity Changed: $args" }

Image of command output

NLM allows applications to register for events. One example is the ConnectivityChanged event. Notice that the event passes NLM_CONNECTIVITY enumerations as the argument. In the previous image, 576 is a bitwise of two flags: NLM_CONNECTIVITY_IPV6_LOCALNETWORK and NLM_CONNECTIVITY_IPV4_INTERNET.
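Decoding such a value is just bitwise arithmetic, as this short sketch shows (flag values from the NLM_CONNECTIVITY enumeration):

```powershell
$NLM_CONNECTIVITY_IPV4_INTERNET     = 0x40   # 64
$NLM_CONNECTIVITY_IPV6_LOCALNETWORK = 0x200  # 512

$connectivity = 576  # 64 + 512
($connectivity -band $NLM_CONNECTIVITY_IPV4_INTERNET) -ne 0      # True
($connectivity -band $NLM_CONNECTIVITY_IPV6_LOCALNETWORK) -ne 0  # True
```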

Use the following script to monitor a connectivity change per network:

  Register-ObjectEvent -InputObject $nlm -EventName NetworkConnectivityChanged -Action { try { if($nlm.GetNetwork($args[0]).GetName() -match "YOW") { write-host ("YOW is now:" + $args[1]) } } catch {} }

Image of command output

The NetworkConnectivityChanged event might be more useful than the ConnectivityChanged event, because it takes two arguments—one of which can tell us the network location of interest. Through this event, a Windows PowerShell script (preferably, it runs in the background) can possibly achieve real network location awareness. Imagine how exciting (or scary) it is to have a script that closes all productivity applications and launches Diablo 3 whenever your laptop is taken home.

Before concluding this blog, I would like to point out that the Interop.NETWORKLIST.dll library used in this blog serves as a .NET wrapper of the NLM API, which is exposed as COM objects. These objects are accessible directly from Windows PowerShell, so it is not always necessary to use the DLL (although it does provide easier access to such features as events). For instance, this code snippet can also change the category of the currently connected network location:

  $nlm2 = [Activator]::CreateInstance([Type]::GetTypeFromCLSID('DCB00C01-570F-4A9B-8D69-199FDBA5723B'))

  $net = $nlm2.GetNetworks(1) | select -first 1

  $net.SetCategory(1)

Image of command output

As useful as it is, the NLM API seems to have limitations too. After reading through its API reference, I still fail to find a way to delete a cached network location like we can do in the Merge or Delete Network Locations console. If anyone can point out a way, I would really appreciate it.

~Chris

Thank you, Chris!  This was a really interesting and instructive guest blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Add a Symbol to a Word Document by Using PowerShell


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to add a symbol to a Microsoft Word document.

Microsoft Scripting Guy, Ed Wilson, is here. The big day is finally here. Today we kick off Microsoft TechEd 2012 in Orlando, Florida. We have been planning our participation in this event for several months. The Scripting Wife, Daniel Cruz, and I are hosting the Scripting Guys booth this year. I also am doing two Birds-of-a-Feather sessions (one with Don Jones and the other with Jeffery Hicks). I also have an autograph session lined up. But, hey, you know all of that because I posted my schedule a couple of weeks ago. The Scripting Wife has been extremely busy these past several weeks managing our schedule and fielding invitations to various events that have been coming in via email, Twitter, and Facebook. If not for her, I do not think I could manage a busy week like TechEd and still get anything else done. (But don’t tell her I said so or she will get the “big head.”)

Anyway, our road to TechEd actually began with the Windows PowerShell User Group meeting in Charlotte, North Carolina. We left there and went to Auburn, Alabama to spend the night. While we were driving to the hotel, I am positive I saw a panther in the weeds. Interestingly enough, when we were at breakfast, the server mentioned that someone had written a comment about seeing a panther—this was before I even told her I had seen one. A little search with Bing and I found that there are several unconfirmed reports of Florida panthers roaming around in Alabama. Cool! All we have in Charlotte are Carolina Panthers, and they are not nearly as fun to watch as Florida panthers might be.

As Teresa was driving on our way to Pensacola for SQL Saturday, I remembered an old Hey, Scripting Guy! Blog that I read about inserting “special” characters into a Word document. This is something I have needed to do in the past due to the need for diacritical characters in certain non-English words. But until now, I had never had the chance to play around with it.

The nice thing about using Windows PowerShell and Word automation is that almost anything you can do from inside the application can also be accomplished from outside the application via the automation model. It is not always easy, nor is it always consistent, but it is possible. I actually enjoy Word automation, and I have been using it for years. I have also written many Hey, Scripting Guy! Blogs about using Windows PowerShell to automate Microsoft Word.

To manually insert a symbol in Microsoft Word 2010, select the symbol from the Insert menu. This brings up the Symbol dialog box (incidentally, this dialog box has not changed significantly through the years). You choose the font from the Font drop-down list, and you select the symbol from the panel in the middle of the dialog box. The important thing for scripters is the character code that is shown in the image that follows.

Image of menu

The first character code utilized is 32. In the previous image, the first symbol selected is 32, and it is basically blank. The pencil (the next symbol beside the dark blue box) is character code 33. The scissors (next to the pencil) is character code 34. This pattern continues until you reach character code 255.  
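You can verify this pattern for yourself with a short loop that appends every symbol in the range to a new document. This is a quick sketch only; it re-acquires a collapsed range at the end of the document on each pass so that each symbol is appended rather than overwritten:

```powershell
$word = New-Object -ComObject word.application
$word.visible = $true
$document = $word.documents.add()

foreach ($code in 32..255)
{
    # Collapse a range at the very end of the document and insert there.
    $end   = $document.Content.End - 1
    $range = $document.Range($end, $end)
    $range.InsertSymbol($code, "wingdings")
}
```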

The process to insert a symbol in a Microsoft Word document follows the same pattern as any other Windows PowerShell script that works with Microsoft Word. The first thing to do is create an instance of the Word.Application object. This object is a COM object, and therefore, I use the ComObject parameter of the New-Object cmdlet when creating the object. I store the returned Word.Application object in a variable that I call $word. This line of code is shown here.

$word = New-Object -ComObject word.application

Because I am basically experimenting, I make the Word application visible. If I was actually doing something where I knew what I was doing, and I was attempting to automate a common task, there would be no need to make the application visible. To make the Word application visible, I set the Visible property to $true. This line of code is shown here.

$word.visible = $true

Now that I have the Word application running and visible, it is time to add a document. Not very much is practical in Microsoft Word without a document with which to work. To add a document, I use the Add method from the Documents collection. This line of code is shown here.

$document = $word.documents.add()

There are two objects that have the InsertSymbol method: the Range object and the Selection object. I plan to use the Range object, so I call the Range method on the Document object to retrieve an instance of the Range object. This is shown here.

$range = $document.Range()

Lastly, I need to call the InsertSymbol method. This overloaded method accepts up to four parameters. Here, I only need to use two parameters: the character code (retrieved earlier) and the font name that hosts the character I want to insert. This line of code is shown here.

$range.insertsymbol(34,"wingdings")
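The InsertSymbol method also accepts two optional parameters beyond the character code and the font name: Unicode (treat the character code as a Unicode value) and Bias. As a hedged sketch, this inserts the Unicode scissors character; the result depends on the fonts installed on your system, and it assumes the $range variable created earlier in the script.

```powershell
# Sketch: pass $true for the optional Unicode parameter so the character
# code is interpreted as a Unicode value (0x2702 is the scissors glyph).
# Requires Microsoft Word; "Segoe UI Symbol" availability may vary.
$range.InsertSymbol(0x2702, "Segoe UI Symbol", $true)
```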

The complete AddSymbolToWord.ps1 script is presented here.

AddSymbolToWord.ps1

$word = New-Object -ComObject word.application

$word.visible = $true

$document = $word.documents.add()

$range = $document.Range()

$range.insertsymbol(34,"wingdings")

When I run the script, it opens Microsoft Word, creates a new document, and adds the scissors symbol to the document. I added the red arrow and the blue box manually when I edited the picture (which frankly would be REALLY boring without my additions). The image is shown here.

Image of document
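In this script, I intentionally leave Word open so I can inspect the result. If you fold this technique into a larger automation task, you would typically close the document and quit Word when you are finished. Here is a minimal cleanup sketch; it assumes the $document and $word variables from the script, and depending on your version of Windows PowerShell, you may need to pass the Close argument by reference.

```powershell
# Sketch: close the document without saving, quit Word, and release the COM object.
$document.Close(0)   # 0 = wdDoNotSaveChanges
$word.Quit()
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($word) | Out-Null
```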

Well, that is all there is to using Windows PowerShell to add a symbol to a newly created Microsoft Word document. Microsoft Word Automation Week will continue tomorrow when I will talk about creating a Word document that contains all of the characters from the Wingdings font.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

2012 Scripting Games Winners at TechEd 2012


Summary: The 2012 Scripting Games winners are at TechEd 2012.

Microsoft Scripting Guy Ed Wilson is here. Today is the official kick-off of Microsoft TechEd 2012. This morning, the Scripting Wife and I headed into the conference hall and met up with Lido and Rohn for breakfast. Here is a photo of Lido Paglia and Rohn Edwards, the winners of the 2012 Scripting Games.

Photo of Scripting Games winners

We were joined by Daniel Cruz, who is helping us out at the Scripting Guys booth. Last night we had an impromptu Windows PowerShell party with Microsoft MVPs, Don Jones, Sean Kearney, and Aleksandar Nikolic. Lido was also there, as was Jason Hofferle. (Jason will also be at the IT Pro Camp with us in Jacksonville on Saturday.) Of course, the party itself was sponsored by The Krewe, but whenever there are a few Windows PowerShell people in the same room, things always turn to a discussion of our favorite technology.

The Scripting Guys booth this year is awesome. We are sharing the space with my teammates from the Server & Cloud Division Information Experience. Our space is really cool with a couple of sofas, a table and stools, and of course the Script Center demo station. We worked really hard on the design for the booth. Here is the final product.

Photo of booth

The Scripting Guys booth itself is in the back of the Expo center in an area called the Connect Zone. We are just behind the Windows IT Pro exhibit, and beside the Microsoft Services team near Aisle 2300. Here is a picture of the Connect Zone area.

Photo of location flag

That is about it for now. I invite you to follow me on Twitter or Facebook. In fact, if you tweet or post on Facebook, there is a good chance I will respond from my Windows 7 phone. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Scripting Guys Booth Visitors: Round One


Summary: Following the Microsoft TechEd 2012 keynote, the exhibit hall opened, and lots of people flooded in to talk to the Scripting Guys and friends.

Microsoft Scripting Guy, Ed Wilson, is here. We finally got a quick break. The first morning of Microsoft TechEd 2012 was a lot of fun. Our booth helper Daniel Cruz probably described it best when he said the following:

Wow, it was fun. It was a great way to spend the morning. I got to talk to people about Windows PowerShell who are actually interested in Windows PowerShell, instead of having to talk to people who just don’t get it.

One of the great things about Microsoft TechEd is getting to see people with whom you correspond, but whom you never get to see, except possibly at TechEd. Boe Prox and Microsoft MVP Sean Kearney stopped by the Scripting Guys booth, and they got to know Daniel a little better. Here are the three of them as they discuss—well, you know what they talked about; Windows PowerShell, of course!

Photo at TechEd 2012

I had the chance to talk to lots of people as they came by. Several people just wanted to shake hands and express their appreciation, but some came with specific problems. It is always fun troubleshooting a Windows PowerShell script sight unseen without any specific error messages; but by pointing them to the troubleshooting section of the Hey, Scripting Guy! Blog, I hope that I got them thinking in a different vein.

I was talking to one person about writing a specific script to help him use Windows PowerShell to find items in Microsoft Outlook. I ended up thinking it was a great idea, took a bunch of notes, and I will probably write a Hey, Scripting Guy! Blog post about the particular scenario.

Well, I am going to ring off right now. Tonight is the Expo Hall meet-and-greet, and if you are in Orlando at TechEd, come by the Scripting Guys booth to say, “Hi,” or bring your Windows PowerShell questions. We will have the winners of the 2012 Scripting Games there, in addition to a number of MVPs and others. So if you bring a question, you will definitely get answers.

The Scripting Wife will also be there, and she will be glad to talk to you about learning Windows PowerShell and resources for beginners. You can also catch her on Twitter @ScriptingWife. Here she is catching up on a few tweets between people at the booth.

Photo of Scripting Wife

Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter or Facebook. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 


Use PowerShell to Create a Character’s Word Document


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to create a Word document that displays all characters in a font.

Microsoft Scripting Guy, Ed Wilson, is here. If it is Tuesday, I must still be at TechEd 2012 in Orlando, Florida. In fact, this morning at 10:30 I have an autograph session at the O’Reilly booth. They will be handing out a few free copies of my Windows PowerShell 2.0 Best Practices book, so get in line early if you want to snag one. If not…well, you can always buy one.

Note   This week I am talking about using Windows PowerShell to automate Microsoft Word. I have written extensively about this subject in the past. Today’s blog continues working with inserting special characters in a document. Yesterday, I talked about the basic technique to Add a Symbol to a Word Document by Using PowerShell.

Printing out all characters and codes from a font

When you work to automate Microsoft Word (whether you are using VBScript, Windows PowerShell, or some other language), many of the steps end up being exactly the same, no matter what you are doing. In yesterday’s blog, I discussed each of the steps. Today I am going to skip past the steps after I list the three basic steps.

Just the steps

  1. Create an instance of the Word.Application object. Store the returned object in a variable.
  2. Make the Word application visible.
  3. Add a document object to the Word application.

You will nearly always perform these same three steps, no matter what you are doing with Word automation. The code to perform these three steps is shown here.

$word = New-Object -ComObject word.application

$word.visible = $true

$document = $word.documents.add()

Creating and using a Selection object

In yesterday’s Hey, Scripting Guy! Blog, I mentioned that there are two objects that expose the InsertSymbol method. Those objects are the Range object and the Selection object. Yesterday, I used the Range object, so today I will use the Selection object. For this example, it really does not matter much which object exposes the method. I guess the point of the exercise is to show you that the method behaves the same, no matter which object exposes it.

To create a Selection object use the Selection property from the Document object, and store the returned Selection object in a variable. This technique is shown here.

$selection = $word.Selection

From yesterday’s blog, we also know that the character codes range from 33 through 255. This is a perfect time to use the for statement because we know exactly how many items we will work with. The following code uses the for statement to control a loop that goes from 33 through 255 and increments by 1.

For($i = 33 ; $i -le 255; $i++)

{

Inside the for loop, I use the TypeText method from the Selection object to write the number that is contained in the variable $i. The `t creates a tab character in Windows PowerShell. Therefore, the TypeText method displays a number and then tabs over one tab stop. The command is shown here.

$selection.TypeText("$i `t")

Now I need to use the InsertSymbol method to display the character that is associated with the specific character code represented by the number stored in the $i variable. In the code that follows, I use the font “Segoe” to illustrate that this technique works with any installed font.

$selection.InsertSymbol($i,"Segoe")

The TypeParagraph method creates an empty paragraph between each line that displays a character code and its associated symbol. Without the TypeParagraph method, the text would wrap line after line, and it would be difficult to read. The code to create the space between the lines of output is shown here.

$selection.TypeParagraph()

The complete CreateWordSymbolFile.ps1 script appears here.

CreateWordSymbolFile.ps1

$word = New-Object -ComObject word.application

$word.visible = $true

$document = $word.documents.add()

$selection = $word.Selection

For($i = 33 ; $i -le 255; $i++)

{

 $selection.TypeText("$i `t")

 $selection.InsertSymbol($i,"Segoe")

 $selection.TypeParagraph()

}

When I run the script, the output in the image that follows appears.

Image of command output
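If you want to keep the generated character reference for later, you can save the document before quitting Word. A hedged sketch follows; the output path is hypothetical, and older versions of Windows PowerShell typically require passing the SaveAs argument with [ref], as shown.

```powershell
# Sketch: save the generated document to a hypothetical path, then quit Word.
$path = "C:\fso\FontSymbols.docx"   # hypothetical output path
$document.SaveAs([ref]$path)
$word.Quit()
```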

Microsoft Word Automation Week will continue tomorrow when I will talk about creating a table in a Microsoft Word document and inserting corresponding special characters in it. It is a really cool blog, and it illustrates several useful techniques.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Booth Night at TechEd 2012 Reviewed


Summary: Microsoft Scripting Guy, Ed Wilson, discusses TechEd 2012 Booth Night from Orlando, Florida.

Microsoft Scripting Guy, Ed Wilson, is here. Well, “Booth Night” is officially over. We had an awesome time at the Scripting Guys booth last night at Microsoft TechEd 2012 in Orlando, Florida. We began our evening with a guest. Microsoft Windows PowerShell MVP, Don Jones, stopped by after his awesome session on Windows PowerShell remoting, and he fielded questions from a variety of people who came by the booth. Here is Don talking about Windows PowerShell with a few scripters.

Photo from TechEd

Following Don Jones, the two 2012 Scripting Games winners showed up to receive their Dr. Scripto bobble head dolls and some IT Pro posters, and to be interviewed by Microsoft IT Pro evangelist, Blaine Barton, for TechNet Radio IT Time. This interview will be live later, but I can tell you, it was a lot of fun. Blaine is a great guy, and he is the brains behind the Jacksonville IT Pro Camp coming up this Saturday. Anyway, here are Lido and Rohn loaded down with goodies following their interview with Blaine.

Photo from TechEd

Following the interview, we had various people stopping by to see how they might draw Dr. Scripto. Jason decided he could not resist the attempt at fame, so he jumped in on the competition. I thought he did an excellent job.

Photo from TechEd

I shared space with my teammates from SCDiX. Here are Cordell, Tony, and Kim hanging out after we closed down the exhibit hall.

Photo from TechEd

All in all, it was a great time, and there were lots of questions about Windows PowerShell, which gave rise to many impromptu demos on our workstation. I got at least a couple of great questions I will use for upcoming Hey, Scripting Guy! Blogs.

Enjoy today’s “regular” Hey, Scripting Guy! Blog, in addition to more impromptu blog posts during the day.

I invite you to follow me on Twitter or Facebook. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

TechEd 2012 Tuesday Brings New Hope


Summary: Microsoft Scripting Guy, Ed Wilson, discusses Microsoft TechEd 2012 as of Tuesday.

Microsoft Scripting Guy, Ed Wilson, here. WOOHOO! It is TechEd 2012 Tuesday in Orlando, Florida! The Scripting Wife and I had breakfast with Jason, Will, Daniel, Lido, and Rohn. Beyond “How’s that coffee taste to you,” the conversation quickly turned to Windows PowerShell. We had a great discussion about the pros and cons (mostly cons) involved in making manual changes to critical systems. Here is Rohn elaborating on this point with Lido, the Scripting Wife, and Will hanging on his every word.

Photo at TechEd

Immediately after breakfast, I had my interview with Blaine Barton, the IT Pro technical evangelist, for the IT Time TechNet Radio pod cast. Following that, I had my book signing at the O’Reilly booth. Here I am signing a copy of my Windows PowerShell 2.0 Best Practices book for Blaine immediately after the interview.

Photo at TechEd

After that, Jeffrey Snover stopped by the Scripting Guys booth to chat with Lido Paglia and Rohn Edwards, the two winners of the 2012 Scripting Games. This particular event was not planned. But the Scripting Wife ran into Jeffrey last night at dinner, and he asked if he could come by and meet up with the two winners. This ended up being the highlight of the morning. A few quick tweets, and quickly a crowd gathered around. Here Jeffrey weaves his magic spell with a story about the early days of Windows PowerShell development.

Photo at TechEd

Following lunch, we had additional guests including Jeffery Hicks and Dr. Ferdinand Rios, who came by to talk about Windows PowerShell with people who were hanging out at the booth. Lido expressed the situation best when he said, “I am using the Scripting Guy booth as the hub for my TechEd experience. I go to sessions I am interested in, but I always cycle back by the booth because I know there will always be really cool people to talk to.” Yep, that’s us—the Scripting Guys booth where all the cool people hang out. If you are around, come hang out. Hope we see you tonight at the Scripting Guys table at Community Night from 6:15 PM to 9:00 PM in North Hall B. But dude…like…every day down here is community day, so Community Night at TechEd 2012 in Orlando, Florida is just icing on the cake. Come by, and say, “Hi.”

WOW! This just in! My O’Reilly editor just came by, and he said he felt really bad because my book signing ended up turning people away—a LOT of people away. So somehow he managed to score another 23 copies of my Windows PowerShell 2.0 Best Practices book. So come by the O’Reilly booth at noon Wednesday (that’s tomorrow). Hint: You should probably arrive a bit early to ensure you can score an autographed copy.

I invite you to follow me on Twitter or Facebook. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Create a Word Table that Shows Fonts


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to create a Word table that shows different fonts.

Microsoft Scripting Guy, Ed Wilson, is here. We are half finished with Microsoft TechEd 2012 in Orlando, Florida. Last night was Community Night, and it was great to host dozens of scripters at the Scripting Guys table with the Scripting Wife and Daniel Cruz. It was awesome to be able to talk with both beginning and advanced scripters. Actually, in some respects, beginners to Windows PowerShell have some advantage over people who have been doing it for a while. The reason is that they may see new techniques, and they are still asking, “What if…” as in “What if I do this?” I am still in “what if” mode. In fact, I have never left “what if” mode. I learn so much cool stuff by just experimenting, playing, and wondering what would happen if I did something as opposed to doing something else.

Today’s script is a case in point, because what should have been a relatively easy script ended up morphing into one that took me nearly two hours to complete. The reason will become obvious later. Because I was offline when I wrote this script, I did not have access to the Bing search engine, and therefore, I could not “cheat” to see if someone else had seen the problem and arrived at a solution. Luckily, I know how to use the Get-Member cmdlet, and I could examine the returned objects to find a solution. I also know how to read error messages (I actually get a lot of practice reading error messages). With the members of an object and an error message that tells me why something does not work, I do not need to “cheat” to get my script to work properly.

Creating a table in Word to display characters and codes 

The CreateWordSymbolTableFile.ps1 script is pretty long, but it has a lot of repetition. It could be made more compact, and it could probably be optimized, but I will leave that as an exercise for the reader (meaning that I am bored with this script now—it does what I need it to do, and I have other things to try).

The first thing I do is create two variables that I use later in the script when I create the table. The two variables hold the number of rows and the number of columns to use when creating the table. This is just like when you insert a table into a Word document—you specify the initial size of the table. You are not limited to that size because you can always add rows or columns to the table later. The initial settings only determine how the table will first appear. I usually try to ascertain how many columns I will need for my table, but I generally only add a couple of rows to the table because I do not always know how much data I will represent in the table. To save a bit of space in the script, I use a semicolon (;) to separate the two variable assignments and to permit me to use a single line. This line of code is shown here.

$rows = 2; $columns = 5

Now we come to the same three lines of code that generally inhabit most Microsoft Word automation scripts. For a discussion about these lines of code, see Monday’s blog Add a Symbol to a Word Document by Using PowerShell.

The three lines that I used to create the Word application object, make it visible, and add a document object to the application are shown here.

$word = New-Object -ComObject word.application

$word.visible = $true

$document = $word.documents.add()

Now I add a Range object to the script. Typically, you will work with a Range object or a Selection object. Yesterday, I used the Selection object, and on Monday, I used a Range object. Both of these objects expose the InsertSymbol method. The following line of code creates a new Range object and stores the returned object in a variable named $range.

$range = $document.Range()

Now I need to add a table to the document. To do this, I use the Tables property to return a tables collection. From the tables collection, I use the Add method to add a new table to the document object that is stored in the $document variable. When adding a table, I need to supply the range object to host the table, the number of rows, and the number of columns. I stored the number of rows in the $rows variable and the number of columns in the $columns variable. Adding a table to the document returns a Table object, which I will use to work with the table later in the script. Therefore, I store the returned Table object in a variable named $table. This line of code is shown here.

$table = $document.Tables.add($range,$rows,$columns)

The next thing I need to do is use the newly created table and add the first row of text to serve as column headings. To do this, I use the Text property from the Range object to represent each cell in the first row of the table. I use the Cell method from the Table object to retrieve a specific cell in the table. The first number represents the row, and the second number is the column number. When I have the cell, I use the Range property to obtain the Range object for the specific cell. Adding text to the cell is as easy as assigning a value to the Text property. The code to add the first row of text that will be the column headings is shown here.

$table.cell(1,1).range.text = "code"

$table.cell(1,2).range.text = "wingdings"

$table.cell(1,3).range.text = "wingdings2"

$table.cell(1,4).range.text = "wingdings3"

$table.cell(1,5).range.text = "courier new"
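The header row created above can also be formatted so that it stands out from the data rows. Here is a hedged sketch; it assumes the $table variable from the script and relies on Bold being a settable property on the cell’s Range object in the Word object model.

```powershell
# Sketch: bold each heading cell in row 1 of the table.
for ($c = 1; $c -le 5; $c++) {
    $table.cell(1,$c).range.bold = $true
}
```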

With the first row of the table complete, it is time to add the remaining information to the table. The first thing to do is to create a new variable and initialize it to 2. This is because the first row contains the column headings, and therefore, all the new data written to the table begins on the second row. I now use the for statement to create a loop that counts from 33 through 255 to supply the character codes that will be added to the table later in the script. I then use the Add method to add a new row to the table. I do not need to do anything with the row itself, so I pipe the returned Row object to the Out-Null cmdlet. This portion of the code is shown here.

$j=2

For($i = 33 ; $i -le 255; $i++)

{

 $table.rows.add() | out-null

The first column of the new row is easy. I use the Cell method to return the Cell object that represents the first column of the new row. I then use the Range property to return a Range object that represents that cell. Next I use the Text property of the Range object, and I assign the number stored in the $i variable to the cell. This code is shown here.

$Table.Cell($j,1).range.Text = $i

Now, things begin to get tricky. This is because I am unable to use the InsertSymbol method directly from the Range object that was returned from the Cell object. It seems that I should be able to use a line of code such as the one shown here.

$Table.Cell($j,1).range.InsertSymbol($i,"WingDings")

Unfortunately, an error arises when I attempt to use the InsertSymbol method. The error states that the InsertSymbol method is unavailable because the object does not refer to a simple range or selection. The complete error appears here.

Exception calling "InsertSymbol" with "2" argument(s): "This method or property

 is not available because the object does not refer to simple range or selection."

At C:\data\ScriptingGuys\2012\HSG_6_11_12\CreateWordSymbolTableFile.ps1:27 char:38

+  $Table.Cell($j,1).range.InsertSymbol <<<< ($i,"WingDings")

    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException

    + FullyQualifiedErrorId : ComMethodTargetInvocation

It seems like I tried everything to work around the error. Eventually, I decided that my best approach would be to change the type of range that is returned by the Range property. To do this, I collapse the range by calling the Collapse method, which yields a simple range and gets past the error. Unfortunately, it also caused me to use three lines of code to do what I had hoped would take only a single line. Here are the three lines of code that obtain a range that represents a specific cell in the table, collapse the range, and use the range to insert a symbol into the cell.

$cellRange = $Table.Cell($j,2).range

 $cellRange.collapse()

 $cellRange.InsertSymbol($i,"WingDings")

When I confirmed that the three lines of code worked as desired, the remainder of the script was really copy, paste, and edit. The only part that is not direct copy, paste, and edit is the $j++ statement, which increments the value of the $j variable that keeps track of the row number when assigning values to the cells in each new row. The remainder of the code is shown here.

$cellrange = $table.cell($j,3).range

 $cellrange.collapse()

 $cellrange.insertSymbol($i,"wingdings2")

 $cellrange = $table.cell($j,4).range

 $cellrange.collapse()

 $cellrange.insertSymbol($i,"wingdings3")

 $cellrange = $table.cell($j,5).range

 $cellrange.collapse()

 $cellrange.insertSymbol($i,"Courier New")

 $j++

}

The complete CreateWordSymbolTableFile.ps1 script is shown here.

CreateWordSymbolTableFile.ps1

$rows = 2; $columns = 5

$word = New-Object -ComObject word.application

$word.visible = $true

$document = $word.documents.add()

$range = $document.Range()

 

$table = $document.Tables.add($range,$rows,$columns)

$table.cell(1,1).range.text = "code"

$table.cell(1,2).range.text = "wingdings"

$table.cell(1,3).range.text = "wingdings2"

$table.cell(1,4).range.text = "wingdings3"

$table.cell(1,5).range.text = "courier new"

$j=2

For($i = 33 ; $i -le 255; $i++)

{

 $table.rows.add() | out-null

 $Table.Cell($j,1).range.Text = $i

 $cellRange = $Table.Cell($j,2).range

 $cellRange.collapse()

 $cellRange.InsertSymbol($i,"WingDings")

 $cellrange = $table.cell($j,3).range

 $cellrange.collapse()

 $cellrange.insertSymbol($i,"wingdings2")

 $cellrange = $table.cell($j,4).range

 $cellrange.collapse()

 $cellrange.insertSymbol($i,"wingdings3")

 $cellrange = $table.cell($j,5).range

 $cellrange.collapse()

 $cellrange.insertSymbol($i,"Courier New")

 $j++

}

When the previous script runs, the output in the image that follows appears. One thing I learned from this exercise is that the wingdings2, wingdings3, and “normal” fonts have a lot of overlap in the actual symbols that they contain. A document like this is a great way to compare multiple fonts side by side. With the large number of fonts that ship in Microsoft Office products, this script can become a very useful tool.

Image of command output
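As I mentioned earlier, the script could be made more compact; I will leave a full refactoring as an exercise for the reader, but one hedged approach is to wrap the repeated collapse-and-insert pattern in a small helper function. The function name here is my own, not part of the Word object model, and the sketch assumes the $table, $j, and $i variables from the script.

```powershell
# Sketch: a helper that collapses a cell's range and inserts a symbol into it.
function Add-SymbolToCell ($table, $row, $column, $code, $font)
{
    $cellRange = $table.Cell($row, $column).Range
    $cellRange.Collapse()
    $cellRange.InsertSymbol($code, $font)
}

# Usage inside the loop, replacing each repeated three-line block:
# Add-SymbolToCell $table $j 2 $i "WingDings"
# Add-SymbolToCell $table $j 3 $i "wingdings2"
```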

Microsoft Word Automation Week will continue tomorrow when I will talk about more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Scripting Guys Wednesday at TechEd 2012 in Orlando


Summary: The Scripting Guys talk about what is going on today at TechEd 2012 in Orlando.

Well, last night's Community Night at Microsoft TechEd 2012 in Orlando, Florida was awesome. Our table was swamped from the time the gate opened until they turned the lights out and chased us from the room. Even then the conversations continued on from the Convention Center to the lobby of our hotel where we sat around and talked scripting into the wee hours of the morning. Indeed Windows PowerShell rocks! And people who know Windows PowerShell KNOW that it rocks. I did not take any pictures last night. I was too busy talking to the Scripting Gang to get my camera out. In fact, I did not even consider interrupting the proceedings with my flash.

This morning the Scripting Wife and I had breakfast with our scripting breakfast club. What did we have? Why, Scripto’s of course, the cereal with mega byte! Today, I have my second autograph session for Windows PowerShell 2.0 Best Practices at the O’Reilly booth. Then at 3:15 this afternoon, I have the first of my two birds-of-a-feather sessions about Windows PowerShell Best Practices. Today my co-host will be Microsoft PowerShell MVP, Don Jones. Tomorrow, I do another birds-of-a-feather session about Windows PowerShell Best Practices, but this time it will be with Microsoft PowerShell MVP, Jeffrey Hicks.

Well, that is about all for now. I will write more later today.


