Channel: Hey, Scripting Guy! Blog

Using the WMI Admin Tools to Check on Permanent Events


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use the WMI Administrative tools to check on Permanent WMI events created by Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. We continue to get new sponsors for the second Windows PowerShell Saturday event that will be held in Charlotte, North Carolina in the United States. The event will occur on September 15, 2012, and registration will be opening soon. I am impressed with the lineup of sponsors. There should be some great giveaways, but most importantly, there are going to be some GREAT speakers—including the Microsoft Scripting Guy (me). You will want to bookmark the PowerShell Saturday website because not only does it contain information about the Windows PowerShell Saturday event in Charlotte, but it will also have information about the event for October in Atlanta.

Note   This is the fourth blog in a five-part series about monitoring a folder for the creation of files that have leading spaces in the file names. On Monday, I wrote Use PowerShell to Detect and Fix Files with Leading Spaces; the scripts from that blog will be used today and again on Friday. On Tuesday, I wrote Use PowerShell to Monitor for the Creation of New Files. That blog talks about creating a temporary WMI event to monitor for the creation of files in a particular folder (a query that is crucial to Friday’s blog). On Wednesday, I wrote about using a VBScript script to launch a Windows PowerShell script in How to Use VBScript to Run a PowerShell Script. The reason for that blog is that the WMI class that is used for the permanent event consumer uses a VBScript script and not a Windows PowerShell script. From a reference perspective, you should check out An Insider’s Guide to Using WMI Events and PowerShell. This guide is a great reference, and it provides great assistance for understanding this powerful technology.

The first thing you need to do is to download and install the WMI Administrative Tools. The tools are an old HTML application with a very small download size (4.7 MB). In fact, the package is so small that I do not even save it locally; rather, I run it from the download page. Installation is a simple click, click, and you are done. The only change I make to the defaults is to make the application available for everyone, not just for the user who installs it. The splash screen is shown here.

Image of menu

After you install the tools, you will find them in the WMI Tools folder on your Start menu. There are two tools for working with WMI events. The first is the WMI Event Registration tool, and the second is the WMI Event Viewer. Because these are old HTML applications, they use ActiveX controls that are blocked by default. Therefore, you need to unblock the control before the tool becomes useful. The Allow blocked content message is shown here.

Image of menu

After you open the WMI Event Registration tool and allow the blocked content, you need to select the WMI namespace with which to work. The tool defaults to root\cimv2, but permanent events reside in the root\subscription WMI namespace, and so it is necessary to change that location to see the ActiveScriptEventConsumer. I also create the EventFilter in the root\subscription namespace, so it will not be necessary to switch WMI namespaces to see the EventFilter registration.

Note   Keep in mind that this is the WMI Event Registration tool, not the WMI Event Viewer tool. This means that you can edit, delete, and create WMI Event Registrations by using this tool. Unfortunately, there is no Read-only mode for this tool.

The following three things must be present and associated correctly for a permanent WMI event registration to work:

  1. An Event Consumer must be registered.
  2. An Event Filter must be registered.
  3. The Event Consumer must be associated with the Event Filter.
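Because the WMI Event Registration tool has no read-only mode, you may prefer to inspect these three pieces from Windows PowerShell instead, where a simple query cannot accidentally edit anything. Here is a minimal, read-only sketch of my own; it assumes the registrations reside in the root\subscription namespace, as described earlier.

gwmi -Namespace root\subscription -Class __EventFilter |
  Select-Object Name, Query

gwmi -Namespace root\subscription -Class ActiveScriptEventConsumer |
  Select-Object Name, ScriptingEngine, ScriptFileName

gwmi -Namespace root\subscription -Class __FilterToConsumerBinding |
  Select-Object Filter, Consumer

If all three queries return instances, and the binding's Filter and Consumer properties point at the filter and consumer you expect, the registration is wired together correctly.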

In the image that follows, the ActiveScriptEventConsumer appears in the root\subscription WMI namespace. Notice in the right pane, a green check mark appears next to the __EventFilter class with the instance name of “NewFile”. The green check mark appears under the column that states that it is registered. This image illustrates the Event Consumer to Event Filter binding.

Image of menu

To dig into the details of the ActiveScriptEventConsumer, right-click it in the WMI Event Registration pane. Check out the following:

  • The Script File Name. It should point to a VBScript file that is accessible to the event consumer.
  • If you are not using a script file, you can instead type the text of the script command in the Script text box. This is a great way to make a permanent event consumer portable (so that it does not rely on an external file).
  • The name of the ActiveScriptEventConsumer, in addition to the Path and the RelPath properties.

These properties are shown in the image that follows.

Image of menu

To review the Event Filter, use the Select arrow to choose Filters. Expand the __EventFilter node and ensure that the EventConsumerClass associates with the __EventFilter. To do this, look for the green check mark under the Reg column. In addition, make sure that the Instance name matches the name of the ActiveScriptEventConsumer detailed earlier. This result is shown here.

Image of menu

To check the properties of the __EventFilter, right-click __EventFilter in the left column, and then click Edit Instance Properties from the Action menu. From here, you will want to check the following items:

  • The event namespace
  • The name of the Event Filter
  • The query being utilized
  • The namespace of the event filter, in addition to the Path and the RelPath properties

These properties are shown in the image that follows.

Image of menu

When all three items related to permanent WMI events are checked, it is time to proceed to testing. This will be the subject of tomorrow’s blog.

That is all there is to using the WMI Administrative Tools to monitor for new WMI events. I invite you to join me tomorrow when I wrap up this five part series and discuss creating a permanent WMI event via a Windows PowerShell script that will monitor for new files created in a folder. If the file name has spaces at the beginning, it will automatically rename the file. It will be an exciting conclusion to an exciting week. So stay tuned, same script time, same script station (yes, the Scripting Wife and I went to see Batman). Take care.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Use PowerShell to Create a Permanent WMI Event to Launch a VBScript


Summary: Microsoft Scripting Guy, Ed Wilson, discusses creating a permanent WMI event registration to monitor for new files and clean up the file names.

Microsoft Scripting Guy, Ed Wilson, is here. I just booked the room for the Atlanta (Alpharetta) PowerShell Saturday. This will be PowerShell Saturday event #003, and it will be held on Saturday (of course) on October 27 at the Microsoft Office in Alpharetta, Georgia in the United States. The event is not even up on the PowerShell Saturday page yet, but I thought you might like to get it on your calendars. Of course, the PowerShell Saturday event in Charlotte, North Carolina page is up, as are the abstracts for the sponsors, the speakers, and presentations. Keep your eyes and ears open because the registration site will go live soon, and there are only 200 tickets available. PowerShell Saturday in Columbus Ohio sold out in 13 days, so you will need to be quick if you want to attend this high-profile event.

Creating a permanent WMI event to launch a VBScript…

…that launches a Windows PowerShell script…
…that cleans up a folder of file names with leading spaces upon their arrival…

Note  This is the fifth blog in a five-part series about monitoring a folder for the creation of files that have leading spaces in the file names. On Monday, I wrote Use PowerShell to Detect and Fix Files with Leading Spaces; the scripts from that blog will be used today. On Tuesday, I wrote Use PowerShell to Monitor for the Creation of New Files. That blog talks about creating a temporary WMI event to monitor for the creation of files in a particular folder (a query that is crucial to today’s blog). On Wednesday, I wrote about using a VBScript script to launch a Windows PowerShell script in How to Use VBScript to Run a PowerShell Script. The reason for that blog is that the WMI class that is used for the permanent event consumer uses a VBScript script and not a Windows PowerShell script.

On Thursday, I took a step back and installed the WMI Administrative Tools, and I examined the parts of a permanent WMI event registration. The blog Using the WMI Admin Tools to Check on Permanent Events is a great tutorial. From a reference perspective, you should check out An Insider’s Guide to Using WMI Events and PowerShell. This guide is a great reference, and it provides great assistance for understanding this powerful technology.

One thing you should monitor, if you will pardon the pun, when designing and implementing permanent WMI event registrations is that they have a lot of moving parts, and they can be rather complicated. You must test your design and your implementation in a lab environment that closely emulates your actual production systems before implementing any of these techniques.

When I was creating the Windows PowerShell script for today’s blog, I actually ended up writing five separate scripts. The scripts are listed here. For ease of access, all five scripts are uploaded to the Script Center Script Repository.

  1. The first script is one that removes the permanent event registrations.
  2. The second script is a stripped down script to create my test files.
  3. The third script is the VBScript that is called by the permanent event registration.
  4. The fourth script is the Windows PowerShell script that is launched to clean up the files.
  5. The fifth script (the most complicated of all) is the one that does the actual WMI permanent event registration.

Avoid setting a short WITHIN value

When creating your WMI event query, make sure that you do not set a WITHIN value of less than 30 (seconds) when going into production. It is common in testing to set this value to 5 (seconds); but for production, never go lower than 30 (seconds). Here is the WMI query that is used in the Create Permanent Event Consumer script.

$query = @"
 Select * from __InstanceCreationEvent within 30
 where targetInstance isa 'Cim_DirectoryContainsFile'
 and targetInstance.GroupComponent = 'Win32_Directory.Name="c:\\\\test"'
"@

Note   I discussed this query and the use of the Here-String for formatting the query in Use PowerShell to Monitor for the Creation of New Files.

What happens if you use within 5 in your query? Well, for one thing, Windows PowerShell polls every five seconds to see if there is a change. To see this behavior, I enabled the WMI-Activity trace log in the Event Viewer. One of the events is shown here.

Image of menu

To see the impact of this, I used the following Windows PowerShell command to review these events.

Get-WinEvent -LogName *wmi-activity* -Force -Oldest | where { $_.id -eq 1 -AND $_.message -match 'select'} | select -Last 20 | ft timecreated, message -AutoSize

By using Windows PowerShell, I can easily see that the WMI query is executing every 5 seconds. (This is NOT the sort of thing you want to do on a heavily loaded production server.) The query and the results from the query are shown here.

Image of command output

Creating the three essential parts to the script

There are three essential parts to a permanent WMI event registration. These were discussed in yesterday’s Hey, Scripting Guy! Blog, Using the WMI Admin Tools to Check on Permanent Events. The first item required is the __EventFilter. The following code does this. (Keep in mind that the new instance of the __EventFilter is created in the root\subscription WMI namespace. But the arguments to this state that the EventNameSpace is in root\cimv2. The reason is that the class being used, Cim_DirectoryContainsFile, resides in root\cimv2.)

$filterPath = Set-WmiInstance -Class __EventFilter `
 -ComputerName $computer -Namespace $wmiNS -Arguments `
  @{name=$filterName; EventNameSpace=$filterNS; QueryLanguage="WQL";
    Query=$query}

The second part is the ActiveScriptEventConsumer. This portion of the script fills out the properties of the ActiveScriptEventConsumer. The three essential portions are the name of the consumer, the script file, and the script engine. Note that the only engine supported is the VBScript scripting engine.

$consumerPath = Set-WmiInstance -Class ActiveScriptEventConsumer `
 -ComputerName $computer -Namespace $wmiNS `
 -Arguments @{name="CleanupFileNames"; ScriptFileName=$scriptFileName;
  ScriptingEngine="VBScript"}

Finally, the last part is the __FilterToConsumerBinding. When this part is configured properly, the green check mark appears in the WMI Administrative Tools as shown yesterday. This portion of the script is really easy. All that is required is to bind the filter and the consumer together as shown here.

Set-WmiInstance -Class __FilterToConsumerBinding -ComputerName $computer `
  -Namespace $wmiNS -arguments @{Filter=$filterPath; Consumer=$consumerPath} |
  out-null

When the CreatePermenantEventToMonitorForNewFilesAndStartScript.ps1 script runs, no output appears. This is where using the WMI Administrative Tools comes in useful (see Using the WMI Admin Tools to Check on Permanent Events).

Now to test the script, I create some new files in my test folder by using the CreateTestFiles.ps1 script. The newly created files are shown here.

Image of menu

I have to move rather quickly, because I only have a maximum of 30 seconds before the event fires. Here is the cleaned up folder after the event fires.

Image of menu

Clean-up work

I have mentioned before that when creating a script that makes changes to system state, it is always a good idea to also write a script to do the clean-up work. This is especially true when you are doing demos, or as an aid while you are composing the script. Here is my very simple clean-up script. The thing to keep in mind is that you MUST use a good filter to find your __EventFilter, your ActiveScriptEventConsumer, and your __FilterToConsumerBinding, or you will remove things your computer may very well need.

gwmi __eventFilter -Namespace root\subscription -Filter "name='NewFile'" | Remove-WmiObject

gwmi activeScriptEventConsumer -Namespace root\subscription -Filter "name='CleanupFileNames'" | Remove-WmiObject

gwmi __filtertoconsumerbinding -Namespace root\subscription -Filter "Filter = ""__eventfilter.name='NewFile'""" | Remove-WmiObject

Logging

I was actually hoping that the WMI-Activity trace log would let me know each time the VBScript ran, but alas, that was not the case. So I added a log to my Windows PowerShell clean-up script that writes the date to a log file. This line is shown here.

"called cleanup script $((get-date).tostring())" >>c:\fso\mylogging.txt

By adding this line to the clean-up files in my Windows PowerShell script, an entry writes to the log file each time the Windows PowerShell script is called from the VBScript.
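A quick way to verify that the registration is actually firing is to count the entries that have accumulated in that log file. This one-liner is my own convenience addition, using the same log path as above:

(Get-Content C:\fso\mylogging.txt | Measure-Object -Line).Lines

If the count keeps growing while you drop test files into the monitored folder, the whole chain (filter, consumer, VBScript, and Windows PowerShell script) is working.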

This ends our WMI Events Week. Join me tomorrow when I will look at the differences in performance between using a literal WMI filter and a WMI wildcard filter. It should be pretty cool.  

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Measure the Performance of Using Wildcards in a WMI Query


Summary: Learn how to use the Windows PowerShell Measure-Command cmdlet to determine the performance of wildcard queries using WMI.

Microsoft Scripting Guy, Ed Wilson, is here. It is the weekend in Charlotte, North Carolina, and tomorrow I fly to Seattle, Washington where I speak at the Microsoft-only TechReady 15 conference. It is a great event, and I always have fun getting to see a lot of friends from all over the world. TechReady is a very international event, and it is the one time when Microsoftees have a chance to get together. Because the conference is held twice a year, it provides a chance for people to come to one session or the other, and for Microsoft to still be able to carry on with its normal activities.

This week, I had an idea based on a comment posted to one of my WMI articles: “What is the difference in performance between using the equality operator or using a wildcard character and the Like operator when making a WMI query?” So I thought I would test it out.

Methodology

To measure the performance of different queries, the basic tool is the Measure-Command cmdlet. Because of potential caching issues, I reboot after each query. But before I do all that, I want to ensure that my WMI queries are working properly. For that, I do not need to reboot between commands. In addition, I want to ensure that my Measure-Command commands are working properly. So for that, I also test the commands. I ended up configuring three different query patterns. The patterns are listed here:

  1. Use the equality operator.
  2. Use the like operator and a wildcard character.
  3. Use the like operator and multiple wildcard characters.

When I am certain that my queries and Measure-Command commands work properly, it is time to begin the multiple reboot process.

Using the equality operator to find a process

My laptop is back up from the first reboot. As a baseline, I query for the explorer.exe process by using the equality operator. My expectation is that this will be the fastest query because it compares the Name property directly to find the specific process. Note that this is a non-indexed operation because the key to the Win32_Process class is Handle, not the Name property. Here is the basic query.

Get-WmiObject -Class win32_process -Filter "name = 'explorer.exe'"

The command to measure the performance of the WMI query is shown here.

measure-command {Get-WmiObject -Class win32_process -Filter "name = 'explorer.exe'"}

So what are the results from the Measure-Command? They are shown here.

PS C:\> measure-command {Get-WmiObject -Class win32_process -Filter "name = 'explorer.exe'"}

 

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 0

Milliseconds      : 351

Ticks             : 3514032

TotalDays         : 4.06716666666667E-06

TotalHours        : 9.7612E-05

TotalMinutes      : 0.00585672

TotalSeconds      : 0.3514032

TotalMilliseconds : 351.4032

Using the Like operator and a wildcard to find a process

Now as a point of comparison, I use the percentage symbol ( % ), which is a wildcard character in WMI WQL that matches zero or more characters.

Get-WmiObject -Class win32_process -Filter "name LIKE 'explorer%'"

The command to measure the performance of the WMI wildcard query is shown here.

measure-command {Get-WmiObject -Class win32_process -Filter "name LIKE 'explorer%'"}

As I come out of my second reboot, I once again open the Windows PowerShell console and run the first of my wildcard comparison commands. The command and associated output are shown here.

PS C:\> measure-command {Get-WmiObject -Class win32_process -Filter "name LIKE 'explorer%'"}

 

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 0

Milliseconds      : 429

Ticks             : 4298977

TotalDays         : 4.97566782407407E-06

TotalHours        : 0.000119416027777778

TotalMinutes      : 0.00716496166666667

TotalSeconds      : 0.4298977

TotalMilliseconds : 429.8977

Use the Like operator and multiple wildcards to find a process

Does the number and type of wildcard characters make any difference? I suspect they would; but then, one never really knows. So here is a wildcard pattern that uses a range of letters, a single-character wildcard, and the zero-or-more-characters wildcard.

Get-WmiObject -Class win32_process -Filter "name LIKE '[A-F]xplo_er%'"

measure-command {Get-WmiObject -Class win32_process -Filter "name LIKE '[A-F]xplo_er%'"}

Now, I have completed my last reboot. It is time to see if there is a difference when using the “wilder” wildcard pattern. The command and associated output appear here.

PS C:\Users\administrator> measure-command {Get-WmiObject -Class win32_process -Filter "name LIKE '[A-F]xplo_er%'"}

 

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 0

Milliseconds      : 339

Ticks             : 3391294

TotalDays         : 3.9251087962963E-06

TotalHours        : 9.42026111111111E-05

TotalMinutes      : 0.00565215666666667

TotalSeconds      : 0.3391294

TotalMilliseconds : 339.1294

Conclusions

Drawing conclusions from this little experiment is a little dangerous. The reason is that the Measure-Command cmdlet is not really accurate when it comes to measuring millisecond results. Therefore, making a hard and fast conclusion based upon millisecond results is not a best practice. Nevertheless, as a way of summarizing the results, following is a comparison table.

The equality operator found the information and returned results in 351 milliseconds, and that was faster than using the Like operator with a single wildcard. If the results had remained like that, I would have said, “Cool, it proves my point.” However, the Like operator with multiple wildcards returned in 339 milliseconds, and that is completely counterintuitive. Therefore, additional testing is indicated. To prove the point, the results need to take multiple seconds to return, so that we move into the range where the Measure-Command cmdlet is more accurate. At this stage of my testing, I would have to say that there is no difference between using the equality operator and using one or more wildcards. The following table illustrates the results.
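One way to work around the millisecond-accuracy limitation is to execute each query many times and summarize the elapsed times. This is my own sketch, not part of the original methodology, and the repetition count of 100 is arbitrary:

$times = 1..100 | ForEach-Object {
  (Measure-Command {
    Get-WmiObject -Class win32_process -Filter "name = 'explorer.exe'"
  }).TotalMilliseconds
}
$times | Measure-Object -Average -Minimum -Maximum

Keep in mind that repeated runs measure warm-cache performance, which is exactly what the reboots in this experiment were designed to avoid, so the two approaches answer slightly different questions.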

Test                                    Time in Milliseconds
Equality operator                       351
Like operator with single wildcard      429
Like operator with multiple wildcards   339

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Use PowerShell to Find and Explore .NET Framework Classes


Summary: The Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to find and to explore .NET Framework classes.

PoshReflectionExplorer? Or not.

Microsoft Scripting Guy, Ed Wilson, is here. Well, the day finally arrived. This morning the Scripting Wife dropped me off at the airport, and I begin my trek across the United States to Seattle, Washington for the Microsoft internal conference, TechReady 15. I do not know if I have mentioned it or not, but there are 38 Windows PowerShell sessions going on this week at TechReady 15. Dude, I can tell you that I will have my work cut out for me attempting to see all of them. So I have my laptop running the latest build of Windows 8 and the customer preview of Office 2013, I got a free upgrade to First Class, and I have 5 ½ hours of free time during the flight to enjoy. Sweet.

I seem to remember an email or a comment on a recent Hey, Scripting Guy! Blog post about exploring .NET Framework classes via Windows PowerShell. I wrote a Windows PowerShell script to do this very thing more than four years ago when I was working on the Windows PowerShell Scripting Guide book for Microsoft Press. I am not going to show you the script (which is a rather ugly Windows PowerShell 1.0 script), but I will show you the techniques that I used in the script to create my explorer.

 

First find the current appdomain

The first thing to do is to find the current appdomain. There is one used by the Windows PowerShell console, and a different one used for the Windows PowerShell ISE. To find the current appdomain, use the static currentdomain property from the system.appdomain .NET Framework class. (By the way, this works in Windows PowerShell 3.0 as well). First, the current appdomain for the Windows PowerShell console.

PS C:\> [appdomain]::CurrentDomain

 

FriendlyName           : DefaultDomain

Id                     : 1

ApplicationDescription :

BaseDirectory          : C:\WINDOWS\system32\WindowsPowerShell\v1.0\

DynamicDirectory       :

RelativeSearchPath     :

SetupInformation       : System.AppDomainSetup

ShadowCopyFiles        : False

 

Now, using the same command in the Windows PowerShell ISE, you can see different results.

PS C:\Users\ed.IAMMRED> [appdomain]::currentdomain

 

 

FriendlyName           : PowerShell_ISE.exe

Id                     : 1

ApplicationDescription :

BaseDirectory          : C:\Windows\system32\WindowsPowerShell\v1.0\

DynamicDirectory       :

RelativeSearchPath     :

SetupInformation       : System.AppDomainSetup

ShadowCopyFiles        : False

 

The currentdomain static property returns a system.appdomain object. This object contains a number of methods in addition to the displayed properties. I can find this information by piping the results from the currentdomain static property to the Get-Member cmdlet. This command is shown here.

[appdomain]::CurrentDomain | get-member

The method I want to use is the getassemblies method. The getassemblies method is not a static method; but because the currentdomain static property returns a system.appdomain object, I can call the method directly from that object. Here is the command and associated output from the Windows PowerShell console (on a Windows PowerShell 2.0 machine; in Windows PowerShell 3.0, the versions are all v4.0.xxxxx).

PS C:\> [appdomain]::currentdomain.GetAssemblies()

 

GAC    Version        Location

---    -------        --------

True   v2.0.50727     C:\Windows\Microsoft.NET\Framework64\v2.0.50727\mscorlib.dll

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\Microsoft.PowerShell.ConsoleHo...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e0...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Management.Automation\1...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\Microsoft.PowerShell.Commands....

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Core\3.5.0.0__b77a5c561...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Configuration.Install\2...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\Microsoft.WSMan.Management\1.0...

True   v2.0.50727     C:\Windows\assembly\GAC_64\System.Transactions\2.0.0.0__b77...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\Microsoft.PowerShell.Commands....

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\Microsoft.PowerShell.Commands....

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\Microsoft.PowerShell.Security\...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c5619...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Management\2.0.0.0__b03...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.DirectoryServices\2.0.0...

True   v2.0.50727     C:\Windows\assembly\GAC_64\System.Data\2.0.0.0__b77a5c56193...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Configuration\2.0.0.0__...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Security\2.0.0.0__b03f5...

True   v2.0.50727     C:\Windows\assembly\GAC_MSIL\System.Data.SqlXml\2.0.0.0__b7...

The getassemblies method returns instances of the System.Reflection.Assembly .NET Framework class. This class contains a number of very interesting methods and properties. The output from Get-Member on the returned System.Reflection.Assembly .NET Framework class is shown here.

PS C:\> [appdomain]::currentdomain.GetAssemblies() | Get-Member

 

   TypeName: System.Reflection.Assembly

 

Name                      MemberType Definition

----                      ---------- ----------

ModuleResolve             Event      System.Reflection.ModuleResolveEventHandler ...

CreateInstance            Method     System.Object CreateInstance(string typeName...

Equals                    Method     bool Equals(System.Object o)

GetCustomAttributes       Method     System.Object[] GetCustomAttributes(bool inh...

GetExportedTypes          Method     type[] GetExportedTypes()

GetFile                   Method     System.IO.FileStream GetFile(string name)

GetFiles                  Method     System.IO.FileStream[] GetFiles(), System.IO...

GetHashCode               Method     int GetHashCode()

GetLoadedModules          Method     System.Reflection.Module[] GetLoadedModules(...

GetManifestResourceInfo   Method     System.Reflection.ManifestResourceInfo GetMa...

GetManifestResourceNames  Method     string[] GetManifestResourceNames()

GetManifestResourceStream Method     System.IO.Stream GetManifestResourceStream(t...

GetModule                 Method     System.Reflection.Module GetModule(string name)

GetModules                Method     System.Reflection.Module[] GetModules(), Sys...

GetName                   Method     System.Reflection.AssemblyName GetName(), Sy...

GetObjectData             Method     System.Void GetObjectData(System.Runtime.Ser...

GetReferencedAssemblies   Method     System.Reflection.AssemblyName[] GetReferenc...

GetSatelliteAssembly      Method     System.Reflection.Assembly GetSatelliteAssem...

GetType                   Method     type GetType(string name), type GetType(stri...

GetTypes                  Method     type[] GetTypes()

IsDefined                 Method     bool IsDefined(type attributeType, bool inhe...

LoadModule                Method     System.Reflection.Module LoadModule(string m...

ToString                  Method     string ToString()

CodeBase                  Property   System.String CodeBase {get;}

EntryPoint                Property   System.Reflection.MethodInfo EntryPoint {get;}

EscapedCodeBase           Property   System.String EscapedCodeBase {get;}

Evidence                  Property   System.Security.Policy.Evidence Evidence {get;}

FullName                  Property   System.String FullName {get;}

GlobalAssemblyCache       Property   System.Boolean GlobalAssemblyCache {get;}

HostContext               Property   System.Int64 HostContext {get;}

ImageRuntimeVersion       Property   System.String ImageRuntimeVersion {get;}

Location                  Property   System.String Location {get;}

ManifestModule            Property   System.Reflection.Module ManifestModule {get;}

ReflectionOnly            Property   System.Boolean ReflectionOnly {get;}

For instance, one thing you might be interested in finding out is whether the assembly resides in the Global Assembly Cache (GAC). In the Windows PowerShell 2.0 console, all assemblies are in fact in the GAC. But in the Windows PowerShell 2.0 ISE, and in the Windows PowerShell 3.0 console, this is not the case. If you find yourself using an assembly very often, you might want the assembly in the GAC. Here is how to find assemblies from the current appdomain that are not in the GAC.

PS C:\> [appdomain]::currentdomain.GetAssemblies() | where {!($_.globalassemblycache)}

 

GAC    Version        Location                                                                     

---    -------        --------                                                                      

False  v2.0.50727     C:\Windows\system32\WindowsPowerShell\v1.0\PowerShell_ISE.exe                

False  v2.0.50727     C:\Windows\system32\WindowsPowerShell\v1.0\CompiledComposition.Microsoft.Po...

 

Each loaded .NET Framework assembly contributes .NET Framework classes. To see the classes exposed by an assembly, you can use the GetTypes method from the System.Reflection.Assembly class returned by the GetAssemblies method from the appdomain class. As you might expect, there are numerous .NET Framework classes. Interestingly enough, piping to more does not appear to work consistently when working interactively in the Windows PowerShell console, and it does not work at all in the Windows PowerShell ISE. So you might want to consider redirecting the output to a text file. One thing that will help is to sort the output by basetype. Here is the command to do that.

PS C:\> [appdomain]::currentdomain.GetAssemblies() | Foreach-Object {$_.gettypes()} | sort basetype

Do not expect to quickly find exotic, little-known, unused .NET Framework classes. Most of the output, for the IT pro, will be rather pedestrian: lots of error classes, lots of enums, lots of structures, and the like. The output headings appear here:

IsPublic IsSerial Name                                     BaseType                

-------- -------- ----                                     --------

The first couple of pages of output do not even list a base type. Then, when we get to the first grouping of types that do expose a base type, the output is disappointing. Here are the first few lines from that section.

False    True     ModuleLoadExceptionHandlerException      <CrtImplementationDeta...

False    False    CSharpMemberAttributeConverter           Microsoft.CSharp.CShar...

False    False    CSharpTypeAttributeConverter             Microsoft.CSharp.CShar...

False    False    WmiAsyncCmdletHelper                     Microsoft.PowerShell.C...

What is going on here? Remember that last year, I wrote a Hey Scripting Guy! blog entitled, Change a PowerShell Preference Variable to Reveal Hidden Data. Well, if you do not remember it, don't worry; I did not remember the title either. But I did a search for preference variables, and I found it on the first try. Basically, what you need to do is change the $FormatEnumerationLimit preference variable. By default, the enumeration limit value is 4, and so after four items, it does not use any more space. I like to change it to 20.
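Here is a quick sketch of that change; the variable is scoped to the current session, so it only affects the console in which you set it:

```powershell
# Display the current enumeration limit; the default is 4
$FormatEnumerationLimit

# Raise the limit so that formatted output enumerates up to 20 items
$FormatEnumerationLimit = 20
```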

But unfortunately, this does not solve the problem. The problem here is that the .NET Framework class names are extremely long...in some cases, really long. Therefore, using the basic redirection arrow does not help capture all the output. In this case, you need to move beyond the defaults and specify a custom width for the output. The best way to do this is to use the Out-File cmdlet. By setting the width to 180, you will capture most (but not all) of the really long .NET Framework class names. (Each time you make the file wider, you also increase the file size and make the file a bit more difficult to use.) For example, a width of 500 characters will create a file about 8 MB in size. A width of 180 will be around 3.5 MB in size (with over 10,000 lines in it). Here is the command I used.

PS C:\> [appdomain]::currentdomain.GetAssemblies() | % {$_.gettypes()} | sort basetype | Out-File -FilePath c:\fso\gettypes.txt -Width 180 -Append

Now that you have the list, you can peruse it at your leisure. Use Get-Member or MSDN to help you find things. I can tell you from experience that you can spend a very long time looking through it all. Have fun, and I will talk to you on Monday.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

An Introduction to PowerShell Remoting: Part One


Summary: Guest blogger, Jason Hofferle, talks about the basics of Windows PowerShell remoting.

Microsoft Scripting Guy, Ed Wilson, is here. This week I am in Seattle, Washington presenting at Microsoft TechReady 15. I have been talking to Jason for some time, and I thought that now would be a great chance to share some of his insights with us. This is the first in a series of five blogs by Jason.

Photo of Jason Hofferle

Jason Hofferle has been an IT professional since 1997. His experience includes enterprise desktop engineering for a Fortune 500 financial institution and Active Directory design for local governments and law enforcement. Jason currently works for the Defense Contract Management Agency, where he implements new technology such as virtual desktop infrastructure. He recently has been speaking about Windows PowerShell at SQL Saturday and IT Pro Camp events in Florida, and he frequently attends the Tampa PowerShell User Group.

Blog: Force Multiplication through IT Automation
Twitter: @jhofferle

As Windows PowerShell enthusiasts go, I might be considered a late adopter. I’ve kept tabs on Windows PowerShell since it was called Monad, but it wasn’t until it was included with the Windows 7 operating system that I really started using it on a consistent basis. When version 2 was released, it included some game-changing features that convinced me Windows PowerShell was the future. One of these features is PowerShell Remoting, which allows me to run commands on a remote system as if I was sitting in front of it. It provides a consistent framework for managing computers across a network.

When I start explaining PowerShell Remoting to others, sometimes the initial reaction is, “Big deal,” because there are already many techniques available for working with remote computers. We have Windows Management Instrumentation (WMI), which is commonly used with VBScript. We have executables from resource kits or non-Microsoft tools that allow remote management, for example, the Sysinternals PsExec tool. Even many of the Windows PowerShell cmdlets have a ComputerName parameter to specify remote computers. So how does PowerShell Remoting differ from the capabilities we already have? Why should someone go through the trouble of enabling this feature when so many tools are available that don’t have a dependency on PowerShell Remoting?

Many of these methods have their downsides. First of all, there’s no consistency between utilities. One command may require parameters with a slash, the next wants a slash with a colon, and many handle quotation marks differently than others. The knowledge gained from learning one tool doesn’t transfer to another, so when I need to perform a different administrative task, I need to read through documentation and deal with the quirks of a new utility.

Another issue is that many use distributed COM (DCOM) or remote procedure call (RPC) to connect to remote systems. This may work well on a single internal network, but it causes problems when these tools need to traverse firewalls or play nice with intrusion prevention or other security systems. I don’t know too many firewall administrators who want to open up RPC ports. Finally, existing tools sometimes work differently depending on whether a command is being run locally or remotely. I’ve had several occasions using WMI with VBScript where something is working perfectly on my local system, but it fails miserably when I try it on a remote computer because that particular application programming interface (API) can only be used locally. Wouldn’t it be nice if we could have consistent management commands that worked no matter where they were being run?

PowerShell Remoting is a solution to some of the security and consistency issues that IT professionals currently work around. It’s built on Microsoft’s implementation of the Web Services for Management (WSMan) protocol, and it uses the Windows Remote Management (WinRM) service to manage communication and authentication. This framework was designed to be a secure and reliable method for managing computers that’s built on well-known standards like Simple Object Access Protocol (SOAP) and Hypertext Transfer Protocol (HTTP).

Unlike utilities that use various programming interfaces to talk to a remote computer, PowerShell Remoting connects my local Windows PowerShell session with another session running on the remote system. The commands that I enter are sent to the remote computer, executed locally, and then the results are sent back. Because all commands run locally, I don’t have to worry about an individual cmdlet lacking the plumbing to work across my network. Everything runs on the same framework, so I only need to learn the Windows PowerShell way of executing remote commands.

A major advantage over other methods of remote management is that a single port is used for every application that uses WSMan. Instead of poking different holes in a firewall for every application, only the port used by WSMan needs to be configured, and the WinRM service will make sure the traffic gets routed to the correct application.
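If you are curious which port that is, the winrm command-line tool can enumerate the configured listeners (WinRM 2.0 defaults to port 5985 for HTTP and 5986 for HTTPS):

```powershell
# Enumerate the configured WinRM listeners, including the port each one uses
winrm enumerate winrm/config/listener
```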

There are several authentication methods, including Kerberos protocol and Windows Challenge/Response. The communication between two computers is encrypted at the protocol layer, except when basic access authentication is used, which is intended for use with Hypertext Transfer Protocol Secure (HTTPS) sessions.

Besides the simplicity of PowerShell Remoting (after it’s configured, there is very little to worry about), there are some massive performance benefits when using one-to-many or fan-out remoting. These performance benefits convinced me to start converting some of my VBScript scripts into Windows PowerShell because it saved so much time. With fan-out remoting, I provide Windows PowerShell a list of computers along with the command I want them to run. Windows PowerShell “fans-out” and sends the command to the remote computers in parallel. Each remote system runs the command locally and sends the results back. This is different from the common VBScript technique of using a foreach loop to perform operations against a list of computers, one at a time.

When talking about PowerShell Remoting at a conference or similar event, it’s difficult to demonstrate the benefits because fan-out doesn’t really reach its potential until I throw hundreds or thousands of computers at it. It sounds powerful on paper, but I needed some real-world numbers to help communicate the effectiveness. I also needed some data to convince my own organization, so I performed some tests that would help articulate how powerful this feature can be.

A scenario where I commonly use PowerShell Remoting is when I need to query a large number of computers for a specific event. For my performance testing, I decided to search the security event log for the last twenty log-on events. To get baseline data without using PowerShell Remoting, I stored a list of computer names in a $Computers variable and piped it to a loop.

$Computers | foreach { Get-WinEvent -FilterHashTable @{logname="security";id=4624} -MaxEvents 20 -ComputerName $_ }

For the comparison, I used the same Get-WinEvent cmdlet, but in conjunction with Invoke-Command, which is a PowerShell Remoting command. Invoke-Command takes my list of computer names and tells them to run the command specified in the script block. The ThrottleLimit parameter tells Windows PowerShell to connect to 50 computers simultaneously.

Invoke-Command -ComputerName $Computers -ScriptBlock { Get-WinEvent -FilterHashTable @{logname="security";id=4624} -MaxEvents 20 } -ThrottleLimit 50

Image of results

By using a foreach loop, similar to how it might be done with VBScript or without PowerShell Remoting, it took over six hours to complete the operation against 100 computers. By using PowerShell Remoting, it took 15 seconds. This is a real-world situation on a production network against Windows 7 computers that were, in many cases, multiple wide area network (WAN) hops away. By using this same command, I increased the number of computers to see how well it scaled.

Image of results

With PowerShell Remoting, I can retrieve the last twenty log-on events from the local security log on 1000 workstations in a little over two minutes.

PowerShell Remoting is the killer feature in Windows PowerShell. When it’s configured in an environment, it provides a transparent and efficient framework for managing computers. It has saved me countless hours and simplified many daily tasks. No matter what type of environment you have, PowerShell Remoting is worth checking out.

~Jason

Thank you, Jason, for an excellent blog. We look forward to Part Two tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

An Introduction to PowerShell Remoting Part Two: Configuring PowerShell Remoting


Summary: Guest Blogger, Jason Hofferle, continues his series about PowerShell Remoting.

Microsoft Scripting Guy, Ed Wilson, is here. This week I am in Seattle, Washington speaking at Microsoft TechReady 15. Therefore, we have a series written by guest blogger, Jason Hofferle, about PowerShell Remoting.

In the first blog post of this series, An Introduction to PowerShell Remoting: Part One, I took a look at what PowerShell Remoting is and how it takes advantage of the Web Services for Management (WSMan) framework to provide a uniform way to manage remote computers. Maybe after seeing some of the possible performance benefits, you’ve decided to at least take a closer look at what’s required to get it up and running in your environment. In this post I’m going to discuss the requirements and configuration.

PowerShell Remoting requires that Windows PowerShell 2.0 is installed on all computers that are being remotely managed or being used to connect to those remote systems. Windows 7 and Windows Server 2008 R2 include Windows PowerShell 2.0 with the operating system. For older operating systems, the Windows Management Framework Core can be downloaded and installed on Windows Vista, Windows XP, Windows Server 2008, and Windows Server 2003. The framework includes WinRM 2.0 and Windows PowerShell 2.0, and it requires the Common Language Runtime (CLR) 2.0, which is included with the Microsoft .NET Framework 2.0 or later.
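A quick way to verify the prerequisites on a given computer is to check the built-in version table, which exists only in Windows PowerShell 2.0 and later:

```powershell
# $PSVersionTable reports the Windows PowerShell, WSMan, and CLR versions;
# it does not exist in Windows PowerShell 1.0, so an error here is itself informative
$PSVersionTable
```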

Even on operating systems that include the necessary components, PowerShell Remoting is disabled by default, so it needs some configuration before it can be utilized. The WinRM service needs to be running, and a listener has to be configured, which tells the computer to listen for incoming connections. Also, the Windows firewall needs to be configured with rules to allow incoming connections.

PowerShell Remoting is incredibly simple to configure in a domain environment by using Group Policy. On a server operating system, the Windows Remote Management service is set to start automatically, but on a client operating system, this needs to be configured. Setting services to start automatically can be done at Computer Configuration\Windows Settings\Security Settings\System Services.

Image of menu

To set up the listener, the Enable Automatic Configuration of Listeners setting can be configured at Computer Configuration\Administrative Templates\Windows Components\Windows Remote Management (WinRM)\WinRM Service. An IP can be specified for systems that have multiple IP addresses assigned, or asterisks can be used to listen to all addresses.

Image of menu

The firewall exception can be added at Computer Configuration\Administrative Templates\Network\Network Connections\Windows Firewall\Domain Profile. If there are no Windows XP or Windows Server 2003 systems that need to be configured, the firewall exceptions can also be configured through Computer Configuration\Windows Settings\Security Settings\Windows Firewall with Advanced Security\Inbound Rules by using a predefined rule for Windows Remote Management.

Image of menu

If Group Policy isn’t an option, or PowerShell Remoting needs to be configured on an individual basis, the Enable-PSRemoting cmdlet can be used to perform the tasks of enabling the WinRM service, configuring the listener, and putting firewall rules into place.

Image of command output
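The command itself is a one-liner. Here is a minimal sketch; the Force parameter suppresses the confirmation prompts:

```powershell
# Start the WinRM service, create an HTTP listener, and add the
# required firewall exceptions in a single step
Enable-PSRemoting -Force
```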

PowerShell Remoting is pretty straightforward to configure when all the computers are joined to the same Active Directory domain and running on the same network. Going beyond the internal corporate network scenario requires some additional configuration depending on the particular situation.

One of the first issues commonly experienced is the concept of trusted hosts. When I’m connecting to a remote computer, I want to verify that computer’s identity before passing it my user credentials. When using Kerberos authentication in a domain, Windows PowerShell knows that it can trust the other computer because the domain controller is capable of verifying that system’s identity. When not in a domain environment, Windows PowerShell has no way of knowing if the system you’re trying to connect to is a malicious system spoofing as a legitimate computer.

So I either need a way to verify that computer’s identity, or bypass the security precaution. By having a certificate installed on the computers from a trusted certification authority (CA), that certificate can be used to verify the system’s identity. The alternative is to modify the trusted hosts section of the WinRM configuration to say, “I know the identity of this system cannot be verified, but let me connect anyway.” Even in a domain environment, trusted hosts may need to be configured if using IP addresses to specify computers. Kerberos protocol will only work with computer names, so Windows PowerShell will default to NTLM authentication any time an IP address is used.

Image of menu

There are many WSMan configuration options, and not all of them can be managed with Group Policy. Windows PowerShell provides a WSMan: drive that can be used to view and modify the configuration.

cd WSMan:\localhost\client

Set-Item -Path TrustedHosts -Value *.testlab.local -Force

Image of command output

Another option for configuring WSMan is the winrm command-line tool (which is implemented as a VBScript). I like using it to view my configuration because it tells me which settings are being configured with a Group Policy Object.

Image of command output
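For example, this command dumps the entire WinRM configuration; settings delivered through Group Policy are flagged with a GPO source marker in the output:

```powershell
# View the full WinRM configuration, including any GPO-controlled settings
winrm get winrm/config
```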

There are many different ways that PowerShell Remoting can be configured, and beyond the basics, it really depends on the specifics of the environment. Fortunately, there is a wealth of information about these scenarios and more in the about_remote_requirements and about_remote_troubleshooting Help files, which provide solutions for dealing with various issues when you are trying to get PowerShell Remoting working.

~Jason

Awesome job, Jason. Thank you for sharing your insights with us. We look forward to Part Three tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

An Introduction to PowerShell Remoting Part Three: Interactive and Fan-Out Remoting


Summary: Guest blogger, Jason Hofferle, talks about Windows PowerShell Interactive and fan-out remoting.

Microsoft Scripting Guy, Ed Wilson, is here. TechReady 15 in Seattle has been a great event. I am really enjoying getting to see friends. Of course, I am also really enjoying Jason’s series about PowerShell Remoting.

In Part Two of this series, Configuring PowerShell Remoting, I discussed how to configure PowerShell Remoting in your environment. Now we’re going to take a look at how it can be used after it’s up and running.

There are two primary PowerShell Remoting usage paradigms for IT professionals: interactive and fan-out. Interactive remoting is used when I need to interact with a remote computer as if I was sitting directly in front of the system, logged into the console. Fan-out remoting is used when I have a single command or script that I want to run on a group of computers. It could be two systems, or two thousand systems. Whenever I need a command to efficiently execute on a large number of systems, fan-out is the way to go.

To use interactive remoting, or one-to-one remoting, I utilize the Enter-PSSession cmdlet with the ComputerName parameter. When my prompt changes to reflect the remote computer’s name, I know that I’m interacting with the remote system. This is great for tasks that have no built-in support for running against remote systems, such as registering a dynamic-link library (DLL) to correct an issue. When I’m finished working on the remote computer, the Exit-PSSession cmdlet closes the session, and my prompt returns to the local operating system.

Enter-PSSession -ComputerName DC1

Set-Location C:\Windows\System32

Regsvr32.exe .\capiprovider.dll /s

Image of command output

When I want to use fan-out remoting, or one-to-many, I turn to the Invoke-Command cmdlet. This time I use a list of computer names for the ComputerName parameter, and I provide the command that I want them to run for the ScriptBlock parameter. Because the command executes on the remote computer, tasks such as searching and filtering the event log are performed locally, and only the information I want is sent over the network. When using Invoke-Command, each returned object has a PSComputerName property added, which enables me to determine which remote computer each object came from.

Invoke-Command -ComputerName DC1,Win7,Win7-2 -ScriptBlock {Get-Service The*}

Image of command output

Typing commands into a script block can be tedious and error prone when it’s more than something simple. Invoke-Command has a FilePath parameter that can be used when an entire script needs to be executed remotely. Windows PowerShell takes the .ps1 file on the local computer and converts it into a script block automatically.

Image of menu

Invoke-Command -ComputerName DC1,Win7 -FilePath C:\MyScript.ps1

Image of command output

PowerShell Remoting can also be used in conjunction with background jobs. The AsJob parameter of Invoke-Command allows a long-running PowerShell Remoting command to run in the background, freeing up the Windows PowerShell console for other tasks. When the job has completed, the results can be retrieved with the Receive-Job cmdlet.

Invoke-Command -ComputerName DC1,Win7 -ScriptBlock {Get-Service WinD*} -AsJob

Image of command output
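The job results are collected like any other background job. Here is a sketch; the job ID shown is only an example, so use Get-Job first to find the actual ID:

```powershell
# List the running and completed jobs
Get-Job

# Retrieve the results; -Keep leaves them available for later retrieval
# (the ID of 1 is an illustrative assumption)
Receive-Job -Id 1 -Keep
```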

PowerShell Remoting is extremely useful in situations where I need to quickly collect information from systems, like performing ad-hoc queries on event logs. During a recent deployment of Windows 7, we experienced frequent issues with Outlook losing connectivity with Exchange. It was determined that a specific chip set combined with a particular driver on this particular operating system enabled a power-saving feature on the network adapter. Every time the monitor went into sleep mode, the adapter renegotiated the network speed to the lowest possible value. This disconnected the network for a few seconds, which was enough to cause Outlook to complain. Users would come back from a meeting and find that Outlook wasn’t working correctly.

After the fix was deployed to the Windows 7 test group, we needed to prove that the issue had been resolved. I used PowerShell Remoting to collect network disconnection events from our Windows 7 systems, exported the results to a comma separated values file, and then used Microsoft Excel to generate a chart showing how the disconnection events significantly dropped after the fix. In a few minutes, I was able to produce hard evidence that we resolved our remaining issue and get the green light for Windows 7 deployment.

~Jason

Way cool stuff, Jason. Thank you for taking the time to share with us today. We look forward to Part Four tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

An Introduction to PowerShell Remoting Part Four: Sessions and Implicit Remoting


Summary: Guest blogger, Jason Hofferle, talks about creating Windows PowerShell sessions and using implicit remoting.

Microsoft Scripting Guy, Ed Wilson, is here. Jason continues to hit home run after home run this week. Today is no exception. He talks about one of the coolest features in Windows PowerShell—that of implicit remoting. Here is Jason.


In Part Three of this series, Interactive and Fan-Out Remoting, I talked about using Enter-PSSession and Invoke-Command to run commands on remote computers. In this post, I’m going to get into persistent sessions and using implicit remoting.

When using the ComputerName parameter with Invoke-Command, authentication is completed, the remoting session is established, the command is run, objects are sent back, and the remoting session is torn down. This works fine if there’s only a single command that needs to be run. But what if there’s a group of computers that need to be managed throughout the day? It’s not very efficient to go through all that overhead each time a command is run if you’re going to be managing a group of servers constantly. With PowerShell Remoting, we have the concept of sessions.

A session is a persistent connection with the remote computer. The New-PSSession cmdlet is used to open a session with one or more computers. Existing sessions can be viewed with the Get-PSSession cmdlet. By using a variable to reference the sessions I’ve created, it’s easy to use the Session parameter instead of the ComputerName parameter on Invoke-Command. Now Invoke-Command will use the existing sessions and avoid the overhead of initializing and tearing down a session each time I run a command.

$session = New-PSSession -ComputerName DC1,Win7-2

Invoke-Command -Session $session -ScriptBlock {Get-Process -Name lsass}

Image of command output

I can store several sessions in a single variable, and add sessions to this variable later. All the sessions can be created and stored in a variable, and I can run the same command against them easily. This is useful if I need different credentials to access different computers. I can start sessions with different connection options, store them in a single variable, and run commands against them all.

Image of command output
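A hedged sketch of that technique follows; the computer name Srv1 and the use of Get-Credential are illustrative assumptions, not part of the original demonstration:

```powershell
# Open sessions under the current credentials...
$session = New-PSSession -ComputerName DC1,Win7-2

# ...then add a session that uses alternate credentials (Srv1 is a placeholder name)
$cred = Get-Credential
$session += New-PSSession -ComputerName Srv1 -Credential $cred

# One command now runs against every session stored in the variable
Invoke-Command -Session $session -ScriptBlock { Get-Service WinRM }
```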

If I need to work interactively with a remote computer, I can use the Session parameter of Enter-PSSession to utilize an existing session. I can use array notation to access a specific PSSession in my $session variable, or I can pipe the session object of Get-PSSession to the Enter-PSSession cmdlet.

Image of command output
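Both techniques look something like this sketch, assuming the $session variable from the earlier example:

```powershell
# Array notation: enter the first session stored in the variable
Enter-PSSession -Session $session[0]
Exit-PSSession

# Or pipe a session object from Get-PSSession into Enter-PSSession
Get-PSSession | Select-Object -First 1 | Enter-PSSession
```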

Sessions also enable a very useful capability called implicit remoting. If I’m sitting at my workstation, there may be modules and snap-ins that I’ve installed to extend the capabilities in Windows PowerShell. But if I don’t happen to be sitting at my own computer or I’ve had to rebuild my administration workstation, I might not have those cmdlets available. Wouldn’t it be nice if I didn’t have to install the Remote Server Administrator Tools (RSAT) when I needed to run the Microsoft Active Directory cmdlets?

To use implicit remoting, I start a Windows PowerShell session with a computer that already has the modules, snap-ins, or tools I need installed. In this case I want to use the Active Directory cmdlets, so I’m connecting to a domain controller. Then I use Invoke-Command to load the Active Directory module into my Windows PowerShell session on the domain controller. Finally, I use Import-PSSession with the Module parameter to automatically generate a local proxy function for each cmdlet in the module I specified. Now I can use these remote cmdlets as if they were installed locally.

$dcSession = New-PSSession -ComputerName DC1

Invoke-Command -Session $dcSession -ScriptBlock {Import-Module ActiveDir*}

Import-PSSession -Session $dcSession -Module ActiveDir*

Image of command output

When I type a local cmdlet, Windows PowerShell calls the local cmdlet. When I type one of these imported cmdlets, Windows PowerShell calls the proxy function that takes care of the remote call for me. Windows PowerShell “implicitly” uses remoting to make everything appear like it’s happening locally. I can open sessions to my domain controller or to my servers running Exchange Server or SQL Server, and I can use Windows PowerShell to manage them all without having any of the management tools installed locally.
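For instance, assuming the ActiveDirectory module was imported through the session as shown above, a proxied cmdlet can be called exactly like a local one (Get-ADUser here is supplied by that module):

```powershell
# This looks local, but the proxy function forwards the call to DC1
# and deserializes the results back into this session
Get-ADUser -Filter * | Select-Object -First 5
```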

~Jason

WooHoo. Awesome job, Jason. Thank you for sharing your insights with us. We look forward to the exciting conclusion tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


An Introduction to PowerShell Remoting Part Five: Constrained PowerShell Endpoints


Summary: Guest blogger, Jason Hofferle, talks about creating constrained Windows PowerShell endpoints.

Microsoft Scripting Guy, Ed Wilson, is here. Today is the exciting conclusion to Jason Hofferle’s excellent series of articles about Windows PowerShell remoting. I think today’s article is the most important of the bunch because it illustrates a killer security feature. Here is Jason to tell you about creating constrained Windows PowerShell endpoints.


In Part Four of this series, Sessions and Implicit Remoting, I talked about PowerShell sessions and implicit remoting, which allows commands to behave like they are being run locally when they are actually being run transparently on a remote system. In the final blog in this series, I’m going to discuss constrained endpoints, which allow me to control exactly what cmdlets can be used when I am connected to a remote computer.

When I use remoting to connect to a computer, I’m connecting to an endpoint. The Get-PSSessionConfiguration cmdlet enables me to view the currently registered endpoints. When I use Set-PSSessionConfiguration with the ShowSecurityDescriptorUI parameter, I can view the permissions for an endpoint. The default Windows PowerShell endpoints only allow access to members of the local administrators group. These permissions can be modified, or entirely new endpoints can be created.
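As a quick illustration, the endpoint list and its permissions can be inspected like this (a minimal sketch; Microsoft.PowerShell is the standard default endpoint name, but the output varies by system):

```powershell
# List the registered endpoints and the permission string assigned to each
Get-PSSessionConfiguration | Format-Table Name, Permission -AutoSize

# Open the graphical ACL editor for the default endpoint (requires an elevated console)
Set-PSSessionConfiguration -Name Microsoft.PowerShell -ShowSecurityDescriptorUI
```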

So why would I want to create a new endpoint? It’s useful for delegation, because not only can I allow others to connect to that endpoint without granting them administrative rights to the computer, but I can also control precisely what commands they are allowed to run on that endpoint. Each endpoint can have an associated startup script that runs whenever a connection is made to that endpoint. The startup script can be used to automatically run commands, load modules, or constrain the session to limit what it can be used for.

Let’s say I want to grant my Help Desk staff access to run some commands on a server. First I’m going to create a startup script that will constrain the session. I start by restricting the session to the point where it’s useless, and then I expose only what’s required for the session to work properly, along with the commands that I want my Help Desk to be able to run.

foreach ($command in Get-Command)

{

    $command.Visibility = "private"

}

 

foreach ($variable in Get-Variable)

{

    $variable.Visibility = "private"

}

 

$ExecutionContext.SessionState.Applications.Clear()

$ExecutionContext.SessionState.Scripts.Clear()

$ExecutionContext.SessionState.LanguageMode = "NoLanguage"

 

$InitialSessionState =

  [Management.Automation.Runspaces.InitialSessionState]::CreateRestricted(

    "remoteserver")

foreach ($proxy in $InitialSessionState.Commands | where { $_.Visibility -eq "Public"})

{

    $cmdlet = Get-Command -Type cmdlet -ErrorAction silentlycontinue $proxy.name

    if ($cmdlet)

    {

        $alias = Set-Alias "$($proxy.name)" "$($cmdlet.ModuleName)\$($cmdlet.Name)" -PassThru

        $alias.Visibility = "Private"

    }

    Set-Item "function:global:$($proxy.Name)" $proxy.Definition

}

In his book, Windows PowerShell in Action, Second Edition, Bruce Payette uses the InitialSessionState .NET class as an easy way to expose the cmdlets that are required for the session to function correctly. Up to this point, his code is boilerplate that can be used to constrain any endpoint. Now I can start exposing the commands that I want my Help Desk to see. One way to expose certain cmdlets is to change their visibility back to public.

$allowedCmdlets = @("Get-Date","Format-Wide")

Get-Command | Where-Object {$allowedCmdlets -contains $_.Name} |

    foreach {$_.Visibility = "Public"}

If I want to expose an executable or a script, it needs to be added to the endpoint’s list of allowed applications, which requires a different method.

$ipConfig = (Get-Command ipconfig.exe).Definition

$ExecutionContext.SessionState.Applications.Add($ipConfig)

Custom functions can also be defined in the startup script. What’s interesting about this is that the staff connecting to this endpoint will have access to the function, but they won’t be able to use the cmdlets inside the function.

Function Get-ServerInfo

{

    $CS = Get-WmiObject -Class Win32_ComputerSystem

    $OS = Get-WmiObject -Class Win32_OperatingSystem

    $Printer = Get-WmiObject -Class Win32_Printer

    $MappedLogicalDisk = Get-WmiObject -Class Win32_MappedLogicalDisk

 

    $Result = New-Object PSObject -Property @{

        UserName = $CS.UserName

        ComputerName = "$($CS.DNSHostName).$($CS.Domain)"

        OSArchitecture = $OS.OSArchitecture

        OSName = $OS.Caption

        OperatingSystemVersion = $OS.Version

        OperatingSystemServicePack = "$($OS.ServicePackMajorVersion).$($OS.ServicePackMinorVersion)"

        DefaultPrinter = ($Printer | Where-Object {$_.Default}).Name

        TypeOfBoot = $CS.BootupState

        LastReboot = $OS.ConvertToDateTime($OS.LastBootUpTime).ToString()

        Drive = $MappedLogicalDisk |

            Select-Object @{Name='Drive Letter';Expression={$_.DeviceID}},

            @{Name='Resource Path';Expression={$_.ProviderName}}

    }

   

    Write-Output $Result

}

With my startup script finished, I can use Register-PSSessionConfiguration to create a new endpoint called HelpDesk.

Register-PSSessionConfiguration -Name HelpDesk -StartupScript C:\StartupScript.ps1 -Force

Image of command output

Now I can use Set-PSSessionConfiguration to grant my Active Directory HelpDesk group permission to connect to the endpoint by using the remoting commands.

Set-PSSessionConfiguration -Name HelpDesk -ShowSecurityDescriptorUI -Force

Image of command output

Now that the PowerShell Remoting endpoint has been constrained with a startup script, my Help Desk staff can access the endpoint, and they’ll only see the commands that are available to them.

Image of command output

This is also an ideal situation to use implicit remoting because I can provide access to custom functions in a single place, without needing to distribute custom modules and scripts. I can update these commands in a single place. They can be used as if they were local, but I don’t have to worry about staff using an old version of a script.
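As a sketch of how a Help Desk technician might combine the constrained endpoint with implicit remoting (the server name here is a placeholder):

```powershell
# Connect to the constrained HelpDesk endpoint rather than the default endpoint
$session = New-PSSession -ComputerName server -ConfigurationName HelpDesk

# Import the endpoint's commands into the local session as proxy functions
Import-PSSession -Session $session

# The custom Get-ServerInfo function now runs transparently on the server
Get-ServerInfo
```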

It’s important to note that PowerShell Remoting is security neutral. My Help Desk now has access to connect to my server; however, they cannot do anything that they couldn’t already do. If they don’t have rights to delete user accounts, providing them access to the Remove-ADUser cmdlet will not give them the capability to remove user accounts. Permission-denied errors will still occur, just as if they were using the Active Directory Users and Computers MMC snap-in.

The concept of constrained endpoints is used to great effect with Microsoft Office 365 and hosted Exchange Server. I have the ability to connect to a common endpoint, which exposes only the commands available for me to manage my mailboxes. This gives me the capability to automate my hosted Exchange Server environment, even though it’s completely off my network.

Image of command output

Remoting is truly the killer feature of Windows PowerShell. It’s incredibly useful now, and it has lots of future potential. The WSMan framework is a secure and reliable foundation on which Microsoft and non-Microsoft software developers and hardware manufacturers can build their management tools. Remoting can be used to create solutions today, and it will only get more impressive with the release of Windows PowerShell 3.0 in Windows 8 and Windows Server 2012.

Additional Resources

Administrator’s Guide to Windows PowerShell Remoting

Layman’s Guide to PowerShell 2.0 Remoting

Secrets of PowerShell Remoting

~Jason

Thank you, Jason, for an awesome series on remoting. Join us tomorrow for guest blogger, Will Steele.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Weekend Scripter: Using PowerShell to Aid in Security Forensics


Summary: Guest blogger, Will Steele, discusses using Windows PowerShell to aid with security forensics.

Microsoft Scripting Guy, Ed Wilson, is here. I have had many interesting email threads with Will Steele, and I have even spoken at the Dallas Fort Worth PowerShell User Group via Live Meeting. Therefore, it is with great pride that I introduce Will Steele.

Photo of Will Steele

Will Steele lives near Dallas, Texas, with his wife and three kids. He works as a senior network analyst at a financial services provider, where he manages a document imaging system with a heavy investment in Microsoft enterprise technologies. Last year, Will started the Dallas-Fort Worth PowerShell users group, and he contributes to the Windows PowerShell community in forums and through his blog.

Blog: Another computer blog

Take it away Will…

Here’s a hypothetical look at how Windows PowerShell can help with forensic registry analysis. The scenario: You are a systems admin for a large IT corporation. You learn that a spreadsheet containing highly sensitive information was accessed without permission on a server in your group the previous day. Your task? Verify who opened it, with one condition: you can’t use any non-Microsoft tools. You start by coming up with two simple questions:

  • Who accessed the server within the past 24 hours?
  • How, when, and where was the file accessed?

Some logs indicated which machine accessed the file, but they didn’t indicate the user. Only a handful of people are possible candidates, because the number of folks with full administrative rights and access to the servers is small. Conferring with your manager about who was working yesterday, you come up with a list of four possible candidates.

Getting down to work, you launch Windows PowerShell and plan to keep an audit trail of what you do. A log will serve perfectly as documentation of your research, so you run these commands:

md C:\research

Start-Transcript -Path C:\research\analysis.log

You then decide to see if any of these people were out of the office during the time of the incident. Remoting is enabled on your domain for all of your administrators, so their workstations will allow you to query the event logs remotely. You need an event log query to develop a timeline of logon/logoff events, and you first run it against the server. Because all the workstations run Windows 7, you use this command to check for logon/logoff event IDs:

get-winevent -FilterHashTable @{LogName='Security'; StartTime='6/27/2012 12:00:00am'; ID=@(4624,4625,4634,4647,4648)} |

select timecreated,id

To identify which people you may need to look at more closely, you remotely query each machine to build a cross-reference based on logon/logoff events. To save typing, you store the hash table from your server search in an $eventhashtable variable and pass it to the Get-WinEvent cmdlet inside a loop that checks the four workstations.

$eventhashtable = @{LogName='Security'; StartTime='6/27/2012 12:00:00am'; ID=@(4624,4625,4634,4647,4648)};

'workstation01', 'workstation02', 'workstation03', 'workstation04' | % {

            Write-Output "Retrieving logs for $_ at $(Get-Date)"

            Get-WinEvent -ComputerName $_ -FilterHashTable $eventhashtable | Select-Object timecreated,id

}

Moving on to the server, you learn that it hasn’t been rebooted since last night. This increases the likelihood that the registry still contains pertinent information. You now turn to the machine to get more details. First things first: getting to the machine without arousing suspicion. Thankfully, in Windows PowerShell, this is a trivial task.

Enter-PSSession -ComputerName server

To determine which hives to look at, you enumerate the loaded user hives. This command will list all users on the machine by name and SID:

if(-not(Test-Path HKU:\))

{          

            New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS

}

 

dir HKU:\ |

Where {($_.Name -match 'S-1-5-[0-2][0-2]-') -and ($_.Name -notmatch '_Classes')} |

Select PSChildName |

% {

            (([ADSI] ("LDAP://<SID=" + $_.PSChildName + ">")).userPrincipalName -split '@')[0] + " - " + $_.PSChildName

}

This maps the HKEY_USERS hive to a PSDrive and passes the SID values to the domain controller via an ADSI LDAP call, which returns the UserPrincipalName. You know that there’s a good chance the UserPrincipalName will match the name of the C:\Users\&lt;profile&gt; folder on the server. The command returns the following information.

admin01 - S-1-5-21-123456789-1234567890-1234567890-8901

admin02 - S-1-5-21-123456789-1234567890-1234567890-8902

admin03 - S-1-5-21-123456789-1234567890-1234567890-8903

admin04 - S-1-5-21-123456789-1234567890-1234567890-8904

superadminjrich - S-1-5-21-123456789-1234567890-1234567890-1472

superadminjmiller - S-1-5-21-123456789-1234567890-1234567890-1567

superadminmcruz - S-1-5-21-123456789-1234567890-1234567890-3245

To double-check these users, you run a Get-WmiObject cmdlet as a sanity check.

Get-WmiObject –Class Win32_NetworkLoginProfile | select caption,lastlogon

The following WMI result set verifies that your list is valid.

caption                           lastlogon

-------                           ---------

NT AUTHORITY\SYSTEM

NT AUTHORITY\LOCAL SERVICE

NT AUTHORITY\NETWORK SERVICE

Admin01                           20120620095738.000000-480

Admin02                           20120226122356.000000-480

Admin03                           20120627144745.000000-480

Admin04                           20120627150336.000000-480

superadminjrich                       20120313150319.000000-480

superadminjmiller                      20120627145121.000000-480

superadminmcruz                       20120417020307.000000-480
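The lastlogon values in that output are raw WMI datetime strings. If you want them as readable dates, a sketch like this converts them by using the ManagementDateTimeConverter class:

```powershell
# Convert raw WMI datetime strings (yyyyMMddHHmmss.ffffff-UTCoffset) to DateTime objects
Get-WmiObject -Class Win32_NetworkLoginProfile |
    Where-Object { $_.LastLogon } |
    Select-Object Caption,
        @{Name='LastLogon'; Expression={ [Management.ManagementDateTimeConverter]::ToDateTime($_.LastLogon) }}
```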

The four accounts in question do have local profile data on the server, so you move into phase 2: finding out when and where the file was accessed on the server. Noticing that Admin01 and Admin02 hadn’t logged on to the server recently eliminates them. Rather than work directly against the hives on the live server, you copy the NTUSER.DAT files locally and disconnect from the server:

'admin03','admin04' |

% {

            md "C:\research\$_"

            copy "\\server\c$\users\$_\ntuser.dat" "C:\research\$_"

}

Exit-PSSession

To start exploring the files, you need to load them into your current session. This old reg.exe command does the trick:

reg load HKLM\admin03 C:\research\admin03\ntuser.dat

Admin03’s hive can now be accessed locally under HKLM:\admin03. This way, you can explore the profile as if it were yours. To be sure this worked as expected, you check with regedit.

Image of menu

Switching over to Windows PowerShell, you start by examining most recently used (MRU) lists for this user. You recall that your manager mentioned a spreadsheet, so you look at several keys without finding the file. Finally, you find a key that piques your curiosity:

HKLM:\admin03\Software\Microsoft\Office\14.0\Excel\File MRU

Exploring the contents of the key is as simple as running this command:

PS HKLM:\admin03\Software\Microsoft\Office\14.0\Excel\File MRU > Get-ItemProperty .

When run, it produces this:

  Hive: HKEY_LOCAL_MACHINE\admin03\Software\Microsoft\Office\14.0\Excel

Name              Property

----              --------

File MRU            Max Display : 25

                Item 1   : [F00000000][T01CD5496156B3EF0][O00000000]*C:\Data\Documents\Powershell\Projects\Encoding\FormatTable.xlsx

You notice some odd values prefixing the file paths. The bracketed values appear to be metadata for Excel. Interestingly, [T01CD5496156B3EF0] is a 64-bit Windows date and time stamp that is stored as hexadecimal. To convert it from the registry value to a [DateTime] object, you use the following:

PS HKLM:\admin03\Software\Microsoft\Office\14.0\Excel\File MRU> Get-ItemProperty . | `

select 'item *' | `

% {$_ -split '\[T'} | % {$_ -split '\]\['} | Where {$_ -notmatch '\\'} | `

% {([Datetime][Convert]::ToInt64($_,16)).AddHours(-8)}

A list of times ordered according to how they appear in the key is produced, but you notice that there is something weird. All the time stamps are exactly 1600 years (and a few hours) off:

Wednesday, June 27, 0412 12:53:04 PM

You recall that these are Windows FILETIME values, which count 100-nanosecond intervals from January 1, 1601, whereas .NET DateTime ticks count from January 1, 0001. You accommodate for this with this change:

% {[DateTime]::FromFileTime([Convert]::ToInt64($_,16))}

There is proof that the file was opened when Admin03 was on call:

Wednesday, June 27, 2012 12:53:04 PM

To validate your research, you use some C# to read LastWriteTime values directly from the registry:

$signature = @"

using Microsoft.Win32.SafeHandles;

using System;

using System.Runtime.InteropServices;

using System.Text;

 

namespace Forensics

{

  public class Registry

  {

    private static readonly IntPtr HKEY_DYN_DATA = new IntPtr(-2147483642);

    private static readonly IntPtr HKEY_CURRENT_CONFIG = new IntPtr(-2147483643);

    private static readonly IntPtr HKEY_PERFORMANCE_DATA = new IntPtr(-2147483644);

    private static readonly IntPtr HKEY_USERS = new IntPtr(-2147483645);

    private static readonly IntPtr HKEY_LOCAL_MACHINE = new IntPtr(-2147483646);

    private static readonly IntPtr HKEY_CURRENT_USER = new IntPtr(-2147483647);

    private static readonly IntPtr HKEY_CLASSES_ROOT = new IntPtr(-2147483648);

                                               

    private const int KEY_QUERY_VALUE = 1;

    private const int KEY_SET_VALUE = 2;

    private const int KEY_CREATE_SUB_KEY = 4;

    private const int KEY_ENUMERATE_SUB_KEYS = 8;

    private const int KEY_NOTIFY = 16;

    private const int KEY_CREATE_LINK = 32;

    private const int KEY_WRITE = 0x20006;

    private const int KEY_READ = 0x20019;

    private const int KEY_ALL_ACCESS = 0xF003F;

    public DateTime last;

                                               

    [DllImport("advapi32.dll", CharSet = CharSet.Auto)]

    private static extern int RegOpenKeyEx(

                SafeRegistryHandle hKey,

                string lpSubKey,

                uint ulOptions,

                uint samDesired,

                out SafeRegistryHandle hkResult

                                                );

                                               

    [DllImport("advapi32.dll", CharSet = CharSet.Auto)]

    private static extern int RegQueryInfoKey(

                SafeRegistryHandle hKey,

                StringBuilder lpClass,

                uint[] lpcbClass,

                IntPtr lpReserved_MustBeZero,

                ref uint lpcSubKeys,

                uint[] lpcbMaxSubKeyLen,

                uint[] lpcbMaxClassLen,

                ref uint lpcValues,

                uint[] lpcbMaxValueNameLen,

                uint[] lpcbMaxValueLen,

                uint[] lpcbSecurityDescriptor,

                uint[] lpftLastWriteTime

                                                );

                                               

    public static DateTime GetRegKeyLastWriteTime(string regkeyname)

    {

      string[] parts = regkeyname.Split('\\');

      string sHive = parts[0];

      string[] SubkeyParts = new string[parts.Length - 1];

      Array.Copy(parts, 1, SubkeyParts, 0, SubkeyParts.Length);

      string sSubKey = string.Join("\\", SubkeyParts);

      SafeRegistryHandle hRootKey = null;

      switch (sHive)

      {

        case "HKEY_CLASSES_ROOT": hRootKey = new SafeRegistryHandle(HKEY_CLASSES_ROOT, true); break;

        case "HKEY_CURRENT_USER": hRootKey = new SafeRegistryHandle(HKEY_CURRENT_USER, true); break;

        case "HKEY_LOCAL_MACHINE": hRootKey = new SafeRegistryHandle(HKEY_LOCAL_MACHINE, true); break;

        case "HKEY_USERS": hRootKey = new SafeRegistryHandle(HKEY_USERS, true); break;

        case "HKEY_PERFORMANCE_DATA": hRootKey = new SafeRegistryHandle(HKEY_PERFORMANCE_DATA, true); break;

        case "HKEY_CURRENT_CONFIG": hRootKey = new SafeRegistryHandle(HKEY_CURRENT_CONFIG, true); break;

        case "HKEY_DYN_DATA": hRootKey = new SafeRegistryHandle(HKEY_DYN_DATA, true); break;

      }

      try

      {

        SafeRegistryHandle hSubKey = null;

        int iErrorCode = RegOpenKeyEx(hRootKey, sSubKey, 0, KEY_READ, out hSubKey);

        uint lpcSubKeys = 0;

        uint lpcValues = 0;

        uint[] lpftLastWriteTime = new uint[2];

        iErrorCode = Registry.RegQueryInfoKey(hSubKey, null, null, IntPtr.Zero,

        ref lpcSubKeys, null, null, ref lpcValues, null, null, null, lpftLastWriteTime);

        long LastWriteTime = (((long)lpftLastWriteTime[1]) << 32) + lpftLastWriteTime[0];

        DateTime lastWrite = DateTime.FromFileTime(LastWriteTime);

        return lastWrite;

      }

      finally

      {

        if (hRootKey != null && !hRootKey.IsClosed)

        {

          hRootKey.Close();

        }

      }

    }

  }

                       

  public sealed class SafeRegistryHandle : SafeHandleZeroOrMinusOneIsInvalid

  {

    public SafeRegistryHandle() : base(true) { }

    public SafeRegistryHandle(IntPtr preexistingHandle, bool ownsHandle)

      : base(ownsHandle)

    {

      base.SetHandle(preexistingHandle);

    }

                                               

    [DllImport("advapi32.dll")]

    private static extern int RegCloseKey(IntPtr hKey);

    protected override bool ReleaseHandle()

    {

      return (RegCloseKey(base.handle) == 0);

    }

  }

}

"@

Searching against the registry key in question to validate your findings, you add the new type to your session:

Add-Type -TypeDefinition $signature -Language CSharp -PassThru | Out-Null;

And you search for LastWriteTime values:

 dir 'HKLM:\admin03\Software\Microsoft\Office\14.0\Excel' |

% { ($_.PSPath -split ':')[2] } |

Where {[Forensics.Registry]::GetRegKeyLastWriteTime($_) -gt (Get-Date).AddDays(-2)} |

% { "$($_): $([Forensics.Registry]::GetRegKeyLastWriteTime($_))"};

This outputs the following:

HKEY_LOCAL_MACHINE\admin03\Software\Microsoft\Office\14.0\Excel\File MRU: 06/27/2012 13:53:04

HKEY_LOCAL_MACHINE\admin03\Software\Microsoft\Office\14.0\Excel\Options: 06/27/2012 08:47:50

HKEY_LOCAL_MACHINE\admin03\Software\Microsoft\Office\14.0\Excel\Place MRU: 06/27/2012 13:53:04

HKEY_LOCAL_MACHINE\admin03\Software\Microsoft\Office\14.0\Excel\Resiliency: 06/27/2012 08:47:50

The MRU time stamp overlaps with your logon findings, supports your conclusion, and gives you evidence that you can hand over to your manager about exactly when and what happened.

~Will

Thank you, Will, for sharing your time and knowledge. It is a great blog post.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Parse PowerShell Transcript Files to Pull Out All Commands


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using the Windows Powershell transcript tool to keep track of commands that are run in a session.

Microsoft Scripting Guy, Ed Wilson, is here. I am busting out all over and I hardly know where to begin. Suffice it to say that TechReady 15 in Seattle was awesome. I especially loved the Ask the Experts session on Wednesday night. I always love the Ask the Experts sessions—the one at TechEd in Orlando this year was awesome, as was the one in Atlanta the year before.

Of course, I thought my session, Using Windows PowerShell 3.0 to Manage the Remote Windows 8 Desktop was really cool (and well received)—so I am going to present a version of it at PowerShell Saturday in Charlotte, North Carolina on September 15, 2012. The registration for this high-profile event opens tomorrow, July 30, 2012. The PowerShell Saturday event in Columbus, Ohio sold out in 13 days, and I expect the one in Charlotte to be no different. Therefore, if you want to attend, you need to begin queuing up at your keyboard around midnight Pacific Time in preparation for the 200 tickets to go on sale.

Recently, JRV commented on one of my blog posts (he does that a lot), and he stated that I must have access to some secret inside vault of information (my words, not his) because of all the cool stuff I come up with in my postings (again, my words, not his). This brings up an interesting story…

When Microsoft hired me eleven years ago, one of the things I was really looking forward to was getting to see “all the secret stuff.” After a month, I emailed my friend Bill (Mell, not Gates), and I told him, “Dude, there is no secret stuff.” Obviously, when products are in development, we need to guard our secrets. But after it ships, there is no secret vault of information (at least not where I am concerned).

How to find out the cool stuff about PowerShell

One of the really neat features of Windows PowerShell is that it is self-describing. This means that I use Windows PowerShell to learn stuff about Windows PowerShell. There are three powerful tools for doing this: Get-Help, Get-Member, and Get-Command. Learning how to use each of these cmdlets to their maximum potential is essential to becoming a Windows PowerShell guru. I will add two additional cmdlets to this list: Start-Transcript, and for Windows PowerShell 3.0, Show-Command.

Note   The first thing you want to do in Windows PowerShell 3.0 is open Windows PowerShell as an administrator and run the Update-Help -Module * -Force command. This will ensure that you have the latest Help information. For Windows PowerShell 2.0, you use Get-Help cmdletname -Online to see the most up-to-date Help.

Start-Transcript? You may ask, “What’s up with that?” Well, by using the Start-Transcript cmdlet, I record the commands and the output from my explorations. In fact, I find the cmdlet so useful that it is in my Windows PowerShell console profile. Unfortunately, it does not work in the Windows PowerShell ISE, although I did write a function that includes some of the capability in my post, Create a Transcript of Commands from the Windows PowerShell ISE.
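For reference, the profile entry can be a short snippet that gives each console session its own date-stamped transcript (the Transcripts subfolder is my assumption; use whatever location you prefer):

```powershell
# In $PROFILE: start a uniquely named transcript for every console session
$transcriptDir = Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'Transcripts'
if (-not (Test-Path $transcriptDir)) { New-Item -ItemType Directory -Path $transcriptDir | Out-Null }
Start-Transcript -Path (Join-Path $transcriptDir "PowerShell_transcript_$(Get-Date -Format 'yyyyMMdd_HHmmss').txt")
```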

Therefore, if I find a really cool thing I can do in Windows PowerShell, I already have a record of that cool thing. There is only one problem with this approach: it quickly becomes unruly. With no management of the transcripts, it does not take long to fill up the Documents folder as shown here on my Windows 8 laptop.

Image of folder

With opening and closing the Windows PowerShell console several times a day, it does not take very long until the folder is filled with transcript files. One thing that can help in finding commands is to use the desktop Search tool. (I always ensure that I have a full-text index for text files in addition to all of the Windows PowerShell file types.) With the Content view of files, you can easily get an idea of what is inside a particular transcript file. This is shown in the image that follows.

Image of search results

On the other hand, I like to have something a bit more permanent than a simple search result. To this end, I wrote a Windows PowerShell script that parses a folder full of Windows PowerShell transcript files, pulls out all of the Windows PowerShell commands (those that worked and those that did not, so keep that in mind), and puts them into a new text file. The script is rather cumbersome, and it is one that I wrote nearly four years ago. But a great thing about Windows PowerShell is that the team is committed to protecting your investment, both in terms of learning Windows PowerShell and in terms of writing Windows PowerShell scripts. So nearly all of the stuff I did in Windows PowerShell 1.0 (and to some extent, even during the beta of Windows PowerShell 1.0) still works today.

I copy the transcript files to a transcript folder (not in my Documents folder), and then I open the ParseTranscriptPullOutCommands.ps1 script in the Windows PowerShell ISE, and I edit the Path parameter to point to my new transcript folder location. When I run the script, it parses all of the transcript files in the folder, and it pulls out all the commands. It works great. The image that follows is the text file that is produced by running the script.
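If you only need a quick-and-dirty extraction rather than the full script, a few lines of Windows PowerShell can pull the prompt lines out of a transcript folder (a sketch; the paths are hypothetical, and it assumes the default "PS path>" prompt):

```powershell
# Pull every command line (lines beginning with the "PS ...>" prompt)
# from all transcript files in a folder into one text file
Get-ChildItem -Path C:\transcripts -Filter 'PowerShell_transcript*.txt' |
    Select-String -Pattern '^PS\s.+?>\s*(.+)$' |
    ForEach-Object { $_.Matches[0].Groups[1].Value } |
    Set-Content -Path C:\transcripts\AllCommands.txt
```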

Image of command output

I uploaded it to the Scripting Guys Script Repository: Pull PowerShell commands from transcript file.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Explore Nested Directories and Files


Summary: The Microsoft Scripting Guy talks about using the Get-ChildItem cmdlet to find, sort, and count folders.

Microsoft Scripting Guy, Ed Wilson, is here. In the summer, one’s heart turns to, well, numbers if one happens to work for Microsoft. Yes, it is review season. During review season, one has to figure out everything he (or she) has done during the previous year. So if you are a Scripting Guy, part of that has to do with how many Hey, Scripting Guy! Blog posts you wrote, how many Windows PowerShell scripts you wrote, how many screenshots you took, and so on. Luckily, they do not ask how many Tim-Tams I ate or how many cups of tea I drank. (I could probably figure it out, but I would not be able to use Windows PowerShell to do that. We have not yet written a kitchen provider.) Today I am going to answer the question of how long I have been writing the Hey, Scripting Guy! blogs.

Three secrets to using the Get-ChildItem cmdlet

When you use the Get-ChildItem cmdlet, there are three things you must know that are not immediately obvious:

  1. Use the force parameter to view hidden or system files.
  2. Use the recurse parameter to see subdirectories and nested files.
  3. Use the psIsContainer property to see only directories.

In the output shown here, the dir command (an alias for the Get-ChildItem cmdlet) returns only a few items.

PS C:\> dir

 

    Directory: C:\

 

Mode                LastWriteTime     Length Name

----                -------------     ------ ----

d-r--         7/22/2012   7:18 AM            data

d----         7/21/2012  11:41 AM            files

d----         7/23/2012   5:52 PM            fso

d----         7/22/2012   8:55 PM            Intel

d----         7/14/2012   9:48 AM            PerfLogs

d-r--         7/22/2012   8:55 PM            Program Files

d-r--         7/22/2012   8:55 PM            Program Files (x86)

d----         7/22/2012   5:33 PM            trans

d-r--         7/22/2012   8:56 PM            Users

d----         7/23/2012  11:26 AM            VMs

d----         7/22/2012   8:56 PM            Windows

Use the Force to see hidden and system files

On the other hand, when I use the force parameter, many more items return. This action corresponds to the Show hidden and system files option in the folder view tool for Explorer. The thing is, this is an easy switch to forget, and one that really makes you feel foolish when someone reminds you of that oversight. It is a Homer Simpson doh! moment. This command is shown here.

Dir -force

The command and the output associated with the command are shown in the image that follows.

Image of command output

Burrow into nested folders

If I need to burrow down into a directory structure, I need to use the recurse parameter. If I go back to my previous command from the root of the C:\ drive, and I add the recurse switch, it will return every single file on the entire computer hard disk drive, and will therefore take a bit of time. If, for example, I change to the ScriptingGuys folder, and I use the recurse parameter, I might use a command such as the one shown here.

Get-ChildItem -Path C:\data\ScriptingGuys -recurse

The output from the command begins by listing all of the folders under the ScriptingGuys folder. Then it lists files that appear in the root of the ScriptingGuys folder. When that process completes, it begins to burrow into the other folders. The command and the initial output from the command are shown in the image that follows.

Image of command output

Finding only folders or directories

To only return folders (or directories…or whatever we call them), use the psIsContainer property. This property returns a Boolean value, and it is therefore easy to toss into a Where-Object filter. The command to return only folders within my ScriptingGuys directory is shown here.

Get-ChildItem -Path C:\data\ScriptingGuys -recurse | where {($_.psiscontainer)}

In Windows PowerShell 3.0, the command is simpler because the braces and the $_ variable can be left off. The syntax is shown here.

Get-ChildItem -Path C:\data\ScriptingGuys -recurse | where psiscontainer

The command to return nested folders and the associated output are shown here.

Image of command output
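Incidentally, Windows PowerShell 3.0 also adds a Directory switch to the Get-ChildItem cmdlet itself, which removes the need for the Where-Object filter entirely. Here is a quick sketch using the same path.

```powershell
# Windows PowerShell 3.0 and later: let the FileSystem provider
# return only containers instead of filtering on PsIsContainer.
Get-ChildItem -Path C:\data\ScriptingGuys -Recurse -Directory
```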

I create a folder each week for my Hey, Scripting Guy! blogs. Suppose I want to know how long I have been writing the Hey, Scripting Guy! Blogs. One way to get this information is to figure out how many folders I have. Because I use a Windows PowerShell script to create and name my folders, I am assured that they are all named the same. (I wrote the script after I became the Scripting Guy, so a few of the first folders are not all capitalized the same.) To do this, I use the Get-ChildItem cmdlet to find the folders, and I pipe the resulting DirectoryInfo objects to the Measure-Object cmdlet. This command is shown here.

PS C:\> Get-ChildItem -Path C:\data\ScriptingGuys -recurse | where {($_.psiscontainer)}  |

Measure-Object

Count    : 289

Average  :

Sum      :

Maximum  :

Minimum  :

Property :

But as we saw, there are folders in the 289-folder count that do not begin with HSG. I created these extra folders for things like the Scripting Games. So to remove them from the count, I use a simple regular expression pattern ('^HSG'). The caret anchors the pattern to the beginning of the name; combined with the -notmatch operator, the command returns only folders whose names do not begin with the letters HSG, which is to say, all of my Scripting Games folders and associated folders for articles that are not related to the Hey, Scripting Guy! Blog.

Note   This points to the value of using a Windows PowerShell script to do routine admin tasks. You can be certain that they are all accomplished in the same manner, and it gives you great value when you later need to use a script to gather that information.

The resulting command to find all of the folders that are not HSG folders is shown here.

PS C:\> Get-ChildItem -Path C:\data\ScriptingGuys -recurse | where {$_.psiscontainer

-AND $_.name -notmatch '^hsg'} | measure

Count    : 75

Average  :

Sum      :

Maximum  :

Minimum  :

Property :

Note   Because I did not specify a case sensitive search, the pattern ^hsg works the same as ^HSG.
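If a case-sensitive match were actually required, the -cnotmatch operator would provide it. This little sketch shows the difference.

```powershell
# -notmatch is case insensitive, so '^hsg' matches 'HSG...' and
# -notmatch returns False. -cnotmatch is the case-sensitive form.
'HSG-7-23-12' -notmatch '^hsg'    # False
'HSG-7-23-12' -cnotmatch '^hsg'   # True
```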

I now subtract the number of non-HSG folders from the original folder count, and arrive at the answer as to how long I have been writing the Hey, Scripting Guy! Blog. The results are in the image shown here.

Image of command output

Playing with Files Week will continue tomorrow when I explore some more cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Help Find All of Your Images


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to help locate pictures or images on the hard drive.

Microsoft Scripting Guy, Ed Wilson, is here. Yesterday, I talked a little bit about review time at Microsoft. We have to count everything so that we can provide our managers with feedback. Luckily, Windows PowerShell has the Measure-Object cmdlet, so it is pretty easy. Windows PowerShell also works with dates really easily, so it is simple enough to write a script to find all images with a file creation date between July 1, 2011 and June 30, 2012. If you have your hard disk drive organized in such a way that only work-related pictures or images reside in the path you search, dude, you are golden.

For me, it is easy to find all of the images or all of the articles I wrote this year. This is because of how my Data folder is laid out. I can view the high-level layout by using the Get-ChildItem cmdlet without the recurse option and filtering for the directories. The Windows PowerShell 2.0 command is shown here.

dir C:\data\ScriptingGuys | where {$_.PsIsContainer}

By using the simple Windows PowerShell 3.0 syntax, the command becomes the following.

dir C:\data\ScriptingGuys | where PsIsContainer

The command and the output associated with the command are shown here.

Image of command output

Simplify things by using a PS Drive

Because the folder layout is so structured, it is easy to find items on a year-by-year basis. For example, I can create a custom PS Drive that is rooted in 2012, and my commands are vastly simplified. The first thing I do is store the current location on the stack by using pushd (an alias for Push-Location). Next, I create a new Windows PowerShell drive that is rooted in the 2012 folder. This simplifies my command line and makes it easier to work. Now I change to that location (sl is an alias for Set-Location), and finally, I use dir (an alias for Get-ChildItem) to find all of the .jpg, .png, and .bmp files. To count them all, I use the Measure-Object cmdlet (measure is an alias). The commands are shown here.

pushd

New-PSDrive -Root C:\data\ScriptingGuys\2012 -PSProvider filesystem -Name hsg

sl hsg:

dir -Recurse -include *png,*jpg,*bmp | measure

The commands and the output associated with the commands are shown here.

Image of command output

If there are 738 images, how many documents have there been this year? A quick change of the Include parameter to *doc and *docx finds that there have been nearly 300 documents this year.

PS hsg:\> dir -Recurse -include *doc,*docx | measure

 

Count    : 296

Average  :

Sum      :

Maximum  :

Minimum  :

Property : 

Wow! 296 articles have been created. But how many days have there been so far this year? Well, that information is always available from the Get-Date cmdlet, as shown here.

PS hsg:\> (Get-Date).DayOfYear

209

The reason the number of Word documents exceeds the number of days is that, on many occasions, the Hey, Scripting Guy! Blog publishes more than once a day. For example, during the Scripting Games and during TechEd this year, it was not uncommon to have four or even five postings a day. What is interesting is that I am running an average of 2.5 images per blog this calendar year.
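That average falls straight out of the two counts reported above (738 images and 296 documents), as this back-of-the-envelope sketch shows.

```powershell
# Images per blog post so far this calendar year.
$images = 738
$posts  = 296
[math]::Round($images / $posts, 1)   # 2.5
```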

Finding images created during a specific time range

To find images that I created during the time range of July 1, 2011 through June 30, 2012, I need to search multiple folders. This is because I organized my folders by calendar year, not by fiscal year. Therefore, I need to back up a bit in the folder structure. For me, I will back up to the root of the ScriptingGuys folder and begin my search there. The command is a one-liner, and at first I let it run and display all of the images so I can confirm that the command works properly. I decide to sort the results to make the output obvious (otherwise, the dates begin with October and go sideways in chronology). The trick is to use a compound Where-Object (where is an alias) with the greater than (-gt) and less than or equal to (-le) operators to return only the dates desired. Here is the basic command.

dir -path C:\data\ScriptingGuys -Recurse -include *.png,*.jpg,*.bmp |

where {$_.LastWriteTime -gt [datetime]"7/1/11" -AND $_.lastwritetime -le [datetime]"6/30/12"}  |  

sort  lastwritetime

The command and the associated output are shown here.

Image of command output

To count the number of images during the past fiscal year, all that I need to do is add the Measure-Object cmdlet. I do not need the Sort-Object cmdlet, so I remove it from the command. The revised command and associated output are shown here.

PS C:\> dir -path C:\data\ScriptingGuys -Recurse -include *.png,*.jpg,*.bmp |

where {$_.LastWriteTime -gt [datetime]"7/1/11" -AND $_.lastwritetime -le [datetime]"6/30/12"} |  

Measure-Object

 

Count    : 1228

Average  :

Sum      :

Maximum  :

Minimum  :

Property :

Playing with Files Week will continue tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Find All Word Documents that Contain a Specific Phrase


Summary: Microsoft Scripting Guy, Ed Wilson, discusses using Windows PowerShell to search a directory structure for Word documents that contain a specific phrase.

Microsoft Scripting Guy, Ed Wilson, is here. Exciting news—actually two pieces of exciting news. This month, I am starting a new series. I call it PowerTips, and each day, I will have an additional posting of a short Windows PowerShell tip, trick, or question and answer. The postings will appear midday Pacific Standard Time. I think you will enjoy them—I know I am having fun writing them.

Now for the second piece of exciting news. The registration site for Charlotte, North Carolina PowerShell Saturday is open. At this point, there are still plenty of tickets available, but the last PowerShell Saturday sold out in 13 days, so you will want to register quickly for this event to ensure you have a place. We are running three tracks (Beginner, Applied, and Advanced), so there is sure to be something there for everyone. I am making a couple of presentations, as are a couple of Microsoft premier field engineers, and even a Microsoft Windows PowerShell MVP. The lineup of speakers is stellar.

Finding guest blogger posts

It seems like I am not very good at anticipating future needs—at least exact needs. But because I use Windows PowerShell so much to do so many things, I am at least consistent. When your data is consistent, you have a fighting chance of solving a particular issue. I use Windows PowerShell to create all of my individual Microsoft Word documents, based on a template that my editor, Dia Reeves, created for me. Because of this, the structure of all my blog posts is relatively consistent.

When I first started the Hey, Scripting Guy! Blog, one of the first projects I spent a lot of time working on to describe the blog posts was Developing a Script Taxonomy. I carried over this taxonomy to the TechNet Script Center Script Repository. Therefore, I am pretty much assured that blog posts related to a specific topic will contain a specific set of words.

The Scripting Wife recommended that I create a blog tag called “guest blogger” for each of the guest blogs. The only thing we (meaning me) messed up was that the line in the template for the tags uses the Normal style. Microsoft Word uses the Normal style for the bulk of the text in a document. If I had used a specific style (such as Heading 9), it would be easier to find a specific text string that uses a specific word style. The following image illustrates what my Microsoft Word document looks like after I have edited a guest blog.

Image of document

Return guest blogs via script

I am running the beta version of Office 2013, and it works really well. The thing that is interesting is that, as far as I can tell (at least so far), the Microsoft Word automation model has not changed. Therefore, I do not need to reinvent the entire script. I based my script on a script I wrote in December 2009 for the Hey, Scripting Guy! Blog, How Can I Italicize Specific Words in a Microsoft Word Document.

Note   Because much of today’s script came from the previous script, you should refer to that blog post for additional details about the script construction.

The script I use today does the following:

  1. It starts at a specific location in the directory hierarchy, and it selects Microsoft Word documents that begin with the letters HSG or WES (for Hey Scripting Guy or Weekend Scripter).
  2. These Word Documents were last written to between July 1, 2011 and June 30, 2012. For details about finding documents written within a certain time span, see yesterday’s blog post, Use PowerShell to Help Find All of your Images.
  3. It produces a total count of documents that contain the words “guest blogger” in the content of the document.
  4. It produces a total count of all words from all documents that contain the words “guest blogger.”

Items I would like my script to do, but I do not have time for right now:

  1. Return a custom object with the following:
    1. Title of the blog
    2. Author of the guest blog
    3. Summary of the blog
    4. Tags for the blog
    5. Name of the file
  2. Export to a CSV file.

First things first

There is only one parameter: the Path to the parent directory from where the search begins. I could have added at least three other parameters: BeginDate, EndDate, and SearchTerm, but I did not. Those values are hard-coded in the script itself. But exposing these values as parameters would be a GREAT first step toward writing a better script. After creating the initial parameter, I initialize the variables used for the Find.Execute method. By creating and initializing the variables with their values, the method signature is much more readable than if everything were hard-coded. Here is the initial section of the script.

[cmdletBinding()]

Param(

 $Path = "C:\data\ScriptingGuys"

) #end param

 

$matchCase = $false

$matchWholeWord = $true

$matchWildCards = $false

$matchSoundsLike = $false

$matchAllWordForms = $false

$forward = $true

$wrap = 1
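As noted above, exposing the hard-coded values as parameters would be a great first step toward a better script. A hedged sketch of what that parameter block might look like follows; the BeginDate, EndDate, and SearchTerm parameter names are the ones suggested earlier, not part of the original script.

```powershell
# Hypothetical parameterized version of the Param block.
[cmdletBinding()]
Param(
 $Path = "C:\data\ScriptingGuys",
 [datetime]$BeginDate = "7/1/11",
 [datetime]$EndDate   = "6/30/12",
 [string]$SearchTerm  = "guest blogger"
) #end param
```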

Now create the objects

While creating the basic variables (there are a few remaining to create), it is also time to create the main object. Whether working with Word, Excel, PowerPoint, Outlook (and so on), the main object is always the Application object. The Word.Application object is a COM object; therefore, I use New-Object -ComObject to create the application object. I store the returned Word.Application object in the $application variable. I also set the Application.Visible property to $false to keep the Microsoft Word program from springing to life. However, if you accidentally (or on purpose) open Microsoft Word while the script runs, you will be pummeled with multitudes of Microsoft Word windows opening and closing as the script progresses (at least that is what happened when I did that while using the beta version of Word 2013 and running the script). The code is shown here.

$application = New-Object -comobject word.application

$application.visible = $False

I use the Get-ChildItem cmdlet to find all the Word documents that begin with HSG or WES and that were last written to between July 1, 2011 and June 30, 2012. I store the matching FileInfo objects in the $docs variable. The command to do this is shown here.

$docs = Get-childitem -path $Path -Recurse -Include HSG*.docx,WES*.docx |

  where {$_.LastWriteTime -gt [datetime]"7/1/11" -AND $_.lastwritetime -le [datetime]"6/30/12"}

I now initialize and create a few more variables. The first variable stores the text to search for. Next, the $i variable is a counter that is used by the Write-Progress cmdlet to display the progress of the search operation. The search takes a while, so using the Write-Progress cmdlet to display up-to-date progress and status information is a good idea. The $totalwords variable keeps a running total of the words in the guest blogs, and the $totaldocs variable keeps track of the number of guest blogs. This portion of the script is shown here.

$findText = "guest blogger"

$i = 1

$totalwords = 0

$totaldocs = 0

Processing the documents

Now I begin to loop through the collection of documents by using the foreach statement. The Write-Progress cmdlet displays a progress bar to inform me about the percentage of completion. I use the FullName property from the FileInfo object (it contains the complete path to the Microsoft Word document) to open the document and store the returned Document object in the $document variable. This portion of the code is shown here.

Foreach ($doc in $docs)

{

 Write-Progress -Activity "Processing files" -status "Processing $($doc.FullName)" -PercentComplete ($i /$docs.Count * 100)

 $document = $application.documents.open($doc.FullName)

Note   More information about the Write-Progress cmdlet appears on the Hey, Scripting Guy! Blog.

Because this process can take a long time, the progress bar is an important feature of the script. The following image shows the progress bar in the Windows PowerShell ISE for Windows PowerShell 3.0.

Image of command output

The following code creates a Range object from the Content property from the Document object. Then the Find.Execute method searches for the string “guest blogger.” The variable $wordfound contains a Boolean value that is used to detect if a match occurs.

$range = $document.content

 $null = $range.movestart()

 $wordFound = $range.find.execute($findText,$matchCase,

  $matchWholeWord,$matchWildCards,$matchSoundsLike,

  $matchAllWordForms,$forward,$wrap)

  if($wordFound)

    {

If a match occurs, the full name of the file and the word count are displayed in the output window. I then gather the total words and the total number of documents to display later. The output from the script is shown here.

Image of command output
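The body of the if statement does not appear above. A minimal sketch of what it might contain, assuming the $totalwords and $totaldocs counters initialized earlier, is shown here; the exact statements are my reconstruction, not the original script.

```powershell
if($wordFound)
  {
   # Sketch: report the match, then accumulate the running totals.
   $words = $document.words.count
   "$($doc.FullName) contains $words words"
   $totalwords += $words
   $totaldocs++
  }
```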

Basic cleanup

One reason for avoiding COM objects from within the .NET Framework (there are many such reasons, as detailed in my Windows PowerShell 2.0 Best Practices book from Microsoft Press) is the cleanup involved. Resources are not automatically released. Each object must be specifically released. I then call the garbage collection service and remove the Application variable. Here is my cleanup routine for this script.

 #clean up stuff

[System.Runtime.InteropServices.Marshal]::ReleaseComObject($range) | Out-Null

[System.Runtime.InteropServices.Marshal]::ReleaseComObject($document) | Out-Null

[System.Runtime.InteropServices.Marshal]::ReleaseComObject($application) | Out-Null

Remove-Variable -Name application

[gc]::collect()

[gc]::WaitForPendingFinalizers()

This is a rather long and complicated script, but the point (other than being cool) is to illustrate an automation model for working with Microsoft Word. I have uploaded the complete script to the Scripting Guys Script Repository.

Join me tomorrow when I will talk about working with Microsoft Word document metadata.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Counting PowerShell Cmdlets


Summary: PowerTip for counting the number of cmdlets in your Windows PowerShell installation.

Question: How many cmdlets are available in a default Windows PowerShell 3.0 installation in Windows 8?

Answer: 403

Question: How did you find out how many cmdlets are available in the default Windows PowerShell installation?

Answer: Get-Module -ListAvailable | Import-Module ; gcm -co cmdlet | measure


Windows Server 2012 Released to Manufacturing


Today is a huge day because Windows Server 2012 and Windows 8 have both released to manufacturing. I have been playing with both for a long time, and of course, I love Windows PowerShell 3.0, which is built into both products. The new CIM cmdlets and the additional cmdlets for managing nearly everything make using Windows PowerShell a dream. Both Windows 8 and Windows Server 2012 will be up on MSDN and on TechNet for subscribers soon. See the Windows 8 blog and the Windows Server blog for details.

Both new products will play a prominent role in the upcoming PowerShell Saturdays in Charlotte and in Atlanta.

Stay tuned as I begin to explore the goodness exposed by Windows PowerShell 3.0 in our latest operating systems.

Ed Wilson

Use PowerShell to Find Specific Word Built-in Properties


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to find specific built-in properties from Word documents.

Microsoft Scripting Guy, Ed Wilson, is here. Well, the script for today took a bit of work … actually, it took quite a bit of work. The script does the following:

  • Searches a specific folder for Word documents
  • Creates an array of specific Word document properties from the Word built-in document properties enumeration. The built-in Word properties are listed on MSDN.
  • Retrieves the specific built-in Word properties and their associated value
  • Creates a custom Windows PowerShell object with each of the specified properties, in addition to the full path to the Word document

Today’s script is similar to the Find All Word Documents that Contain a Specific Phrase script from yesterday, so reviewing that posting would be a good thing to do. This script also accomplishes a few of the things I wanted to do in yesterday’s script that I did not get a chance to do; namely, I return a custom object that contains the built-in properties I choose. This is a great benefit because it permits further analysis and processing of the data, and it would even permit export to a CSV file if I wish.

Working with Word Document properties

Working with Word document properties can be difficult, and I have written several blog posts about this; you should refer to those posts for additional information. The first thing I do is create a couple of command-line parameters. This permits changing the path to search, as well as modifying the include filter that is used by the Get-ChildItem cmdlet. Next, I create the Word.Application object and set it to be invisible. I then create the BindingFlags type and a WdSaveOptions reference; the reason for creating WdSaveOptions is to keep Word from saving changes (and thereby modifying the last-saved date) on the Word files. Finally, I obtain a collection of FileInfo objects and store the returned objects in the $docs variable. This portion of the script is shown here.

Param(

  $path = "C:\fso",

  [array]$include = @("HSG*.docx","WES*.docx"))

$AryProperties = "Title","Author","Keywords", "Number of words", "Number of pages"

$application = New-Object -ComObject word.application

$application.Visible = $false

$binding = "System.Reflection.BindingFlags" -as [type]

[ref]$SaveOption = "microsoft.office.interop.word.WdSaveOptions" -as [type]

$docs = Get-childitem -path $Path -Recurse -Include $include 

Now I need to walk through the collection of documents by using the foreach statement. Inside the foreach loop, I open each document and return the BuiltInDocumentProperties collection. I also create a hash table that I will use to create the custom object later in the script. This portion of the code is shown here.

Foreach($doc in $docs)

 {

  $document = $application.documents.open($doc.fullname)

  $BuiltinProperties = $document.BuiltInDocumentProperties

  $objHash = @{"Path"=$doc.FullName}

It is time to work through the array of built-in properties that I selected earlier. To do this, once again I use a foreach statement. I wrap the attempt to access each built-in property in Try/Catch, because an error is generated if the property contains no value. I already know the name of the property that I want to obtain; therefore, I use it directly when obtaining the value of the property. Both the name and the value of the built-in document property are added to the hash table as a key-value pair. If an error occurs, I print a message via Write-Host stating that the value was not found. I use Write-Host for this so I can specify the color (blue). The code is shown here.

foreach($p in $AryProperties)

    {Try

     {

      $pn = [System.__ComObject].invokemember("item",$binding::GetProperty,$null,$BuiltinProperties,$p)

      $value = [System.__ComObject].invokemember("value",$binding::GetProperty,$null,$pn,$null)

      $objHash.Add($p,$value) }

     Catch [system.exception]

      { write-host -foreground blue "Value not found for $p" } 

I then create a new custom PSObject and use the hash table for the properties of that object. I display that object, and close the Word document without saving any changes. Finally, I release the document object and the BuiltInProperties object, and I continue to loop through the collection of documents. This code is shown here.

   $docProperties = New-Object psobject -Property $objHash

   $docProperties

   $document.close([ref]$saveOption::wdDoNotSaveChanges)

   [System.Runtime.InteropServices.Marshal]::ReleaseComObject($BuiltinProperties) | Out-Null

   [System.Runtime.InteropServices.Marshal]::ReleaseComObject($document) | Out-Null

   Remove-Variable -Name document, BuiltinProperties

   }

 When I have completed processing the collection of documents, I release the Word.Application COM object and call garbage collection. This code is shown here.

$application.quit()

[System.Runtime.InteropServices.Marshal]::ReleaseComObject($application) | Out-Null

Remove-Variable -Name application

[gc]::collect()

[gc]::WaitForPendingFinalizers() 

Using the returned objects

One reason for returning an object is that it allows for grouping, sorting, and further processing. I could have written everything in a function, but it works just as well as a script. For example, when I run the script, it returns the following objects.

PS C:\> C:\data\ScriptingGuys\2012\HSG_7_30_12\Get-SpecificDocumentProperties.ps1

Path            : C:\fso\HSG-7-23-12.docx

Number of words : 1398

Number of pages : 4

Author          : edwils

Keywords        :

Title           :

 

Path            : C:\fso\HSG-7-24-12.docx

Number of words : 1035

Number of pages : 4

Author          : edwils

Keywords        : guest blogger, powershell

Title           :  

Because the script returns objects, I can search the output and find only documents whose keywords contain the words “guest blogger”, as shown here.

PS C:\> C:\data\ScriptingGuys\2012\HSG_7_30_12\Get-SpecificDocumentProperties.ps1 | where keywords -match "guest blogger"

 

Path            : C:\fso\HSG-7-24-12.docx

Number of words : 1035

Number of pages : 4

Author          : edwils

Keywords        : guest blogger, powershell

Title           : 

It is even possible to modify the way the output appears and to split only the file name from the remainder of the path. This is shown here.

PS C:\> C:\data\ScriptingGuys\2012\HSG_7_30_12\Get-SpecificDocumentProperties.ps1 | sort "number of words" -Descending | select @{LABEL="file";EXPRESSION={split-path $_.path -Leaf}}, "number of words", author, keywords | ft -AutoSize

 

file             Number of words Author Keywords                

----             --------------- ------ --------                

HSG-7-23-12.docx            1398 edwils                          

HSG-7-27-12.docx            1208 edwils                         

HSG-8-2-11.docx             1206 edwils                         

hsg-9-28-11.docx            1131 edwils                         

HSG-7-24-12.docx            1035 edwils guest blogger, powershell

HSG-8-1-11.docx              963 edwils                         

HSG-7-25-12.docx             882 edwils                         

HSG-7-26-12.docx             848 edwils                         

 

PS C:\>  

The complete Get-SpecificDocumentProperties.ps1 script is on the Scripting Guys Script Repository. 

Join me tomorrow when I will talk about programmatically assigning values to the Word documents.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Read Only and Constant Variables


Summary: Learn the difference between a read-only variable and a constant.

Question: What is the difference between a read-only variable and a constant?

Answer: A read-only variable is one with content that is read-only. It can, however, be modified by using the Set-Variable cmdlet with the –force parameter. It can also be deleted by using Remove-Variable –force.  A constant variable, however, cannot be deleted, nor can it be modified--even when using the force.
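A short sketch illustrates the difference between the two options.

```powershell
# A read-only variable resists change, but -Force overrides it.
New-Variable -Name ro -Value 1 -Option ReadOnly
Set-Variable -Name ro -Value 2 -Force      # succeeds
Remove-Variable -Name ro -Force            # succeeds

# A constant cannot be modified or removed, even with -Force.
New-Variable -Name c -Value 1 -Option Constant
Set-Variable -Name c -Value 2 -Force       # error
Remove-Variable -Name c -Force             # error
```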

 

Use PowerShell to auto complete Word built-in properties


Summary: Microsoft Scripting Guy Ed Wilson shows how to use Windows PowerShell to automatically complete the Microsoft Word built-in properties.

Microsoft Scripting Guy, Ed Wilson, is here. Well, registration for the Charlotte PowerShell Saturday event to be held on September 15, 2012 is underway. (Incidentally, that is the day after my birthday. The speakers’ dinner at the Scripting Wife’s and my house will actually be a Scripting Guy birthday party.) All of the speakers, except for one, are identified, and you can see the speakers and the tracks via the website. If you are going to be anywhere near Charlotte, North Carolina in the United States on September 15, 2012, you should make plans to attend. With three tracks and over a dozen Microsoft PFEs, Microsoft MVPs, and community leaders speaking, it will be an event too good to pass up. I, myself, am doing two or three presentations in both the beginner and the advanced tracks. It will be cool. Here is the link to register for this awesome PowerShell event.

Use PowerShell to add to Word metadata

Note   This week I have been talking about finding files and working with the associated built-in properties of those files (Microsoft Word). On Monday, I talked about Use PowerShell to Explore Nested Directories and Files. On Tuesday, I wrote Use PowerShell to Help Find All of Your Images. On Wednesday, we began our deep dive into the Microsoft Word automation model when I wrote Find All Word Documents that Contain a Specific Phrase, which was followed on Thursday by Use PowerShell to Find Specific Word Built-in Properties.

Once I figured out how to find specific Microsoft Word documents by using the Microsoft Word built-in properties, I thought it would be a useful technique. Potentially, it could be quicker and more accurate to use these built-in properties than to try to use regular expressions to search Word documents for specific word patterns. It is easier to search for specific words in specific Microsoft Word styles, but my documents do not always use standard Microsoft Word styles, and therefore that technique does not work so well. If, on the other hand, I have accurately populated the built-in properties on a Microsoft Word document, I know I can search for them via the technique I developed in yesterday’s Hey, Scripting Guy! Blog article. In a previous Hey, Scripting Guy! Blog article, I talked about manually adding values to the Microsoft Word built-in properties. Today I want to talk about adding those values programmatically. To be useful as a document management technique, my script will need to figure out what to add. This can involve lots of regular expressions and other things. Based on writing this series, I have decided to modify my Microsoft Word template: the title goes in the Title style, the summary goes in the Subtitle style, and I use Heading 9 for my tags. But, of course, although this will help in the future, it does not do much for me today. I do not want to overly complicate the script for today, because my main purpose is to illustrate the complicated task of actually writing to a Microsoft Word built-in property. I also noticed yesterday, when I was messing around with my script, that because my entire data directory was copied from the backup stored on the SAN I have at home, the file creation dates are all messed up. This includes the Content created built-in property, as well as the Date last saved built-in property. In addition, the actual file timestamps (Date created, Date modified, and Date accessed) are similarly unreliable.

Note   I created a function in the Use PowerShell to Modify File Access Time Stamps blog post. Using that function, it is easy to change the file time stamps. That will be the topic for tomorrow’s Weekend Scripter blog post.

Writing to Microsoft Word built-in properties via PowerShell

MSDN details the built-in properties for Microsoft Word.  Today, I want to assign a value to the comments built-in property.

The first part of the Set-SpecificDocumentProperties script appears similar to the script from yesterday's Hey, Scripting Guy! blog. The difference is two new variables. The thing to keep in mind here is that $AryProperties and $newValue are both specified as [array] types, but each actually contains a single value. The reason for this is that the SetProperty method used to write the values back to the BuiltInDocumentProperties collection must receive an array. Other than that, this code is relatively straightforward.

Param($path = "C:\fso", [array]$include = @("HSG*.doc*","WES*.doc*"))

[array]$AryProperties = "Comments"

[array]$newValue = "Scripting Guy blog"

$application = New-Object -ComObject word.application

$application.Visible = $false

$binding = "System.Reflection.BindingFlags" -as [type]

$docs = Get-childitem -path $Path -Recurse -Include $include

 

Now I use the Foreach statement to walk through the collection of documents retrieved by the Get-ChildItem cmdlet. Inside the script block for the command, the first thing I do is open the document and store the returned document object in the $document variable. Next, I retrieve the BuiltInDocumentProperties object and store it in the $BuiltinProperties variable. Then I use the GetType method to return the BuiltInDocumentProperties type, and I store it in the $builtinPropertiesType variable. I could also use [System.__ComObject] like I did yesterday, but I thought I would show you a different technique that is perhaps a bit more readable. Here is the code.

Foreach($doc in $docs)

 {

  $document = $application.documents.open($doc.fullname)

  $BuiltinProperties = $document.BuiltInDocumentProperties

  $builtinPropertiesType = $builtinProperties.GetType()

Once again (just like in yesterday's script), I use the Try/Catch statements to attempt to write new values for the properties. If an exception occurs, a blue string displays stating that the script was unable to set a value for the property.

In the Try script block, the first thing I do is get the built-in property and assign it to the $BuiltInProperty variable. To do this, I use the InvokeMember method on the Item property with the GetProperty binding flag. I also pass the $BuiltinProperties variable that contains the BuiltInDocumentProperties collection. I store the returned property object in the $BuiltInProperty variable. Next, I use the GetType method to return the data type, and I store it in the $BuiltInPropertyType variable. These two lines of code appear here (the first line is really long and wraps).

$BuiltInProperty = $builtinPropertiesType.invokemember("item",$binding::GetProperty,$null,$BuiltinProperties,$AryProperties)

$BuiltInPropertyType = $BuiltInProperty.GetType()

I now call InvokeMember with the SetProperty binding flag to write the Value, by using code that is similar to the previous line of code. Once again, the new value must be supplied in an array.

$BuiltInPropertyType.invokemember("value",$binding::SetProperty,$null,$BuiltInProperty,$newValue)}

Inside the loop, it is now time to close the document, release the COM objects, and remove the variables. This code appears here.

   $document.close()

   [System.Runtime.InteropServices.Marshal]::ReleaseComObject($BuiltinProperties) | Out-Null

   [System.Runtime.InteropServices.Marshal]::ReleaseComObject($document) | Out-Null

   Remove-Variable -Name document, BuiltinProperties

   }

Once the script finishes looping through the documents, the final cleanup occurs. This code appears here.

$application.quit()

[System.Runtime.InteropServices.Marshal]::ReleaseComObject($application) | Out-Null

Remove-Variable -Name application

[gc]::collect()

[gc]::WaitForPendingFinalizers()

The default option of the Close method is to save the Word document. You can use the wdSaveChanges value from the WdSaveOptions enumeration as well. MSDN documents the WdSaveOptions enumeration, but it is also easy to use Windows PowerShell to find this information by using Get-Member -Static on the variable that contains the enumeration type. The thing that is really weird is that the interop requires the save option to be passed by reference. This is the reason for the [ref] type constraint in front of the $saveOption variable in the complete script.
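The Get-Member -Static trick works with any .NET enumeration type. In this sketch, [System.IO.FileAttributes] stands in for WdSaveOptions, because the WdSaveOptions type is only available after the Word interop assembly is loaded.

```powershell
# List the values of a .NET enumeration by using Get-Member -Static.
# [System.IO.FileAttributes] is a stand-in here; with the Word interop
# assembly loaded, the same command works on the WdSaveOptions type.
[System.IO.FileAttributes] | Get-Member -Static -MemberType Property
```

Each enumeration value appears as a static property, so this is a quick way to discover valid values without leaving the console.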

I uploaded the complete Set-SpecificDocumentProperties.ps1 script to the Scripting Guys Script Repository. When you download the .zip file, make sure to unblock the file before attempting to run the script; otherwise, the script execution policy will prevent the script from running. For more information about this, refer to the following Scripting Wife article.
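In Windows PowerShell 3.0, one way to unblock a downloaded file (in addition to clicking Unblock on the file's Properties page) is the Unblock-File cmdlet. The path below is hypothetical.

```powershell
# Remove the Zone.Identifier alternate data stream that marks a
# downloaded file as blocked. The path is hypothetical.
Unblock-File -Path C:\fso\Set-SpecificDocumentProperties.ps1
```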

Join me tomorrow for the Weekend Scripter, when I will be talking about parsing file names and creating DateTime objects based on the file names. It is cool.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Finding Specific Cmdlets


Summary: Learn about important Windows PowerShell cmdlets and how to find them.

Question: What are the three MOST important cmdlets?

    Answer: The three most important cmdlets are Get-Command, Get-Help, and Get-Member.

    Question: Which cmdlet can I use to work with event logs?

      Answer: To work with event logs, use the Get-EventLog cmdlet, or the Get-WinEvent cmdlet.

      Question: How did you find that cmdlet?

      Answer: Get-Command -Noun *event*
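Putting the answers in this PowerTip together, here is a quick sketch of the three discovery cmdlets in action (Get-Process is used only as a convenient example cmdlet):

```powershell
# Discover a command, read its help, and explore the objects it emits.
Get-Command -Verb Get -Noun Process                     # find the cmdlet
Get-Help Get-Process -Examples                          # see usage examples
Get-Process -Id $PID | Get-Member -MemberType Property  # inspect the output
```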
