Use PowerShell to Create New Registry Keys on Remote Systems

Summary: Learn how to use Windows PowerShell to create new registry keys on remote systems by using remoting.

Hey, Scripting Guy! Question Hey, Scripting Guy! I need to create registry keys on a number of remote servers. I do not want to use the Registry Editor to do this because the servers all have the firewall enabled, and I do not want to enable the remote registry service on these machines. Is there a way I can use Windows PowerShell to create these registry keys? I really do not want to use remote desktop to do this.

—YH

Hey, Scripting Guy! Answer Hello YH,

Microsoft Scripting Guy, Ed Wilson, is here. I just finished a team meeting where we were talking about documentation for the Windows Server “8” Beta release—really exciting stuff. I can tell you that Microsoft is the most exciting company for which I have ever worked. It is everything I hoped for when I joined nearly 11 years ago—and even more. I am dying to start writing about Windows PowerShell 3.0 and Windows Server “8” Beta, but I imagine that most IT Pros have not yet deployed it to their systems, so I will forgo that until the product actually ships.

I am sipping a cup of Darjeeling tea with some lemon grass and a cinnamon stick, accompanied by a slice of 90 percent cacao chocolate. It is the perfect afternoon snack. This week is turning out to be a lot of fun. Tonight, the Scripting Wife and I are having dinner with one of the members of the Charlotte Windows PowerShell Users Group. We will be talking about upcoming user group presentations. It is always a lot of fun to get together with fellow Windows PowerShell geeks because the conversations are usually $cool.

Note   This is the fourth blog in a series of Hey, Scripting Guy! Blogs that discuss using the Registry provider. The first blog, Use the Registry Provider to Simplify Registry Access, posted on Monday. Tuesday I discussed using the *restore* cmdlets to perform a system state backup of a computer prior to manipulating the registry. On Wednesday I talked about creating new registry keys and assigning default values. For additional information about working with the registry via Windows PowerShell, see this collection of blogs.

YH, there are several ways to create registry keys on remote systems. One way is to use the .NET Framework classes and another way is to use WMI. By far, the easiest way is to combine using the Windows PowerShell Registry provider with Windows PowerShell remoting. This is because Windows PowerShell remoting uses the firewall friendly WSMan protocol. In fact, inside a single forest, single domain with Windows Server “8” Beta, remoting just works. There are no configuration changes required to enable Windows PowerShell remoting in Windows Server “8” Beta.
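
If you are not sure whether remoting is reachable on a target server, the Test-WSMan cmdlet offers a quick check. The following is a minimal sketch; it assumes the sql1 server name used in the examples later in this blog.

# Returns WSMan identity information (wsmid, ProtocolVersion, ProductVendor,
# ProductVersion) when the remoting listener responds; errors out otherwise.
Test-WSMan -ComputerName sql1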

Entering a remote session to create a new registry key

If you only have a single computer upon which you need to create one or more new registry keys, it is probably easiest to enter a remote Windows PowerShell session. Doing this provides you with the equivalent experience of opening the Windows PowerShell console and working in an interactive fashion. There are several steps involved, but they are not too difficult.

Only the steps…

Entering a remote Windows PowerShell session to create a new registry key:

  1. Use the Get-Credential cmdlet to obtain a credential object with rights on the remote computer. Store the returned credential object in a variable.
  2. Use the Enter-PSSession cmdlet to enter a remote Windows PowerShell session on the target computer.
  3. Use the New-Item cmdlet to create the new registry key.
  4. Use the Exit command to leave the remote Windows PowerShell session.

The commands to obtain credentials, enter a Windows PowerShell session, create a new registry key, and leave the Windows PowerShell session are shown here.

$credential = Get-Credential -Credential iammred\administrator

Enter-PSSession -ComputerName sql1 -Credential $credential

New-Item -Path HKCU:\Software -Name HSG -Value "my default value"

Exit

The use of these commands and the associated output are shown in the image that follows.

Image of command output

I use Remote Desktop Protocol (RDP) to connect to the SQL1 server to verify creation of the registry key. Keep in mind that when you make a remote connection and create keys on the HKCU drive, the new key lands in the HKCU hive of the user account that made the remote connection, not in the hive of any user who happens to be logged on to the machine. The newly created registry key is shown in the image that follows.
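
If you would rather stay in Windows PowerShell instead of opening an RDP session, a minimal alternative sketch is to run Test-Path remotely. It reuses the $credential variable and the sql1 server name from the previous commands.

# Returns True when the HSG key exists under HKCU:\Software on sql1.
# Remember: HKCU here is the hive of the connecting account.
Invoke-Command -ComputerName sql1 -Credential $credential -ScriptBlock {
  Test-Path -Path HKCU:\Software\HSG
}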

Image of Registry Editor

Creating remote registry keys on multiple computers

Entering a remote Windows PowerShell session to create a registry key on multiple computers is a tedious and time-consuming process. To perform a command on multiple machines, it is better to use the Invoke-Command cmdlet.

Only the steps…

Using the Invoke-Command cmdlet to create remote registry keys:

  1. Store the server names in a variable.
  2. Store the connection credentials in a variable (use the Get-Credential cmdlet to obtain the credentials).
  3. Use the Invoke-Command cmdlet to run the command against the remote machines. Place the command to be run in the ScriptBlock parameter.

The following commands create a new registry key on the HKCU drive on three different servers.

$servers = "hyperv1","hyperv2","hyperv3"

$credential = Get-Credential -Credential iammred\administrator

Invoke-Command -ComputerName $servers -Credential $credential -ScriptBlock {New-Item -Path HKCU:\Software -Name hsg -Value "scripted default value"}

The commands and the output associated with running the commands are shown in the image that follows.

Image of command output

The returning output from the Invoke-Command cmdlet displays the computer name and illustrates the newly created registry key. However, if you wish further confirmation that the registry keys were created properly, it is easy to change the New-Item command to Test-Path. Because the server names and the credentials already reside in variables, using Test-Path inside the script block of the Invoke-Command cmdlet is trivial. Here is the revised command.

Invoke-Command -ComputerName $servers -Credential $credential -ScriptBlock {Test-Path -Path HKCU:\Software\hsg}

The command and its associated output are shown in the image that follows.

Image of command output

Creating property values

It is unusual to have a registry key that has only a default property value. In fact, most registry keys contain multiple property values. To create a new property value, use the New-ItemProperty cmdlet. The following command creates a new property value named NewProperty under the previously created HSG registry key.

New-ItemProperty -Path HKCU:\Software\hsg -Name NewProperty -Value "New PropertyValue"

The newly created registry key property is shown in the image that follows.

Image of Registry Editor
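
Note that New-ItemProperty creates a string (REG_SZ) value by default. If you need a different registry data type, the cmdlet also accepts a PropertyType parameter. The following sketch uses a hypothetical NewDword property name.

# Create a DWORD value under the same key. PropertyType also accepts String,
# ExpandString, Binary, MultiString, and QWord.
New-ItemProperty -Path HKCU:\Software\hsg -Name NewDword -Value 1 -PropertyType DWord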

Getting registry property values

When run, the New-ItemProperty command from the previous section returns information about the new registry key property. In fact, this is the same information that is obtained by using the Get-ItemProperty cmdlet. The use of the Get-ItemProperty cmdlet and the associated output from the command are shown here.

PS C:\> Get-ItemProperty -path HKCU:\Software\hsg -Name newproperty

PSPath       : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\hsg

PSParentPath : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software

PSChildName  : hsg

PSDrive      : HKCU

PSProvider   : Microsoft.PowerShell.Core\Registry

NewProperty  : New PropertyValue

The newly created registry property appears in the output under the name of the property. This means that to display only the value of the registry property requires essentially naming the property twice. The first time the registry property appears, it is the name of the registry property to obtain; the second time, it is the property to retrieve. This command and the associated output are shown here.

 PS C:\> (Get-ItemProperty -path HKCU:\Software\hsg -Name newproperty).newProperty

New PropertyValue
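
If the double-naming feels awkward, an equivalent approach (a matter of taste, not a requirement) is to pipe the output to the Select-Object cmdlet and use its ExpandProperty parameter.

# Returns only the data stored in the newproperty value.
Get-ItemProperty -Path HKCU:\Software\hsg -Name newproperty |
  Select-Object -ExpandProperty newproperty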

YH, that is all there is to using Windows PowerShell to create registry keys on remote systems. Registry Week will continue tomorrow when I will talk about enumerating registry key properties.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Use PowerShell to Enumerate Registry Property Values

Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to enumerate all the properties and their values under a registry key.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have a question. It seems that getting the property values under a registry key is a tedious process. It looks like I need to know the exact property value to find out its value. Often I do not have this information. I would like a way to say, “Just give me the property values and their associated value.” I do not want to do a lot of work to get this information. Is it possible?

—BW

Hey, Scripting Guy! Answer Hello BW,

Microsoft Scripting Guy, Ed Wilson, is here. I just checked on the status of an order of Monkey Picked Oolong Tea that the Scripting Wife ordered for me. It arrives tomorrow. Charlotte, North Carolina in the United States is not a huge tea market—but hey, I have great Internet access at home and that gives me access to some of the best tea brokers in the world. I first ran across Monkey Picked tea in a small tearoom while the Scripting Wife and I were on our way to the Mark Minasi Conference. I had the Monkey Picked Oolong; she had hot chocolate. But she remembered how much I kept going on and on about the light and delicate flavor, and next thing I know, we have a package arriving tomorrow.

Note   This is the fifth blog in a series of Hey, Scripting Guy! Blogs that discuss using the Registry provider. The first blog, Using the Registry Provider to Simplify Registry Access, posted on Monday. Tuesday I discussed using the *restore* cmdlets to perform a system state backup of a computer prior to manipulating the registry. On Wednesday I talked about creating new registry keys and assigning default values. In the fourth blog, I talked about creating new registry keys on remote computer systems. I also discussed creating registry property values. For additional information about working with the registry via Windows PowerShell, see this collection of blogs.

Working with registry property values

Note    For a VBScript version of this blog, see Hey, Scripting Guy! How Can I Retrieve All the Values in a Registry Key?

Because of the hierarchical nature of the registry in Windows, the file system metaphor sort of breaks down. On a file system drive, you can use the New-Item cmdlet to create a folder (directory), and then go back and create a file inside the folder by using the same cmdlet. With the registry, that is not the case. New-Item creates the registry keys, but it is the New-ItemProperty cmdlet that creates the properties that are associated with the registry keys. This concept is shown in the image that follows.
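
A minimal sketch of that division of labor follows; the Example key and the Setting property name are hypothetical.

# New-Item creates the registry key itself...
New-Item -Path HKCU:\Software -Name Example
# ...and New-ItemProperty creates a property under that key.
New-ItemProperty -Path HKCU:\Software\Example -Name Setting -Value "some data"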

Image of Registry Editor

Yesterday’s Hey, Scripting Guy! Blog talked about retrieving registry properties. In that blog, I discussed using the Get-ItemProperty cmdlet to retrieve registry property values, in addition to the specific registry property value value. (I know, it gets a little redundant). But as you pointed out, BW, from the perspective of perusing preset registry values, knowing an exact registry property value name is not the most efficient way of doing things.

Enumerating registry property values

The image that follows illustrates the Winlogon registry key. This registry key has four registry properties (in addition to the Default registry property, which is not set in this example).

Image of Registry Editor

There are several steps involved in obtaining the value of the registry property values under a specific registry key.

Only the steps…

Enumerating registry property values:

  1. Use the Push-Location cmdlet to store the current working location.
  2. Use the Set-Location cmdlet to change the current working location to the appropriate registry drive.
  3. Use the Get-Item cmdlet to retrieve the properties of the registry key.
  4. Pipe the registry properties through the ForEach-Object cmdlet.
  5. In the script block of the ForEach-Object cmdlet, use the Get-ItemProperty cmdlet to retrieve the property values.
  6. Return to the original working location by using the Pop-Location cmdlet.

Note    When you are typing the path to the specific registry key, remember that you can use tab expansion. The use of the Windows PowerShell tab expansion feature not only saves time typing, but it also invariably saves time troubleshooting mistyped commands. It is essential that you train yourself to use tab expansion everywhere it is available.

The Get-RegistryKeyPropertiesAndValues.ps1 script follows the previous Enumerating registry property values steps, but it adds a bit of extra power to the equation by creating a custom object for each registry property/value pair. It builds each object inside the ForEach-Object cmdlet, pipes the resulting objects to the Format-Table cmdlet for display in the console, and then returns to the original working location by using the Pop-Location cmdlet. The complete Get-RegistryKeyPropertiesAndValues.ps1 script is shown here.

Get-RegistryKeyPropertiesAndValues.ps1

Push-Location

Set-Location 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon'

Get-Item . |

Select-Object -ExpandProperty property |

ForEach-Object {

New-Object psobject -Property @{"property"=$_;

   "Value" = (Get-ItemProperty -Path . -Name $_).$_}} |

Format-Table property, value -AutoSize

Pop-Location

It is rather easy to convert the above script to a function. The Get-RegistryKeyPropertiesAndValues function appears here.

Get-RegistryKeyPropertiesAndValues Function

 

Function Get-RegistryKeyPropertiesAndValues

{

  <#

   .Synopsis

    This function accepts a registry path and returns all reg key properties and values

   .Description

    This function returns registry key properties and values.

   .Example

    Get-RegistryKeyPropertiesAndValues -path 'HKCU:\Volatile Environment'

    Returns all of the registry property values under the \volatile environment key

   .Parameter path

    The path to the registry key

   .Notes

    NAME:  Get-RegistryKeyPropertiesAndValues

    AUTHOR: ed wilson, msft

    LASTEDIT: 05/09/2012 15:18:41

    KEYWORDS: Operating System, Registry, Scripting Techniques, Getting Started

    HSG: 5-11-12

   .Link

     Http://www.ScriptingGuys.com/blog

 #>

 #Requires -Version 2.0

 Param(

  [Parameter(Mandatory=$true)]

  [string]$path)

 Push-Location

 Set-Location -Path $path

 Get-Item . |

 Select-Object -ExpandProperty property |

 ForEach-Object {

 New-Object psobject -Property @{"property"=$_;

    "Value" = (Get-ItemProperty -Path . -Name $_).$_}}

 Pop-Location

} #end function Get-RegistryKeyPropertiesAndValues

To use the Get-RegistryKeyPropertiesAndValues function to obtain registry key properties and their associated values, pass the path to it. For example, the image that follows illustrates the Volatile Environment registry key. On the right are a large number of registry key properties.

Image of Registry Editor

Open the Windows PowerShell ISE and load the function by opening the script that contains it. Next load the function into memory by clicking the run button (or pressing F5). When it is loaded into memory, call the function by typing the function name in the immediate window and providing a path to a specific registry key. For this example, use the HKCU:\Volatile Environment registry key (make sure you put quotation marks around the entire path to the registry key). Use the Format-Table cmdlet to display the property name and then the property value. The following command illustrates a typical command line.

Get-RegistryKeyPropertiesAndValues -path 'HKCU:\Volatile Environment' | ft property, value -AutoSize

This technique is shown in the following image.

Image of command output

BW, that is all there is to obtaining the value of the registry property values under a specific registry key. Registry Week will continue tomorrow when I will talk about modifying registry property values.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Use PowerShell to Easily Modify Registry Property Values

Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use the PowerShell registry provider to easily modify registry property values.

Microsoft Scripting Guy, Ed Wilson, is here. It is finally the weekend. It seems like it has been a rather long week. Of course, each week is only 168 hours long, but this one has seemed long. It is due in part to the travel for recent conferences, various meetings, and an unexpected dinner with Windows PowerShell tweeps. It has been raining quite a bit this week, and the air outside has a fresh spring rain smell to it. I am sitting on the lanai sipping a cup of Irish Breakfast tea and munching on a blueberry scone that the Scripting Wife made last night. I have set aside most of today to work on my Windows PowerShell 3.0 Step-by-Step book that will be published this summer with Microsoft Press. (Cool! I just noticed that you can preorder it on Amazon.)

Note   This is the sixth blog in a series of Hey, Scripting Guy! Blogs that discuss using the Registry provider. The first blog, Using the Registry Provider to Simplify Registry Access, posted on Monday. Tuesday I discussed using the *restore* cmdlets to perform a system state backup of a computer prior to manipulating the registry. On Wednesday I talked about creating new registry keys and assigning default values. In the fourth blog, I talked about creating new registry keys on remote computer systems. I also discussed creating registry property values. In the fifth blog, I created a function and a script that enumerate all the registry properties and their associated values. The function and script return a custom object that permits further work with the output by using standard Windows PowerShell functionality. For additional information about working with the registry via Windows PowerShell, see this collection of blogs.

Modifying the value of a registry property value

To modify the value of a registry property value requires using the Set-ItemProperty cmdlet.

Only the steps…

Modifying the value of a registry property value:

  1. Use the Push-Location cmdlet to save the current working location.
  2. Use the Set-Location cmdlet to change to the appropriate registry drive.
  3. Use the Set-ItemProperty cmdlet to assign a new value to the registry property.
  4. Use the Pop-Location cmdlet to return to the original working location.

In the image that follows, a registry key named hsg exists in the HKCU:\Software hive. The registry key has a property named NewProperty.

Image of Registry Editor

When you know that a registry property value exists, the solution is really simple. You use the Set-ItemProperty cmdlet and assign a new value. The code that follows saves the current working location, changes the new working location to the hsg registry key, uses the Set-ItemProperty cmdlet to assign new values, and then uses the Pop-Location cmdlet to return to the original working location.

The code that follows relies on positional parameters for the Set-ItemProperty cmdlet. The first parameter is Path. Because the Set-Location cmdlet sets the working location to the hsg registry key, a period identifies the path as the current directory. The second parameter is the Name of the registry property to change. In this example, it is NewProperty. The last parameter is Value, and that defines the value to assign to the registry property. In this example, it is mynewvalue. Thus, the command with complete parameter names would be:

Set-ItemProperty -Path . -Name newproperty -Value mynewvalue

The quotation marks in the code that follows are not required, but they do not harm anything either.

Here is the code:

PS C:\> Push-Location

PS C:\> Set-Location HKCU:\Software\hsg

PS HKCU:\Software\hsg> Set-ItemProperty . newproperty "mynewvalue"

PS HKCU:\Software\hsg> Pop-Location

PS C:\>

Of course, all the pushing and popping and setting of locations are not really required. It is entirely possible to change the registry property value from any location within the Windows PowerShell provider subsystem.

Only the step…

The short way to change a registry property value:

  1. Use the Set-ItemProperty cmdlet to assign a new value. Ensure that you specify the complete path to the registry key.

Here is an example of using the Set-ItemProperty cmdlet to change a registry property value without first navigating to the registry drive.

PS C:\> Set-ItemProperty -Path HKCU:\Software\hsg -Name newproperty -Value anewvalue
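
To confirm that the change took effect, read the value straight back. This is simply the retrieval technique from earlier in the week, applied to the same path and property name.

# Should return anewvalue after the preceding command runs.
(Get-ItemProperty -Path HKCU:\Software\hsg -Name newproperty).newproperty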

Dealing with a missing registry property value

If the registry property exists, you can set its value easily by using the Set-ItemProperty cmdlet. But what if the registry property does not exist? How do you set the property value then? It turns out that the Set-ItemProperty cmdlet handles this case as well: it creates the missing property and assigns the value in one step.

Set-ItemProperty -Path HKCU:\Software\hsg -Name missingproperty -Value avalue

Determining whether a registry key exists is easy: use the Test-Path cmdlet. It returns True if the key exists and False if it does not. This technique is shown here.

PS C:\> Test-Path HKCU:\Software\hsg

True

But unfortunately, this technique does not work for a registry key property. It always returns False—even if the registry property exists. This is shown here.

PS C:\> Test-Path HKCU:\Software\hsg\newproperty

False

PS C:\> Test-Path HKCU:\Software\hsg\bogus

False

Therefore, if you do not want to overwrite a registry key property if it already exists, you need a way to determine if the registry key property exists—and using the Test-Path cmdlet does not work.

One of the cool things about writing the Hey, Scripting Guy! Blog is the interaction with the readers. One such reader, Richard, mentioned the problem of using Test-Path to determine if a registry property exists prior to calling the Set-ItemProperty. This is the technique he suggested, and it works great.

Only the steps…

Testing for a registry key property prior to writing a new value:

  1. Use the if statement and the Get-ItemProperty cmdlet to retrieve the value of the registry key property. Specify an ErrorAction (ea is an alias) of SilentlyContinue (0 is the enumeration value).
  2. In the script block for the if statement, display a message that the registry property exists, or simply exit.
  3. In the else statement, call the Set-ItemProperty to create and to set the value of the registry key property.

This technique is shown here.

if((Get-ItemProperty HKCU:\Software\hsg -Name bogus -ea 0).bogus) {'Property already exists'}

ELSE { Set-ItemProperty -Path HKCU:\Software\hsg -Name bogus -Value 'initial value'}

The use of this technique appears in the image that follows. The first time, the bogus registry key property value does not exist and it is created. The second time, the registry key property already exists and a message to that effect appears.

Image of command output
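
If you use Richard's pattern in more than one place, it is easy to wrap it in a small helper. The function that follows is only a sketch of the same if/else logic, and the Set-RegistryPropertyIfMissing name is hypothetical. Note that a property whose existing value is 0 or an empty string also tests as missing with this technique.

Function Set-RegistryPropertyIfMissing
{
 Param(
  [Parameter(Mandatory=$true)][string]$path,
  [Parameter(Mandatory=$true)][string]$name,
  [Parameter(Mandatory=$true)]$value)
 # Get-ItemProperty with an ErrorAction of SilentlyContinue returns nothing
 # when the property does not exist, so the if test succeeds only when it does.
 if((Get-ItemProperty -Path $path -Name $name -ErrorAction SilentlyContinue).$name)
   { "Property $name already exists" }
 ELSE
   { Set-ItemProperty -Path $path -Name $name -Value $value }
} #end function Set-RegistryPropertyIfMissing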

Well, this concludes Registry Week. Tomorrow, I answer a question about how to continue with the spirit of the Scripting Games.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Ten Ways to Continue the Spirit of the PowerShell Scripting Games

Summary: Microsoft Scripting Guy, Ed Wilson, answers the question about how to continue with the spirit of the Scripting Games.

Microsoft Scripting Guy, Ed Wilson, is here. I know it is the weekend, at least in Charlotte, North Carolina in the southern portion of the United States. But hey, I have been really busy this week, and I am behind on answering email sent to scripter@microsoft.com. So, I thought I would take today to answer one of my emails. Here we go…

Hey, Scripting Guy! Question Hey, Scripting Guy! I have to tell you that I am sort of in a daze. In fact, you might even say that I am becoming depressed. The reason? The 2012 Scripting Games are over, and I do not know what to do with all my spare time. I was used to going to work every day and checking out the Hey, Scripting Guy! Blog, reading about the Scripting Wife, keeping track of all the comments on the blogs and Twitter…and now it is all over. Can you do another Scripting Games? I have really taken my Windows PowerShell skills to the next level, and I do not want to lose that momentum.

—EB

 Hey, Scripting Guy! Answer Hello EB,

Microsoft Scripting Guy, Ed Wilson, is here. I can tell you for a fact, there will not be another Scripting Games this year. We will probably do one again next year, but it took me more than six months of planning for this year’s games, and it would be impossible to do that more than once a year. In addition to my time and efforts, more than 60 Microsoft employees, Microsoft MVPs, and other IT Pros rolled up their sleeves and pitched in to assist with grading and writing expert commentaries and blogs. The Scripting Games are a big deal, and they involve lots of work by lots of people.

So, EB, what can you do to continue the spirit of the Scripting Games? There are a number of ways to do this. In no particular order, here is a list.

  1. Join a local Windows PowerShell User Group. There is a master list of Windows PowerShell User Groups on the PowerShell Community Groups site. It lists user groups all over the world. There is even a virtual user group if a group does not exist in your area. Better yet, if a Windows PowerShell User Group does not exist in your area, start one. I have written about Windows PowerShell User Groups on several occasions. In one blog, Microsoft MVP, Tome Tanasovski, discussed why you might want to join a user group. Suffice it to say, joining a local Windows PowerShell User Group is a great way to continue the education you began with the Scripting Games.
  2. Follow the #PowerShell tag on Twitter. Many Windows PowerShell MVPs and other Windows PowerShell community leaders use Twitter, and they all (including the Scripting Wife and me) follow the #PowerShell tag. When you develop a large circle of friends on Twitter, you can quickly become overwhelmed by messages. By using a Twitter client (I am currently using MetroTwit, which sports a nice clean Windows Metro-type interface), you can create search columns that filter out tweets by search tags that people include in the messages. This is great when I am using the Twitter client on my Windows 7 Smartphone. On Twitter, you will see various people mention blogs about Windows PowerShell, videos that they have uploaded, Windows PowerShell User Group meetings, in addition to the general give-and-take of people asking questions. It is a great way to become engaged in the community. By the way, my twitter handle is @ScriptingGuys, and the Scripting Wife’s handle is @ScriptingWife. We would love for you to follow us.
  3. Join the Scripting Guys Group on Facebook. There are over three thousand members in that group, and it is a great way to interact with people who have similar interests.
  4. Become active in the Official Scripting Guys Forum. I talked about the value of becoming active in the forum in a blog that I wrote about how to learn Windows PowerShell. I gave an example from when I was first studying for the MCSE certification, but the lesson is still applicable. The forum is a great place to ask questions, and also a great place to learn Windows PowerShell better by helping others. There is an old adage (which is true), and it goes like this: “The best way to learn a subject is to teach it.” Answering questions on the Scripting Guys Forum is teaching—and a great way to learn.
  5. Share scripts on the Scripting Guys Script Repository. When you have shared a script with the world on the Scripting Guys Script Repository, you have entered an elite world of scripters because you are now giving back to the scripting community in a real and substantial way. It is also fun because now you get to see how well your script is received by the community. Hey, maybe it will become a Top Ten Script and be mentioned by me in a Hey, Scripting Guy! Blog.
  6. Watch the PowerScripting Podcast. The PowerScripting Podcast is a weekly podcast about Windows PowerShell that is recorded with a live audience. It is a lot of fun, and the interaction in the chat room makes it an enjoyable and substantial learning experience. The Scripting Wife does the scheduling for Microsoft MVP, Hal Rottenberg, and cohost, Jonathan Walz; and therefore, she is in the chat room each week. The advantage of recording live is that you have a chance to ask questions of the host via the chat room. Check it out.
  7. Check out the Scripting with Windows PowerShell site in the Scripting Guys Script Center. This site hosts a number of podcasts and two video series with associated quizzes that comprise a number of learning opportunities.
  8. Peruse the Scripting Community site. I list all of my upcoming appearances there. There is also an interactive map of Windows PowerShell User Groups in the U.S. and other resources.
  9. Read a book about Windows PowerShell. There are a number of very good books about Windows PowerShell on the market. Each book has a different emphasis, and I own many of the current releases. One of the things I hit pretty hard in the Scripting Games this year was Windows PowerShell best practices. Therefore, you may want to get a copy of my Windows PowerShell 2.0 Best Practices book. It was published by Microsoft Press, and it includes hundreds of scripts and dozens of sidebars written by the Windows PowerShell product group at Microsoft and the Windows PowerShell community. (By the way, if you are coming to TechEd 2012 in Orlando, I will be hosting the Windows PowerShell Best Practices “birds of a feather” session with cohost, Windows PowerShell MVP, Don Jones. It will be a great session.)
  10. Read the Hey, Scripting Guy! Blog on a daily basis. The Hey, Scripting Guy! Blog is one of the few blogs that publishes seven days a week, 365 days a year. This provides a substantial amount of new information about Windows PowerShell. In addition to being written in a casual and witty style, the content is excellent (if I do say so myself). Make it your Home page so you do not miss a single episode.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Discover the Easy Way to Begin Learning Windows PowerShell

Summary: Microsoft Scripting Guy, Ed Wilson, looks at Windows PowerShell naming conventions to simplify learning Windows PowerShell.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have a feeling that I am quickly falling behind the Windows PowerShell wave. I am an Exchange Server admin where I work, and we are planning our deployment of Exchange Server 2010. I was searching for information when I ran across some of your blogs about using Windows PowerShell with Exchange Server 2010. I quickly realized that I do not know very much about Windows PowerShell. Here is the problem: it seems like everyone else in the world knows way more about Windows PowerShell than I do. I know that when Windows Server “8” Beta ships, there will be yet another version of Windows PowerShell. So should I wait until Windows Server “8” Beta and then learn the new Windows PowerShell, or should I begin learning the current version of Windows PowerShell?

—GB

Hey, Scripting Guy! Answer Hello GB,

Microsoft Scripting Guy, Ed Wilson, is here. The other night, the Scripting Wife and I had dinner with Brian who is a member of the Charlotte Windows PowerShell User Group. In addition to answering lots of questions about Windows PowerShell and providing a decent amount of career guidance, the conversation also lingered for a while about tea. I thought about writing a blog called The Fundamentals of Tea—and I might do so on my personal blog. The problem is how does one describe the taste of Earl Gray tea to someone who has never tasted it before? Perhaps the best thing is to teach someone how to properly brew a nice cup of tea, and then allow them to taste some of my favorite teas. After all, there are literally hundreds of teas in the world, but they generally fall into just a few tea families. As you experience the flavors, you can find the types of tea that interest you.

GB, the situation with learning Windows PowerShell is similar to learning about the world of tea. There are a few basic things you need to learn. A little grouping can be done to aid with the different types of Windows PowerShell cmdlets, and then from there, the process is one of exploring where your interests lie.

Windows PowerShell cmdlet naming helps you learn

One of the great things about Windows PowerShell is the verb-noun naming convention. In Windows PowerShell, the verbs indicate an action to perform, such as Set to make a change or Get to retrieve a value. The noun indicates the item with which to work, such as a process or a service. By mastering the verb-noun naming convention, you can quickly hypothesize what a prospective command might be called. For example, if you need to obtain information about a process, and you know that Windows PowerShell uses the verb Get to retrieve information, you surmise that the command might very well be Get-Process. To obtain information about services, you try Get-Service…and once again you are correct.

Note   When “guessing” Windows PowerShell cmdlet names, always try the singular form first. Windows PowerShell prefers the singular form of nouns. It is not a design requirement, but it is a strong preference. Therefore, the cmdlets are named Get-Service and Get-Process, not Get-Services and Get-Processes.

To see the list of approved verbs, use the Get-Verb cmdlet. This is shown here.

Image of command output

There are nearly 100 approved verbs in Windows PowerShell 2.0. This is shown in the command that follows, where the Measure-Object cmdlet returns the count of verbs.

PS C:\> (get-verb | Measure-Object).count

96

Analyzing Windows PowerShell verb grouping

With nearly 100 verbs, you may be asking yourself, “How does an array of 100 verbs assist in learning Windows PowerShell?” You would be justified in asking that question. Nearly 100 different and unrelated items are really difficult to learn. However, the Windows PowerShell team grouped the verbs. For example, analyzing the common verbs reveals a pattern. The common verbs are listed here.

PS C:\> Get-Verb | where { $_.group -match 'common'} | Format-Wide verb -AutoSize

Add     Clear  Close  Copy   Enter  Exit   Find   Format Get    Hide   Join   Lock

Move    New    Open   Pop    Push   Redo   Remove Rename Reset  Search Select Set

Show    Skip   Split  Step   Switch Undo   Unlock Watch

The pattern for the verbs emerges when you analyze the verbs, for example: Add/Remove, Enter/Exit, Get/Set, Select/Skip, Lock/Unlock, Push/Pop, and so on. By learning the pattern to the common verbs, you quickly gain a handle on the Windows PowerShell naming convention.

By using the Windows PowerShell verb grouping, you can determine where to focus your efforts. The following command lists the Windows PowerShell verb grouping.

PS C:\> Get-Verb | select group -Unique

 

Group

-----

Common

Data

Lifecycle

Diagnostic

Communications

Security

Other
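
To see how many of the approved verbs land in each group, pipe Get-Verb to the Group-Object cmdlet. Here is a quick sketch (group, sort, and ft are aliases, as used elsewhere in this blog).

# Count the approved verbs in each group, largest group first.
Get-Verb | group group | sort count -Descending | ft count, name -AutoSize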

Analyzing Windows PowerShell verb distribution

Another way to get a better handle on the Windows PowerShell cmdlets is to analyze the verb distribution. Although there are nearly 100 approved verbs (not to mention unapproved verbs that are used by rogue developers), only a fraction of them are utilized repeatedly, and some of them are not used at all in a standard Windows PowerShell installation. By using the Group-Object (group is an alias) and the Sort-Object cmdlets (sort is an alias), the distribution of the cmdlets quickly becomes evident. The following command shows the verb distribution.

Get-Command -CommandType cmdlet | group verb | sort count -Descending

The command and the output associated with the command are shown in the image that follows.

Image of command output

The preceding output makes it clear that most cmdlets only use a few of the verbs. In fact, most of the cmdlets use one of only 10 verbs. This is shown here.

PS C:\> Get-Command -CommandType cmdlet | group verb | sort count -Descending | select -First 10

 

Count Name                      Group

----- ----                      -----

   46 Get                       {Get-Acl, Get-Alias, Get-AuthenticodeSignature, G...

   19 Set                       {Set-Acl, Set-Alias, Set-AuthenticodeSignature, S...

   17 New                       {New-Alias, New-Event, New-EventLog, New-Item...}

   14 Remove                    {Remove-Computer, Remove-Event, Remove-EventLog, ...

    8 Export                    {Export-Alias, Export-Clixml, Export-Console, Exp...

    8 Write                     {Write-Debug, Write-Error, Write-EventLog, Write-...

    7 Import                    {Import-Alias, Import-Clixml, Import-Counter, Imp...

    7 Out                       {Out-Default, Out-File, Out-GridView, Out-Host...}

    6 Add                       {Add-Computer, Add-Content, Add-History, Add-Memb...

    6 Start                     {Start-Job, Start-Process, Start-Service, Start-S...

In fact, of the 236 cmdlets, 138 use one of only 10 different verbs. This is shown here.

PS C:\> (Get-Command -CommandType cmdlet | measure).count

236

PS C:\> $count = 0 ; Get-Command -CommandType cmdlet | group verb | sort count -Descending |

select -First 10 | % {$count += $_.count ; $count}

46

65

82

96

104

112

119

126

132

138
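
Expressed as a percentage, that is well over half of the cmdlets. A quick sanity check follows; the 138 and 236 figures come from the preceding output.

# Roughly 58 percent of the 236 cmdlets use one of the top 10 verbs.
"{0:P0}" -f (138 / 236)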

Therefore, GB, all you need to do is to master 10 verbs and you will have a good handle on more than half of the cmdlets that ship with Windows PowerShell 2.0. Join me tomorrow when I will talk about more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Find PowerShell Commands by Using the Get-Command Cmdlet

Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use the Get-Command cmdlet in Windows PowerShell to find various commands.

Hey, Scripting Guy! Question Hey, Scripting Guy! It all seems so easy for you. You want to find something by using Windows PowerShell and ba-da-bing, it is there. But for those of us who have not been using Windows PowerShell since before it was ever released, what is the secret? I mean using Windows PowerShell does not seem hard—but there is so much of it. Every day, it seems that you perform this act, and it is like Windows PowerShell theatre. Of course, it works, but how did you find it in the first place? I hope I am making myself clear…I do not even know where to start.

—RS

Hey, Scripting Guy! Answer Hello RS,

Microsoft Scripting Guy, Ed Wilson, is here. Today the day seems late, or early, depending on one’s perspective. It is nearly noon, and the sun does not appear to have awakened. The skies are all gray, and a thin drizzle of rainfall is providing a nice drink of cool water for our parched trees and plants. The Scripting Wife and I took our tea on the lanai this morning as we discussed Microsoft TechEd 2012 in Orlando.

“You need to get everything planned,” she said. “TechEd will be here sooner than you think.”

“Actually, I think TechEd will arrive exactly on Monday, June 11. We have to have the Scripting Guys booth set up by noon on June 10. That is also when we will see Daniel Cruz, who is helping out in the booth. We may also get to see Rohn Edwards and Lido Paglia, the 2012 Scripting Games winners,” I said.

“It is going to be so much fun,” she replied, “And the TechEd party is going to be held at Universal Islands of Adventure theme park. How cool is that?”

“That is true, but I am really just looking forward to meeting people and to making new friends. For me, TechEd is all about the networking opportunities.”

“So, speaking of networking,” she began, “How are you doing about creating the Scripting Guys guest schedule?”

“Well, so far, I have Mark Minasi, Don Jones, Jeffery Hicks, Jeffrey Snover, and a few others scheduled to make guest appearances at the Scripting Guys booth. I also have a confirmation from O’Reilly press for my autograph session at their booth.”

“So are you going to let me in on the secret? Or are you going to keep it all to yourself?” she asked.

“Well, for now, I will keep it to myself,” I said with a smile.

“Well, then for now, I am heading out with my friends,” she said with a smile, “I think we are going to check out that new store that opened up near the interstate.”

And with no further ado, she was gone.

So, RS, I decided to head upstairs to check out the email sent to scripter@microsoft.com, and I ran across your email.

Note   Tomorrow night, May 16, 2012 at 9:30 PM Eastern Standard Time (-5 GMT), the two winners of the 2012 Scripting Games (Lido Paglia and Rohn Edwards), Jeffrey Snover, and myself all appear with Jonathan Walz and Hal Rottenberg on the PowerScripting Podcast. This event is recorded live, and there is a chat room set up so you can talk to your fellow listeners, in addition to asking questions of the guests. It will be a lot of fun, and you should not miss it if at all possible.

The easy way to find Windows PowerShell cmdlets

The first thing to keep in mind is that in Windows PowerShell, not everything is a cmdlet. There are language statements, functions, aliases, various objects (from the .NET Framework or COM), and even other executables—all of which are usable from within Windows PowerShell. In Windows 8 Consumer Preview, this means you have around 1000 commands from which to choose. In Windows 7, the situation is not quite so overwhelming, but you still need to know how to find what you want.
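
You can get a feel for the size of that command surface on your own machine; the exact number varies by system and by what is installed. Here is a minimal sketch.

# Counts every command type: cmdlets, functions, aliases, scripts, applications.
(Get-Command -CommandType All | Measure-Object).Count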

In Windows 7, much of the power of Windows PowerShell comes from WMI. (This is actually true in Windows 8 Consumer Preview also, but the WMI classes are exposed more directly.) For information about searching and working with WMI, see this collection of Hey, Scripting Guy! Blogs.

Note    In yesterday’s blog, Discover the Easy Way to Begin Learning Windows PowerShell, I talked about learning the Windows PowerShell Verb-Noun naming pattern as a way to develop an understanding of Windows PowerShell coverage. This technique will also aid in finding Windows PowerShell cmdlets.

Two cmdlets are essential for discovering Windows PowerShell commands. The first is the Get-Command cmdlet, and the second is the Get-Help cmdlet. At first glance, the Get-Command cmdlet might not appear to be all that useful. For example, you provide it with the name of a cmdlet, and basically what returns is the name of the cmdlet. This command is shown here.

Get-Command Get-Process

The command and the output associated with the command illustrate the problem. The default output does not appear to display much more information than you knew when you typed in the command. You already knew it was a cmdlet, you knew the name Get-Process, and the definition does not add much additional information.

Image of command output

But, remember, everything in Windows PowerShell is an object. In fact, the Get-Command cmdlet returns a CmdletInfo object. This is shown here.

PS C:\> Get-Command Get-Process | gm

 

   TypeName: System.Management.Automation.CmdletInfo

 

Name                MemberType     Definition

----                ----------     ----------

Equals              Method         bool Equals(System.Object obj)

GetHashCode         Method         int GetHashCode()

GetType             Method         type GetType()

ToString            Method         string ToString()

CommandType         Property       System.Management.Automation.CommandTypes Comm...

DefaultParameterSet Property       System.String DefaultParameterSet {get;}

Definition          Property       System.String Definition {get;}

HelpFile            Property       System.String HelpFile {get;}

ImplementingType    Property       System.Type ImplementingType {get;}

Module              Property       System.Management.Automation.PSModuleInfo Modu...

ModuleName          Property       System.String ModuleName {get;}

Name                Property       System.String Name {get;}

Noun                Property       System.String Noun {get;}

OutputType          Property       System.Collections.ObjectModel.ReadOnlyCollect...

Parameters          Property       System.Collections.Generic.Dictionary`2[[Syste...

ParameterSets       Property       System.Collections.ObjectModel.ReadOnlyCollect...

PSSnapIn            Property       System.Management.Automation.PSSnapInInfo PSSn...

Verb                Property       System.String Verb {get;}

Visibility          Property       System.Management.Automation.SessionStateEntry...

DLL                 ScriptProperty System.Object DLL {get=$this.ImplementingType....

HelpUri             ScriptProperty System.Object HelpUri {get=try...

Based on the members of the CmdletInfo object, there appears to be a lot of information available about the cmdlet. The easiest way to view this information is to pipe the output from the Get-Command cmdlet to the Format-List cmdlet and to use a wildcard character to choose all available properties. The command is shown here (fl is an alias for the Format-List cmdlet).

Get-Command Get-Process | fl *

The command, and the output associated with the command are shown in the image that follows.

Image of command output

In yesterday’s Hey, Scripting Guy! Blog, I talked about working with Windows PowerShell verbs as a way of understanding available commands. When you know and understand the various verbs, using the Get-Command cmdlet becomes much more valuable. For example, when you are looking for information about various items, you know you will more than likely use the get verb. Therefore, use the Get-Command cmdlet to retrieve only cmdlets that use the get verb. This command is shown here.

Get-Command -verb get

If you seek cmdlets that will assign a new value to something, you more than likely are looking for a cmdlet that uses the set verb. The following command retrieves these types of cmdlets.

Get-Command -verb set

Use nouns for cmdlets

One of the things that tends to confuse beginners is the difference between the name and the noun parameters. The Get-Command cmdlet has multiple parameter sets, and when you use the verb or the noun parameter, the Get-Command cmdlet only returns cmdlets. If you use the name parameter, Get-Command finds cmdlets, executables, functions, aliases, and other types of commands.

A good way to find commands is to use wildcards. For example, the following command returns any command containing the letters adapter within the name of the command. The output reveals that one command meets this filter.

PS C:\> gcm -Name *adapter*

CommandType     Name                               Definition

-----------     ----                               ----------

Application     AdapterTroubleshooter.exe          C:\Windows\system32\AdapterTro...

Note   Because the Get-Command cmdlet returns more than Windows PowerShell cmdlets, I often use it to help me locate various Windows executables. Because the Windows PowerShell console is generally open on my computer, this is often faster than using Windows Search.

In the following image, I first look for commands related to process. The first results (obtained by using the name parameter) contain a number of applications, in addition to several Windows PowerShell cmdlets. When I limit the results to only cmdlets that have a noun related to process, the results are more focused.
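
The two commands in that image are approximately the following (a sketch; the exact results depend on what is installed).

# Matches any command type whose name contains the letters process.
Get-Command -Name *process*
# Restricts the results to cmdlets whose noun is process.
Get-Command -Noun process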

Image of command output

RS, that is all there is to using the Get-Command cmdlet to search for cmdlets. Join me tomorrow when I will talk about additional ways to use Get-Command. It will be a very cool blog (and one not only for beginners).

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use the Get-Command PowerShell Cmdlet to Find Parameter Set Information

Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use the Windows PowerShell Get-Command cmdlet to discover information about parameter sets.

Hey, Scripting Guy! Question Hey, Scripting Guy! One thing I don’t understand is parameter sets. I have heard about them, but I do not really find a good source of documentation about them. What is up with these things?

—BB

Hey, Scripting Guy! Answer Hello BB,

Microsoft Scripting Guy, Ed Wilson, is here. Tonight is the night. At 9:30 PM Eastern Standard Time (-5 GMT) the two winners of the 2012 Scripting Games (Lido Paglia and Rohn Edwards), Jeffrey Snover, and myself appear with Jonathan Walz and Hal Rottenberg on the PowerScripting Podcast. This event is recorded live, and there is a chat room set up so you can talk to your fellow listeners and ask questions of the guests. Last year it was a lot of fun, and it should be a lot of fun this year too. Jeffrey Snover is always very interesting to listen to, and he is a great guy to talk to.

Understanding parameter sets

BB, parameter sets are different ways of using a Windows PowerShell command. For example, there are three ways to use the Get-Process cmdlet. The one most everyone knows about is using Get-Process to obtain information about a process by passing it a name. In fact, this is the default parameter set, and you do not need to do anything special to use this parameter set.

Get-Process notepad

You can use the Get-Command cmdlet to provide information about the default parameter set for a cmdlet. The following commands retrieve the default parameter set for the Get-Process cmdlet.

PS C:\> $a = gcm Get-Process

PS C:\> $a.DefaultParameterSet

Name

To view the other parameter sets, you can query the ParameterSets property. As illustrated here, the Get-Process cmdlet has three different parameter sets.

PS C:\> $a = gcm Get-Process

PS C:\> $a.ParameterSets | select name

 

Name

----

Name

Id

InputObject

Now BB, you may ask, “What is the big deal with a parameter set?” For one thing, it provides different ways of working with the same command. In addition, it prevents potential errors. Refer back to the Get-Process cmdlet. An error would potentially arise if you specified that you wanted to see processes that had the name calc.exe, but you also specified a process ID that related to the notepad.exe process. Therefore, the Name parameter set permits the name parameter, but excludes the id parameter. This is why, at times, you might see the following error message: “Parameter set cannot be resolved.”

As shown here, the message returns when you use parameters that, when taken together, do not map to one specific parameter set.

Image of error message

One of the best ways to see the different parameter sets is to examine the output from the Get-Help cmdlet. Each parameter set is detailed separately. This is shown in the image that follows.

Image of command output

Working with Get-Command

One of the problems with the Get-Command cmdlet is that the command definition returns as “unstructured” text. But because you are working with Windows PowerShell, it is rather easy to correct this situation. The string data is shown here.

PS C:\> $a = gcm get-process

PS C:\> $a.Definition

Get-Process [[-Name] <String[]>] [-ComputerName <String[]>] [-Module] [-FileVersionI

nfo] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPr

eference>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <Stri

ng>] [-OutBuffer <Int32>]

Get-Process -Id <Int32[]> [-ComputerName <String[]>] [-Module] [-FileVersionInfo] [-

Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreferenc

e>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>] [-

OutBuffer <Int32>]

Get-Process [-ComputerName <String[]>] [-Module] [-FileVersionInfo] -InputObject <Pr

ocess[]> [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <Acti

onPreference>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <

String>] [-OutBuffer <Int32>]

When attempting to index into the Definition property, individual letters return, because the definition is a single string and indexing a string returns characters. This problem appears here.

PS C:\> $a.Definition[0]

G

But using the Split operator turns the output into an array. The `r character is a special character that represents a carriage return. By splitting on these carriage returns, an array with three elements returns to the Windows PowerShell command prompt as shown here.

PS C:\> $a.Definition -split "`r"

Get-Process [[-Name] <String[]>] [-ComputerName <String[]>] [-Module] [-FileVersionI

nfo] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPr

eference>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <Stri

ng>] [-OutBuffer <Int32>]

 

Get-Process -Id <Int32[]> [-ComputerName <String[]>] [-Module] [-FileVersionInfo] [-

Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreferenc

e>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>] [-

OutBuffer <Int32>]

 

Get-Process [-ComputerName <String[]>] [-Module] [-FileVersionInfo] -InputObject <Pr

ocess[]> [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <Acti

onPreference>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <

String>] [-OutBuffer <Int32>]

Storing the output in a variable makes it easy to index into the array and retrieve a specific parameter set. This technique appears here.

PS C:\> $gps = $a.Definition -split "`r"

PS C:\> $gps[0]

Get-Process [[-Name] <String[]>] [-ComputerName <String[]>] [-Module] [-FileVersionI

nfo] [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPr

eference>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <Stri

ng>] [-OutBuffer <Int32>]

PS C:\> $gps[1]

 

Get-Process -Id <Int32[]> [-ComputerName <String[]>] [-Module] [-FileVersionInfo] [-

Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <ActionPreferenc

e>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <String>] [-

OutBuffer <Int32>]

PS C:\> $gps[2]

 

Get-Process [-ComputerName <String[]>] [-Module] [-FileVersionInfo] -InputObject <Pr

ocess[]> [-Verbose] [-Debug] [-ErrorAction <ActionPreference>] [-WarningAction <Acti

onPreference>] [-ErrorVariable <String>] [-WarningVariable <String>] [-OutVariable <

String>] [-OutBuffer <Int32>]

The ParameterSetInfo object

In addition to working with the definition of a Windows PowerShell cmdlet, the ParameterSets property contains an array of CommandParameterSetInfo objects. The following command stores the object in a temporary variable, and then displays the members of the CommandParameterSetInfo object.

PS C:\> $a = get-command get-process

PS C:\> $a.ParameterSets | gm

   TypeName: System.Management.Automation.CommandParameterSetInfo

 

Name        MemberType Definition

----        ---------- ----------

Equals      Method     bool Equals(System.Object obj)

GetHashCode Method     int GetHashCode()

GetType     Method     type GetType()

ToString    Method     string ToString()

IsDefault   Property   System.Boolean IsDefault {get;}

Name        Property   System.String Name {get;}

Parameters  Property   System.Collections.ObjectModel.ReadOnlyCollection`1[[Syste...

Because the ParameterSets property returns a collection, it is possible to index directly into the collection and return information from it. However, indexing into the collection produces an output that seems a bit strange. This output is shown in the image that follows.

Image of command output
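To reproduce what the image shows, index into the collection; this sketch assumes the same $a variable created earlier.

PS C:\> $a.ParameterSets[0]

The default view crams every parameter of the set into the single Parameters column, which is what makes the output look strange.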

When you pipe a single instance of the ParameterSets to the Get-Member cmdlet, everything still appears to be in the Parameters property. Therefore, it is time to use one of my favorite tricks (see favorite trick #3)—the ExpandProperty parameter from the Select-Object cmdlet. After I do this, everything falls into place. The two commands I use are shown here.

$a = get-command get-process

$a.ParameterSets[0] | select -ExpandProperty parameters

The commands, and the associated output are shown here.

Image of command output

Now it is trivial to see all of the parameters that are associated with a specific parameter set. In addition, the parameter aliases appear.

Note   For more information about parameter aliases, see Weekend Scripter: Discovering PowerShell Cmdlet Parameter Aliases.

PS C:\> $a = get-command get-process

PS C:\> $a.ParameterSets[0] | select -ExpandProperty parameters | ft name, ismandatory, aliases

 

Name                                         IsMandatory Aliases

----                                         ----------- -------

Name                                               False {ProcessName}

ComputerName                                       False {Cn}

Module                                             False {}

FileVersionInfo                                    False {FV, FVI}

Verbose                                            False {vb}

Debug                                              False {db}

ErrorAction                                        False {ea}

WarningAction                                      False {wa}

ErrorVariable                                      False {ev}

WarningVariable                                    False {wv}

OutVariable                                        False {ov}

OutBuffer                                          False {ob}
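One more trick before closing: the IsDefault property (visible in the earlier Get-Member output) flags the default parameter set. A minimal sketch appears here.

PS C:\> (Get-Command Get-Process).ParameterSets | select Name, IsDefault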

BB, that is all there is to working with parameter sets. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Finding and Using Windows PowerShell Documentation


Summary: Microsoft Scripting Guy, Ed Wilson, talks about sources of Windows PowerShell documentation.

Hey, Scripting Guy! Question Hey, Scripting Guy! I like Windows PowerShell, but it does not seem to come with any documentation. What is up with this? It seems that Microsoft makes this great product, and then they did not produce any docs for how to use it. I just don’t get it.

—WB

Hey, Scripting Guy! Answer Hello WB,

Microsoft Scripting Guy, Ed Wilson, is here. The Scripting Wife and I are upstairs in my office. I am reading through the email sent to scripter@microsoft.com, and she is on her laptop hanging out on Twitter, Facebook, and whatever else she is up to these days. I am sipping a nice cup of Earl Gray tea with a bit of jasmine flower in it. I had some Earl Gray tea in Virginia Beach with jasmine in it, and it was really quite nice. So I looked around, and I found a place that sells organic, food-grade jasmine flowers, and I have been dying to try it out. It makes for a delightful afternoon tea.

Finding Windows PowerShell Help

WB, I must admit that my first expression was, “Dude, you have got to be kidding me.” But on reflection, I have received many emails asking about the “missing Windows PowerShell documentation.” If I think back about Windows NT 3.51, the product shipped with about a dozen floppy disks stuffed in a legal-sized envelope, about four pounds of books, and a heavy-duty cardboard box that was capable of protecting the most delicate crystal glasses during a transatlantic oceanic voyage.

Although I liked the books that shipped with Windows NT 3.51 (I especially thought the troubleshooting book was excellent), I do know people that never opened those books. In addition, by the time Windows NT Server 3.51 received its fifth service pack, the books were terribly out-of-date. One thing we have done to help with the problem of out-of-date documentation is to enable books on demand. The Scripting Manager, Dave Bishop, wrote an excellent blog called How to Print Documents from the TechNet Library and MSDN.

So, the thing to keep in mind is that product documentation is changing. These days, we have many sources of information. In all of these different areas, Microsoft employees, Microsoft MVPs, and other community leaders take an active part in creating and in developing content. A quick review of some of the different sources is listed here.

  1. Video
  2. Wiki
  3. Library pages
  4. Blogs
  5. Forums
  6. Quizzes
  7. Script Repositories
  8. Books
  9. On-the-box
  10. Facebook
  11. Twitter
  12. Windows PowerShell on the Microsoft Download Center
  13. Windows PowerShell on CodePlex

The image that follows illustrates the two Windows PowerShell quizzes that are available from the Learn PowerShell page on the TechNet Script Center. The quizzes correspond to two series of videos also available from the Learn PowerShell page. The quizzes can be taken multiple times, and they represent a significant learning opportunity. Each question contains links to resources that explain the concept being tested.

Image of menu

Last year, the Windows PowerShell language specification was released. This several-hundred-page document provides detailed documentation about how the Windows PowerShell language works. It is a great resource for anyone who wants to develop a deep understanding of how Windows PowerShell really works. Also last year, an update to the on-the-box core Windows PowerShell Help was released in the Microsoft Download Center.

The online advantage

WB, one of the huge advantages of maintaining documentation online is that producing on-the-box documentation is a static process: a group of writers, editors, and tech reviewers develop the content, and it is printed and shipped. Most of the new online documentation, in contrast, represents an iterative process.

For example, I write a Hey, Scripting Guy! Blog based on an email I receive. The blog is edited and published. Next, someone adds a comment that I missed something. I can immediately go back into the blog and edit it to reflect the new comment. Often this happens the same day. If you have a question about something in my book, you email the publisher. The publisher emails me. I make a revision, and forward it back to the publisher. The publisher forwards it to the printer, and in the next printing, the change takes place. This process takes months, not days. And it is a process that occurs for only the most grievous of errors.

Working locally – searching globally

The Help that is returned from inside the Windows PowerShell console is basically out-of-date. It was out-of-date when the product shipped because of the lag time between when the documentation must be completed and the date when the final build of the product is ready to ship. Patches, updates, service packs, and other changes conspire to make the Help further out-of-date. Because of this situation (which is something we knew would happen), Windows PowerShell added the Online parameter. This permits access to the latest Help updates at all times. The technique of using the Online parameter is shown here, where I obtain Help about the Get-Process cmdlet from the online Microsoft TechNet Library.

Get-Help Get-Process -Online

When you execute the command, the Windows PowerShell console returns control to you. Internet Explorer opens to the appropriate page in the TechNet Library with the latest documentation for the command for which you inquired—in this case the Get-Process cmdlet.

Image of menu
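Even when you are offline, the on-the-box conceptual topics remain useful. A quick sketch for browsing them is shown here.

PS C:\> Get-Help about_*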

A way cool thing to do is to combine the online Help feature with the Build-a-Book feature that the Scripting Manager described in How to Print Documents from the TechNet Library and MSDN. This technique is shown in the image that follows.

Image of menu

WB, that is all there is to finding and using Windows PowerShell documentation. Join me tomorrow for a guest blog by Microsoft PFE, Jason Walker, as he talks about how to retrieve USB drive usage history. You DO NOT want to miss this most excellent blog—just sayin’.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Use PowerShell to Find the History of USB Flash Drive Usage


Summary: Microsoft premier field engineer, Jason Walker, shows how to use Windows PowerShell to get a history of USB drive usage.

Microsoft Scripting Guy, Ed Wilson, is here. I was talking to Jason Walker at the Charlotte Windows PowerShell User Group the other day. I asked him what cool things he was doing with Windows PowerShell, and he discussed a script he had recently written. I encouraged him to write a guest blog about the script. Today’s blog is a result of that conversation.

Photo of Jason Walker

Jason Walker is a premier field engineer (PFE) at Microsoft, and he supports customers in the public arena. His primary job is supporting Exchange Server, but he jumps at the opportunity to flex his Windows PowerShell muscles to resolve any issue that may come up. It does not matter if it is related to Exchange Server. Jason also actively participates in the Charlotte PowerShell Users Group.

Twitter: AutomationJason

USB ports are an awesome resource on any computer. They allow you to quickly connect and use accessories such as mice, keyboards, and storage devices, just to name a few. However, USB storage devices are also a popular vector for bad guys to get nefarious code onto a machine. With my customers being in the public sector, security is a top priority, and USB storage devices are not allowed. We could disable USB ports altogether, but that would eliminate the ability to use other USB devices. Therefore, users are told to simply not use USB storage devices. The need came up to see if the users are playing by the rules, and Windows PowerShell was the answer to that need.

“How do we find out if a USB storage device has been connected to a computer?” you ask. We look in the registry, of course. When a USB storage device is inserted into a machine, the USBSTOR key is created in the registry, and everything the operating system needs to know about that storage device is contained in that key. This is the complete path:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USBSTOR

Image of menu

When we expand the USBSTOR key, we see all the USB storage devices that have been used on the computer.

Image of menu

We see here that 15 different USB storage devices have been used on this machine. I know what you are thinking: “Whoever owns this machine is not very security conscious!”

By looking at the subkey names, we can get an idea about what kind of storage device was used, but the data isn’t easily readable. When we dig deeper, we find a FriendlyName property that is easily readable.

Image of menu

Now we can see that a “SanDisk U3 Cruzer Micro USB Device” was used on this machine.

To get this information, all we need Windows PowerShell to do is start from the USBSTOR key, recurse down two levels of subkeys, and grab the FriendlyName property. There are a couple of ways we can get this data. We could do it in one line like this:

Image of command output
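That one-liner appears only as an image, so here is a minimal sketch of such a command using the registry provider on the local machine. This is my reconstruction, not necessarily the exact command pictured.

Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Enum\USBSTOR\*\*' -Name FriendlyName -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty FriendlyName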

But if we want to get this data from a remote machine, this will not work unless Windows PowerShell remoting is enabled—and in most cases it is not. I believe the method that will work in the greatest number of scenarios is the Microsoft.Win32.RegistryKey class.

Note   If you have Windows PowerShell remoting enabled, see last week’s Hey, Scripting Guy! blogs for a different approach to working with the registry. Saturday’s blog, Use PowerShell to Easily Modify Registry Property Values, contains links to the entire series.

I will explain the code that does all the heavy lifting.

First, starting at USBSTOR, we get all the subkeys:

# Assumes $Hive = [Microsoft.Win32.RegistryHive]::LocalMachine and $Computer = the target computer name
$Reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey($Hive,$Computer)

# Assumes $Key = 'SYSTEM\CurrentControlSet\Enum\USBSTOR'
$USBSTORKey = $Reg.OpenSubKey($Key)

$USBSTORSubKeys1 = $USBSTORKey.GetSubKeyNames()

Then we go through each subkey ($USBSTORSubkeys1) and collect the child subkeys. We store them in $SubKeys2 as shown here:

ForEach($SubKey1 in $USBSTORSubKeys1)
{
    $Key2 = "SYSTEM\CurrentControlSet\Enum\USBSTOR\$SubKey1"
    $RegSubKey2 = $Reg.OpenSubKey($Key2)
    # GetSubKeyNames() can return more than one name, so record one path per child key
    # ($Subkeys2 is initialized earlier in the script as an empty array)
    ForEach($SubKeyName2 in $RegSubKey2.GetSubKeyNames())
    {
        $Subkeys2 += "$Key2\$SubKeyName2"
    }
    $RegSubKey2.Close()
}#end foreach SubKey1

Now we go through each key in $Subkeys2 and grab the FriendlyName property of the key. The value of the property and the name of the computer are stored in a custom object so we can easily send our data to Export-CSV or filter it with Where-Object.

ForEach($Subkey2 in $Subkeys2)
{
    $USBKey    = $Reg.OpenSubKey($Subkey2)
    $USBDevice = $USBKey.GetValue('FriendlyName')
    # Store the device name and computer name in a custom object
    # ($USBDevices is initialized earlier in the script as an empty array)
    $USBDevices += New-Object -TypeName PSObject -Property @{
        USBDevice = $USBDevice
        Computer  = $Computer
    }
    $USBKey.Close()
}#end foreach SubKey2

A few things to note: the Remote Registry service must be running on the remote machine. I added a Write-Progress cmdlet because when I first wrote this script, it ran against hundreds of machines. The Test-Connection cmdlet is not strictly needed because the connection to the remote machine is inside the Try\Catch. However, during testing, it took more than 30 seconds for the script to move to the next computer when trying to connect to a machine that was not online, so I added it as an option.
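A minimal sketch of that optional connectivity check (the surrounding logic is simplified) is shown here.

# Optionally skip machines that do not answer a single ping
if (-not (Test-Connection -ComputerName $Computer -Count 1 -Quiet)) {
    Write-Warning "$Computer appears to be offline; skipping."
}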

The script is written as a function, and it has comment-based Help with examples. Because it is written as a function, it will need to be dot-sourced. This is done by placing a dot and a space in front of the path to the script. When this is done, you can use the function just as you would a native Windows PowerShell cmdlet. Here is an example of how this is done:

Image of command output
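Here is a minimal sketch of the same idea; the script path and the Get-USBHistory function name are hypothetical stand-ins for the real ones.

# Dot-source the script to load the function into the current session
. C:\Scripts\Get-USBHistory.ps1
# Then call the function as you would a native cmdlet
Get-USBHistory -ComputerName Server01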

Thanks, and I hope you can find this script useful when it’s time for you to flex your Windows PowerShell muscles.

~Jason

 

The complete script can be downloaded from the Script Repository. Thank you for a great guest blog. I have often seen this registry key, but never thought about using it in the way you have here. Awesome job.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Scripting Wife Creates Her Ideal TechEd 2012 Schedule


Summary: The Scripting Wife plays WhatIf and creates her ideal Microsoft TechEd 2012 schedule.

Microsoft Scripting Guy, Ed Wilson, is here. The Scripting Wife likes to use WhatIf in her daily musings. She has also been hearing from a lot of people who are going to Microsoft TechEd for the first time. Last year was her first TechEd experience, and she hopes to improve her takeaways this year. When asked what her schedule would look like if she was not working at the Scripting Guys booth, she sat down with the catalog and came up with the following schedule. I will warn you that Teresa is very flexible, so although this would be her proposed schedule on paper, it by no means indicates what will or would actually occur.

For example, she may have chosen a session to attend based on the content, but then she may meet a really interesting person after we arrive who is presenting at the same time as a session she had already written down as attending. The next thing I know, I may be getting an email from her on my Windows 7 smartphone that says, “My plans have changed and I am going to attend a different session than is on my schedule. I met this really cool person and I want to go to his session.” In truth, the chances are good that if you want to talk to the Scripting Wife, come by the Scripting Guys booth during nearly any hour that the Connect Zone is open, and she will probably be there. She loves talking to people who want to learn Windows PowerShell, but do not know where or how to begin.

Now without further ado, here is the Scripting Wife’s schedule for Microsoft TechEd 2012.

Image of TechEd logo

Hello everyone! Scripting Wife is here. I need to say a couple of things so you can follow what I have written and chosen. By the way, do you know how hard it is to choose between Don Jones and Mark Minasi? Both are dear friends, and both are excellent speakers.

First, I am going to list the three items that I will work on while I am not attending a session. There is one demo in The Learning Center (TLC) and there are two hands-on labs.

Demo in S. Hall A:

Windows Server 2012 Server Manager and PowerShell | WSV07-TLC

Whatever the size of your organization, you want to maximize IT operational cost efficiency. Windows Server 2012 offers excellent total cost of ownership as an integrated platform with comprehensive, multiserver manageability. It delivers capabilities to manage many servers and the devices connecting them—whether they are physical or virtual, and whether they are on-premises or off. Specifically, Windows Server 2012 provides new multi-machine management capabilities, automation, improved compliance with industry management technology standards, and experiences which are unified across physical and virtual platforms. Visit the Windows Server 2012 Management booth to hear from experts about all the great new features.

Lab in S. Hall B:

What's New in Windows PowerShell 3.0 | WSV11-HOL

This lab gets you up and running with the all-new Server Manager in Windows Server 2012. Topics covered include: deploying roles and features to a remote server, configuring a role on a remote server, monitoring remote servers, and troubleshooting a remote server. You will leave this lab with a solid grasp of how to use the new Server Manager to quickly and easily perform common server management tasks, local and remote.

Lab in S. Hall B:

Introduction to Windows PowerShell 3.0 Fundamentals | WSV12-HOL

This lab gets you up and running with Windows PowerShell, Microsoft's latest shell environment and scripting language. Topics covered include: file system navigation, help and discovery features, PowerShell scripts, working with WMI, working with the system registry, and working with Active Directory. You will leave this lab with a solid grasp of how to use PowerShell to quickly and easily perform common IT management tasks, and have a solid foundation to learn advanced PowerShell topics.

And now for my session schedule...

Monday, June 11

9:00 AM - 10:30 AM

Keynote Session

11:30 AM - 1:15 PM

Lunch

1:15 PM - 2:30 PM

The 12 Reasons to Love Microsoft SQL Server 2012 | DBI202

Speakers: Dandy Weyn, Thomas LaRock

In this demo-only session, discover 12 good reasons to love SQL Server 2012. Learn about AlwaysOn, ColumnStore Indexing, Data Quality Services, Transact-SQL Enhancements, Power View, SQL Server Data Tools, and so much more in a very demo-driven session. By the end of the session you will understand the key features and improvements in SQL Server 2012, and how they work together.

3:00 PM - 4:15 PM

PowerShell Remoting in Depth | WCL403

Speaker: Don Jones

Remoting is a foundation feature in Windows PowerShell, and is poised to become one of the most important protocols on your network, especially for remote client management and support. Do you know how it works? Can you troubleshoot it? Can you configure it in a variety of scenarios to meet your organization's needs for security and operations? Don Jones, PowerShell author and MVP, walks you through the nitty-gritty details of Remoting, showing examples for a variety of scenarios and covering its troubleshooting features in great detail.

4:45 PM - 6:00 PM

Windows PowerShell Crash Course | WSV321

Speakers: Don Jones, Jeffrey Snover

Windows PowerShell 3.0 is here, and it is delivering on Microsoft's promise to make nearly everything in Windows manageable from the command-line. Are you finally going to learn the shell, or learn to say, "Would you like fries with that?" instead? Join PowerShell author, columnist, trainer, and MVP Don Jones (one of the world's most well-known PowerShell experts) in a crash course that shows you how to use the shell's key features. No scripting experience needed—you'll use the shell as it is meant to be used to accomplish real administrative tasks with just a few commands. Also, learn how the shell can teach you how to use itself, setting you up for success with the new wave of Microsoft and third-party enterprise products.

6:00 PM - 9:00 PM

TechExpo Welcome Reception

Tuesday June 12

10:15 AM - 11:30 AM

Inside Windows Server 2012 Multi-Server Management Capabilities | WSV306

Speakers: Erin Chapple, Jeffrey Snover

Windows Server 2012 will offer excellent total cost of ownership as an integrated platform with comprehensive, multicomputer manageability. Two areas in which Windows Server 2012 improves multicomputer management are Server Manager and Windows PowerShell 3.0. Server Manager in Windows Server 2012 helps you efficiently deploy and manage roles and features on the local server, on remote servers, and on both online and offline virtual hard disks. It also provides a multiserver experience where you can centralize your Windows Server management in a single view, and streamline your server configuration and deployment from the same window. Windows PowerShell 3.0 provides an extensive platform to help you manage server roles and automate management tasks. With access to over 2300 cmdlets (a tenfold increase from the previous version), Windows PowerShell 3.0 offers comprehensive management across your datacenter. This session overviews these subjects in detail and prepares you for enhancing your management capability.

11:30 AM - 1:30 PM

Lunch

1:30 PM - 2:45 PM

Legal Structures of User Groups | BOF05-ITP

Room S329

Should your group incorporate or apply for non-profit status? What are the benefits and pitfalls of creating legal structures for IT Professional groups? Do you charge membership fees to the members to pay the legal obligations?

3:15 PM - 4:30 PM

Standards Support and Interoperability in Windows Server 2012: Storage, Networking, and Management | WSV308

Speakers: Gene Chellis, Jeffrey Snover, See-Mong Tan, Wojtek Kozaczynski

Windows has always implemented formal and industry standards but the Windows Server 2012 mission to be a Cloud OS required us to take this to a new level with investments in storage, networking and management standards. For example, in Windows Server 2012 the WSMAN standard is now the primary management protocol, with DCOM provided for backwards compatibility. Clearly this is not your father’s Windows Server. This 200-level session covers formal “de jure” standards including management (such as SMI-S, WSMAN and CIM), networking (such as IPv6, IPSec, DCB, DCTCP, NVGRE, ECMA Power standards and RDMA), and storage (such as NFS and iSCSI). We describe the technology, how Windows supports it, what benefits it brings, and what you need to change/do in order to get those benefits. We also discuss “de facto” standards such as VSS and SMB and describe how Microsoft enables and promotes interoperability using conferences, protocol documentation, plugfests and more to make Windows Server 2012 the most interoperable operating system on the planet. IT pros should come learn how Windows Server 2012 simplifies the tasks of architecting and running systems and how you should update your equipment purchasing guidelines. Developers and Partners should come to learn about new opportunities and preferred mechanisms to interoperate with Windows Server 2012.

5:00 PM - 6:15 PM

Extending Applications to Everywhere! Your Guide to Securing RDS RemoteApps for the Internet | WSV311

Speaker: Greg Shields

Your job as IT administrator is all about applications and data. You need to protect your users' data, and you need to ensure access to it via applications. But today's workforce requires us to make our applications available from everywhere. Creating your own Internet-based cloud infrastructure for corporate applications is quickly becoming a need for every environment, both large and small. You can create this today for very little cost using Microsoft Remote Desktop Services. RDS Guru and Microsoft MVP Greg Shields has been working with Terminal Services since its introduction in Windows NT 4.0 Terminal Services Edition, and in this deep dive session he presents the step-by-step approach to building a cloud-based remote applications infrastructure with enough industrial-grade security that even the most secure of networks will allow it. RDS has for too long been relegated to the sidelines. Attend this session and make secure, scalable, and inexpensive cloud applications a reality for your company.

6:15 PM - 9:00 PM

Community Night. There will be a Scripting Guys area!

9:00 PM - 1:00 AM

Jam Sessions

Wednesday June 13

8:30 AM - 9:45 AM

Group Policy Reporting and Analysis with Windows PowerShell | WSV415

Speakers: Jeffery Hicks, Jeremy Moskowitz

In this session led by PowerShell MVP Jeffery Hicks and Group Policy MVP Jeremy Moskowitz, we discuss techniques for analyzing Group Policy objects to identify potential problems. We look at ways to identify Group Policy settings using PowerShell scripts and third-party tools.

11:30 AM - 1:30 PM

Lunch

1:30 PM - 2:45 PM

Turn PowerShell Commands into Reusable CLI and GUI Tools | WCL404

Speakers: Don Jones

Say you have written an incredible PowerShell command or script. How do you leverage that as a tool across your entire organization? Learn to turn those commands and scripts into tools that can be safely used by your less-technical colleagues, or even end users! PowerShell MVP and author Don Jones demonstrates a variety of approaches, including cmdlet-like command-line tools all the way up to fully-distributable GUI tools that you can build yourself!

3:15 PM - 4:30 PM

Windows PowerShell Best Practices | BOF11-ITP

Led by Windows PowerShell MVP, Don Jones, and Microsoft Scripting Guy, Ed Wilson

All of Microsoft's current operating systems and most of their server-based products have deep PowerShell integration, and that trend will continue. Same goes for the hardware and software vendors out there, large and small. You need to be on this train! You need to script! How do you know if you are doing it right? What does that even mean? Can you save time by using tried and tested techniques?

5:00 PM - 6:15 PM

Advanced Automation Using Windows PowerShell 3.0 | WSV414

Speakers: Hemant Mahawar, Travis Jones

This session showcases the improvements to Windows PowerShell introduced in Windows Server 2012. Demos for this talk include how IT administrators can create cmdlet functionality without the need for programming skills, PowerShell Remoting, Jobs, Modules, Debugging, and Constrained Endpoints, among others.

Thursday, June 14

8:30 AM - 9:45 AM

How to Tell Your Manager You Need Quotas on Your Mailboxes | EXL203

Speaker: Bhargav Shukla

As systems offer bigger and bigger mailboxes and Microsoft has published whitepapers on large mailbox vision, it is important to understand the infrastructure to support such designs, what your company desires, and what the cost and operational impacts are. This session provides intelligent discussion about factors involved and cost benefits, to help you drive your infrastructure towards better manageability and stability.

10:15 AM - 11:30 AM

Application Monitoring with Microsoft System Center Operations Manager 2012 | MGT302

Speakers: Aakash Mandhar, Daniele Muscetta

In this session we provide a summary of the application monitoring capabilities enabled through System Center 2012 Operations Manager against Microsoft .NET and Java applications, covering the platforms and supported configurations. See how System Center 2012 Operations Manager can provide critical availability, performance, and reliability information without requiring any custom instrumentation to the code or custom management packs.

11:30 AM - 1:00 PM

Lunch

7:30 PM - 12:00 AM

Closing Party

That is my WhatIf schedule, and I think it has a nice balance to it. Hopefully, you will find it useful. Keep in mind, Microsoft TechEd 2012 is a huge event, and the Orlando Expo Center (according to the Scripting Guy) is like three miles long. So you should definitely bring comfortable walking shoes and also wear clothes that are comfortable. I look forward to seeing you if you get to come to TechEd 2012 in Orlando. There are still some slots available, so there is still time to register.

~Scripting Wife

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Scripting Guy Reveals His TechEd 2012 Schedule


Summary: The Scripting Guy reveals his Microsoft TechEd 2012 schedule and all the Windows PowerShell goodness thereof.

Microsoft Scripting Guy, Ed Wilson, is here. Yesterday we played WhatIf with the Scripting Wife about what sessions she would choose to attend at TechEd 2012 if she was not involved in meeting and greeting people at the Scripting Guys booth. Today, I am going to share my real schedule. Hope to see you at some of these sessions and events. Please don’t be shy; come on by and say, “Hi.”

Pre TechEd 2012:

Friday night, June 8: I will be at the speaker’s dinner for SQLSaturday #132 in Pensacola, FL.

Saturday, June 9: I will be speaking at and attending other sessions at SQLSaturday #132 in Pensacola, FL.

At Microsoft TechEd 2012 in Orlando

Here is what I plan to be doing during TechEd 2012…

Sunday, June 10

Morning Registration

11:00 AM

Sign off on the Scripting Guys booth

12:30 PM – 5:00 PM

INETA Community Leadership Summit

8:00 PM – 11:00 PM 

The Krewe Meet N Greet

Monday, June 11

7:00 AM - 8:30 AM

Breakfast

9:00 AM - 10:30 AM

Keynote Session

10:30 AM - 12:00 PM

Scripting Guys booth in the Connect zone

12:00 PM – 1:00 PM

Lunch

3:00 PM - 4:15 PM

PowerShell Remoting in Depth | WCL403

4:45 PM - 6:00 PM

The Network Files, Case #53: Diagnosing Diseases of DNS | WSV313

6:00 PM – 9:00 PM

Scripting Guys booth with the following guest schedule:

6:00 PM

Special guest at the Scripting Guys booth will be Don Jones. Be sure to come by, say hello, and get an autograph.

7:00 PM

Special guests, Rohn Edwards and Lido Paglia, winners from the 2012 Scripting Games.

7:30 PM

Blain Barton will be at the Scripting Guys booth to interview Rohn and Lido about their experiences in the 2012 Scripting Games for IT Time-TechNet Radio.

Tuesday, June 12 

10:00 AM

Blain Barton interviews me at the O’Reilly booth just prior to a book signing for IT Time-TechNet Radio.

10:30 AM - 11:00 AM

Book signing at the O’Reilly booth

11:30 AM - 12:30 PM

Lunch

12:30 PM – 5:00 PM

Scripting Guys booth

6:15 PM - 9:00 PM

Scripting Guys area at Community Night in North Hall B

Wednesday, June 13

8:30 AM - 9:45 AM

Group Policy Reporting and Analysis with Windows PowerShell | WSV415

10:30 AM – 3:00 PM

Scripting Guys booth                

10:30 AM

Special guest at the Scripting Guys booth, Jeffery Hicks, so come by and meet ‘n greet with Jeffery.

3:15 PM - 4:30 PM

Windows PowerShell Best Practices | BOF11-ITP  

I am cohosting this session with Don Jones.

5:00 PM - 6:15 PM

Advanced Automation Using Windows PowerShell 3.0 | WSV414

Thursday, June 14

10:30 AM - 2:00 PM 

Scripting Guys booth

7:30 PM - 12:00 AM

Closing Party

Friday, June 15

8:00 AM – 12:00 PM

Build a Solution on Windows PowerShell 3.0 at the Rosen Center

And to conclude the week with more Windows PowerShell goodness…

Off to Jacksonville, FL for the Jacksonville IT Pro Camp speaker’s dinner.

Saturday, June 16 

8:00 AM – 5:00 PM  

I will attend and speak at the Jacksonville IT Pro Camp.

Some special notes…

The Scripting Guys booth will have two people at all times, and about half the time, there will be three people there. Besides myself and Teresa (The Scripting Wife), Daniel Cruz will be joining us to answer your questions and to interact with you. A big THANK YOU to Dan for volunteering to help out this year.

I will update this page with information about more special guests that will be showing up at the booth. So bookmark this page so you can keep track of the schedule during the week of TechEd 2012 in Orlando.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Understanding the Six PowerShell Profiles


Summary: Microsoft Scripting Guy, Ed Wilson, discusses the six different Windows PowerShell profiles, and when to use each.

Hey, Scripting Guy! Question Hey, Scripting Guy! Dude, I have been reading some of the posts in your most excellent blog; and first of all, I want to say I think you are great. Now for the question: I do not get the Windows PowerShell profile. I mean I get it, but not really. Here is part of my problem. I put some things in the profile, and then I go back and they are not there. Like, what is up with that? I hope you can help me. By the way, I am, like, totally looking forward to seeing you and the Scripting Wife at TechEd 2012 in Orlando. You will know me, because I sort of look like Urkel, and I always wear plaid shirts (but I don’t wear suspenders).

—BB

Hey, Scripting Guy! Answer Hello BB,

Microsoft Scripting Guy, Ed Wilson, is here. Last week was an absolutely great week. The Scripting Wife and I had dinner one night with Rich from the NYC Windows PowerShell Users Group (he is also a moderator for the Scripting Guys forum and writer of a couple of guest blogs). Rich was kind enough to bring me some Gunpowder green tea, and I am sipping some right now. It is wonderful with a half teaspoon of organic lavender added to the pot. We also had the PowerScripting Podcast with the two winners of the 2012 Scripting Games and Jeffrey Snover. That conversation was fun and informative. I also enjoy talking to Jeffrey, and I look forward to sitting in on at least one of his sessions at Microsoft TechEd 2012.

Six, count ‘em, six different PowerShell profiles

BB, there is no doubt that you are a bit confused with Windows PowerShell profiles. There are, in fact, six different profiles. The Windows PowerShell console and the Windows PowerShell ISE have their own profiles. In addition, there are profiles for the current user and profiles for all users. The table that follows lists the six profiles and their associated locations.

Current User, Current Host - console: $Home\[My ]Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

Current User, All Hosts: $Home\[My ]Documents\WindowsPowerShell\Profile.ps1

All Users, Current Host - console: $PsHome\Microsoft.PowerShell_profile.ps1

All Users, All Hosts: $PsHome\Profile.ps1

Current User, Current Host - ISE: $Home\[My ]Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1

All Users, Current Host - ISE: $PsHome\Microsoft.PowerShellISE_profile.ps1

 

Understanding the six Windows PowerShell profiles

The first thing to do to understand the six Windows PowerShell profiles is to keep in mind that they move. They change (sort of like the staircases at Hogwarts). As long as you realize that they are a moving target, you will be fine. In most cases, when talking about the Windows PowerShell profile, people are referring to the current user, current host profile. In fact, if no one qualifies the Windows PowerShell profile with its associated scope or description, it is safe to assume that they are talking about the Current User, Current Host profile.

Note   A Windows PowerShell profile (any one of the six) is simply a Windows PowerShell script. It has a special name, and it resides in a special place, but it is simply a script. In this regard, it is sort of like the old-fashioned autoexec.bat batch file. Because the Windows PowerShell profile is a Windows PowerShell script, you must set the script execution policy to allow scripts to run prior to configuring and using a Windows PowerShell profile. For information about the script execution policy, refer to this collection of Hey, Scripting Guy! Blogs.
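For example, this sketch shows one common setting; it permits locally created scripts to run and requires remote scripts to be signed (run it from an elevated console).

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned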

Examining the $profile variable

When you query the $profile automatic variable, it returns the path to the Current User, Current Host profile. This makes sense, and it is a great way to easily access the path to the profile. The following script illustrates this technique from within the Windows PowerShell console.

PS C:\> $profile

C:\Users\ed.IAMMRED\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

Inside the Windows PowerShell ISE, when I query the $profile automatic variable, I receive the output that is shown here.

PS C:\Users\ed.IAMMRED> $profile

C:\Users\ed.IAMMRED\Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1

To save you a bit of analyzing…

The difference between the Windows PowerShell console Current User, Current Host profile path and the Windows PowerShell ISE Current User, Current Host profile path is three letters: ISE.

BB, these three letters are probably causing you problems. More than likely, you are setting something in your Windows PowerShell console profile, and it is not available inside the Windows PowerShell ISE.

Unraveling the profiles

You can pipe the $profile variable to the Get-Member cmdlet and see additional properties that exist on the $profile variable. This technique is shown here.

PS C:\> $PROFILE | Get-Member -MemberType noteproperty | select name

 

Name

----

AllUsersAllHosts

AllUsersCurrentHost

CurrentUserAllHosts

CurrentUserCurrentHost

If you are accessing the $profile variable from within the Windows PowerShell console, the AllUsersCurrentHost and the CurrentUserCurrentHost note properties refer to the Windows PowerShell console. If you access the $profile variable from within the Windows PowerShell ISE, the AllUsersCurrentHost and the CurrentUserCurrentHost note properties refer to the Windows PowerShell ISE profiles.

Using the $profile variable to refer to more than the current host

When you reference the $profile variable, by default it refers to the Current User, Current Host profile. If you pipe the variable to the Format-List cmdlet, it still refers to the Current User, Current Host profile. This technique is shown here.

PS C:\> $PROFILE | Format-List *

C:\Users\ed.IAMMRED\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

This leads to a bit of confusion, especially because the Get-Member cmdlet reveals the existence of multiple profiles and multiple note properties. The way to see all of the profiles for the current host is to use the Force parameter. It reveals the hidden properties. The command illustrating this technique is shown here.

$PROFILE | Format-List * -Force

The command and the associated output from the command are shown in the image that follows.

Image of command output

It is possible to directly access each of these specific properties just like you would access any other property—via dotted notation. This technique is shown here.

$PROFILE.CurrentUserAllHosts

The paths to each of the four profiles for the Windows PowerShell console are shown in the image that follows.

Image of command output
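For reference, the four commands behind that output are simply the four note properties, as shown in this sketch.

$PROFILE.AllUsersAllHosts
$PROFILE.AllUsersCurrentHost
$PROFILE.CurrentUserAllHosts
$PROFILE.CurrentUserCurrentHost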

Determine if a specific profile exists

To determine if a specific profile exists, use the Test-Path cmdlet and the appropriate flavor of the $profile variable. For example, to determine if a Current User, Current Host profile exists, you can use the $profile variable with no modifier, or you can use the CurrentUserCurrentHost note property. The following example illustrates both of these.

PS C:\> test-path $PROFILE

True

PS C:\> test-path $PROFILE.CurrentUserCurrentHost

True

PS C:\>

In the same manner, the other three profiles that apply to the current host (in this example, I am using the Windows PowerShell console) are determined to not exist. This is shown in the code that follows.

PS C:\> test-path $PROFILE.AllUsersAllHosts

False

PS C:\> test-path $PROFILE.AllUsersCurrentHost

False

PS C:\> test-path $PROFILE.CurrentUserAllHosts

False

PS C:\>

Creating a new profile

To create a new profile for the current user, all hosts, use the CurrentUserAllHosts property of the $profile automatic variable and the New-Item cmdlet. This technique is shown here.

PS C:\> new-item $PROFILE.CurrentUserAllHosts -ItemType file -Force

    Directory: C:\Users\ed.IAMMRED\Documents\WindowsPowerShell

 

Mode                LastWriteTime     Length Name

----                -------------     ------ ----

-a---         5/17/2012   2:59 PM          0 profile.ps1

To open the profile for editing, use the ise alias as shown here.

ise $PROFILE.CurrentUserAllHosts

When you are finished editing the profile, save it, close the Windows PowerShell console, reopen the Windows PowerShell console, and test that your changes work properly.
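If you would rather not risk overwriting an existing profile, you can guard the creation with Test-Path. A minimal sketch is shown here.

# Force creates the WindowsPowerShell folder if it is missing
if (-not (Test-Path -Path $PROFILE.CurrentUserAllHosts)) {
    New-Item -Path $PROFILE.CurrentUserAllHosts -ItemType File -Force | Out-Null
}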

BB, that is all there is to using the $profile variable to discover different Windows PowerShell profiles. Windows PowerShell Profile Week will continue tomorrow when I will talk about editing and testing a Windows PowerShell profile.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Deciding Between One or Multiple PowerShell Profiles


Summary: Microsoft Scripting Guy, Ed Wilson, discusses some of the decision points between using one or multiple Windows PowerShell profiles.

Hey, Scripting Guy! Question Hey, Scripting Guy! OK, so I understand that there are different types of Windows PowerShell profiles (I liked your blog yesterday by the way). But you failed on one major point: How do I know which profile to use? For example, suppose it was different types of tea—you have an Earl Grey, an English Breakfast, and a generic green tea. When would you drink which tea? I hope this makes my question clear. Thanks.

—BH

Hey, Scripting Guy! Answer Hello BH,

Microsoft Scripting Guy, Ed Wilson, is here. The Scripting Wife and I are getting ready for Atlanta TechStravaganza 2012. It happens on Friday June 1, 2012, and there are just a few tickets remaining. This event is going to be held at the Microsoft Office in Atlanta. It is free, and it features a number of awesome speakers (including me). There is, in fact, an entire Windows PowerShell track (as well as tracks for Windows Server and System Center). This high profile, high impact event will be awesome. So how did the Scripting Wife and I get ready for the Atlanta TechStravaganza? We were on the website of my favorite computer store making a shopping list because no trip to Atlanta is complete without a trip to my favorite computer store. My list is already running two pages long, so maybe I need to implement a dedup routine.

Note   This is the second in a series of four blogs that discuss the Windows PowerShell profile. The first blog, Understanding the Six PowerShell Profiles, appeared on Monday. For additional information about the Windows PowerShell profile, refer to this collection of blogs on the Hey, Scripting Guy! Blog.

Design considerations for profiles

The first thing to do when deciding how to implement your Windows PowerShell profile is to analyze the way in which you use Windows PowerShell. For example, if you confine yourself to running a few Windows PowerShell scripts from within the Windows PowerShell ISE, there is no reason to worry about a Windows PowerShell console profile. If you use a different Windows PowerShell scripting environment than the Windows PowerShell ISE, but you also work interactively from the Windows PowerShell console, you may need to add stuff to the other scripting environment’s profile (assuming it has one) in addition to the Windows PowerShell console profile. If you work extensively in the scripting environment and the Windows PowerShell console, and you find yourself desiring certain modifications to both environments, well…that leads to a different scenario.

There are three names used for the Windows PowerShell profiles. The names appear in the table that follows along with the profile usage.

Microsoft.PowerShell_profile.ps1: Refers to profiles (current user or all users) for the Windows PowerShell console.

profile.ps1: Refers to profiles (current user or all users) for all Windows PowerShell hosts.

Microsoft.PowerShellISE_profile.ps1: Refers to profiles (current user or all users) for the Windows PowerShell ISE.

The distinction between the Windows PowerShell ISE profiles and the Windows PowerShell console profiles is the letters ISE in the name of the Windows PowerShell ISE profiles. The location of the Windows PowerShell profile determines the scoping (whether the profile applies to the current user or to all users). All-users profiles (any one of the three profiles detailed in the previous table) appear in the Windows\System32\WindowsPowerShell\v1.0 directory, a location that is referenced by the $pshome variable. The following script illustrates using the $pshome variable to obtain this folder.

PS C:\Users\ed.IAMMRED> $PSHOME

C:\Windows\System32\WindowsPowerShell\v1.0

The folder that contains the three current user Windows PowerShell profiles is the WindowsPowerShell folder in the user’s mydocuments special folder. The location of the user’s mydocuments special folder is obtained by using the GetFolderPath method from the System.Environment .NET Framework class. This technique is shown here.

PS C:\> [environment]::getfolderpath("mydocuments")

C:\Users\ed.IAMMRED\Documents
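Putting the two pieces together, the following sketch builds the path to the current user's profile folder.

Join-Path -Path ([environment]::getfolderpath("mydocuments")) -ChildPath 'WindowsPowerShell'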

The table that follows details a variety of use-case scenarios, and it points to the profile to use for specific purposes. 

Near exclusive Windows PowerShell console work as a non-administrative user: MyDocuments\Microsoft.PowerShell_profile.ps1

Near exclusive Windows PowerShell console work as an administrative user: $PSHome\Microsoft.PowerShell_profile.ps1

Near exclusive Windows PowerShell ISE work as a non-administrative user: MyDocuments\Microsoft.PowerShellISE_profile.ps1

Near exclusive Windows PowerShell ISE work as an administrative user: $PSHome\Microsoft.PowerShellISE_profile.ps1

Balanced Windows PowerShell work as a non-administrative user: MyDocuments\profile.ps1

Balanced Windows PowerShell work as an administrative user: $PSHome\profile.ps1

Note   Depending on how you perform administrative work, you may decide that you want to use a current user type of profile. This would be because you log on with a specific account to perform administrative work. If your work requires that you log on with a number of different user accounts, it makes sense to use an all users profile.

Using more than one profile

Many Windows PowerShell users end up using more than one Windows PowerShell profile. This may not be intentional, but that is how it winds up. What happens is that they begin by creating a current user, current host profile via the Windows PowerShell $profile variable. (For more information about this, refer to yesterday’s Hey, Scripting Guy! Blog, Understanding the Six PowerShell Profiles.) After adding a number of great items to the Windows PowerShell profile, the user decides that it would be nice to have the same features in the Windows PowerShell console—or the Windows PowerShell ISE—whichever was not the original source. Therefore, after creating an additional profile, the user soon realizes there is a duplication of work.

Depending on how much you add to your Windows PowerShell profile, you may be perfectly fine with having multiple Windows PowerShell profiles. If your profile does not have very many items in it, using one Windows PowerShell profile for the Windows PowerShell console and another profile for the Windows PowerShell ISE may be a perfectly acceptable solution. Simplicity makes this approach work. For example, certain commands, such as the Start-Transcript cmdlet, do not work in the Windows PowerShell ISE. In addition, certain commands, such as those requiring STA, do not work by default in the Windows PowerShell console. By creating multiple $profile profiles (current user, current host), and only editing them from the appropriate environment, much of the complexity leaves the profile creation process.

However, it will not be long before duplication leads to inconsistency, which leads to frustration, and finally to a desire for correction and a solution. A better approach is to plan for multiple environments from the beginning.

Advantages of using more than one profile

  • Simple
  • $profile always refers to the correct profile
  • Removes concern about incompatible commands

When to use more than one profile

  • With a simple profile
  • When you do not have administrator or non-elevated user requirements

Disadvantages of using more than one profile

  • Duplication of effort
  • Inconsistencies between profiles (for variables, functions, PSDrives, and aliases)
  • Maintenance due to the number of potential profiles

Using one profile

If you need to customize the Windows PowerShell console and the Windows PowerShell ISE (or other Windows PowerShell host), and you need to log on with multiple credentials, your need for Windows PowerShell profiles increases exponentially. Attempting to keep a number of different Windows PowerShell profiles in sync quickly becomes a maintenance nightmare. This is especially true if you are prone to making quick additions to your Windows PowerShell profile when you see a particular need.

In addition to having a large number of different profiles, it is also possible for a Windows PowerShell profile to grow to inordinate proportions—especially when you begin to add many nicely crafted Windows PowerShell functions and helper functions. One solution to the problem of profile bloat (in fact, the best solution) is to use modules. My Windows PowerShell ISE profile uses four modules. The profile itself consists of the lines that load the modules.
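As a sketch, such a profile might contain nothing but import statements; the module names here are hypothetical.

# The entire profile: each module contains one slice of functionality
Import-Module -Name MyAliases
Import-Module -Name MyPSDrives
Import-Module -Name MyFunctions
Import-Module -Name MyPrompt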

Note   This approach of containing functionality for a profile inside modules, and then loading the modules from the profile file, is presented in the Hey, Scripting Guy! Blog, Create a Really Cool PowerShell ISE Profile.

Advantages of using one profile

  • Less work
  • Easier to keep different profiles in sync
  • Consistency from different Windows PowerShell environments
  • Portability (the profile can more easily travel to different machines)

When to use one profile

  • With more complex profiles
  • When your work requires multiple user accounts or multiple Windows PowerShell hosts
  • If your work takes you to different computers or virtual machines

Disadvantages of using one profile

  • More complex to set up
  • Requires more planning
  • $profile does not point to the correct location

BH, that is all there is to deciding about how to work with Windows PowerShell profiles. Windows PowerShell Profile Week will continue tomorrow when I will talk about setting up a single profile environment.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use a Central File to Simplify Your PowerShell Profile


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use a central file to simplify configuring your Windows PowerShell profile.

Hey, Scripting Guy! Question Hey, Scripting Guy! So can you tell me, without all the chatter, what is the best way to create a single Windows PowerShell profile? I don’t want to hear about a whole lot of other junk because I am rather busy. So please, as one American police person on television used to say, “Just the facts, just the facts.”

—ZQ

Hey, Scripting Guy! Answer Hello ZQ,

Microsoft Scripting Guy, Ed Wilson, is here. Today is the third day in a row that we have had the windows open and the air conditioner turned off. The cool breeze and sounds of birds playing in the lawn are a welcome relief to the incessant thrum of electric motors. I just fixed a pot of Gun Powder Green tea with a half spoonful of organic lavender and a cinnamon stick for my midmorning tea. I also opened the cookie jar, and retrieved the next to last Anzac biscuit. I gave the Scripting Wife the last Anzac biscuit in honor of her appearance tonight on the PowerScripting Podcast. She and Hal’s wife will be the guests on the show tonight, and it is sure to be a hoot. We did pretty well at making the Anzac biscuits last as long as possible. We cannot find Anzac biscuits in Charlotte, so a friend who is a Windows PowerShell MVP in Australia shuttled them to us via another Windows PowerShell MVP from Charlotte. We retrieved the elusive biscuits in the parking lot of the Microsoft office in Charlotte one night following a Windows PowerShell User Group meeting.

Note   This is the third in a series of four blogs that discuss the Windows PowerShell profile. The first blog, Understanding the Six PowerShell Profiles, appeared on Monday. The second blog, Deciding Between One or Multiple PowerShell Profiles, debuted on Tuesday. For additional information about the Windows PowerShell profile, refer to this collection of blogs on the Hey, Scripting Guy! Blog.

One Windows PowerShell profile—but which one?

One way to use a single Windows PowerShell profile is to put everything into the all users, all hosts profile. I know some companies that create a standard Windows PowerShell profile for everyone in the company, and they use the all users, all hosts profile as a means of standardizing their Windows PowerShell environment. The changes go in during the image build process, and therefore the profile is available to machines built from that image.

Advantages of using the all users, all hosts profile

  • Simplicity. One location for everything, especially when changes are added during the build process.
  • One file affects all Windows PowerShell users and all Windows PowerShell hosts.
  • No conflict between Admin users and non-admin users, both types of users use the same profile.
  • $profile.AllUsersAllHosts always points to the correct file.
  • Great for central management—one file for all users of a machine.

When to use the all users, all hosts profile

  • For a personal profile when duties require elevation and non-elevation of permissions across multiple Windows PowerShell hosts.
  • As part of a standard image build to deploy static functionality to numerous machines and users.

Disadvantages of using the all users, all hosts profile

  • You must have administrator rights on the current machine to make changes to the file.
  • No distinction between different hosts—some commands do not work in ISE, and others do not work in the Windows PowerShell console.
  • No distinction between administrator users and non-admin users. Non-admin users cannot run some commands.
  • The files are distributed among potentially thousands of different machines. To make one change to the profile, the file must be copied to all machines that are using that profile. This can be a major issue for computers such as laptops that connect only occasionally to the network. It is also a problem when attempting to use a shutdown script on a Windows 8 device (because Windows 8 devices do not perform a true shutdown).

Use your own file

Because the Windows PowerShell profile is a Windows PowerShell script (with the added benefit of having a special name and residing in a special location), anything that can be accomplished in a Windows PowerShell script can be accomplished in a Windows PowerShell profile. A much better approach to dealing with Windows PowerShell profiles is to keep the profile itself as simple as possible, and to bring in the functionality you require by other means. One way to do this is to add the profile information you require to a separate file, store that file in a central location, and then dot-source it from the profile.

Just the steps: Use a central profile script

  1. Create a Windows PowerShell script containing the profile information that you require. Include the aliases, variables, functions, Windows PowerShell drives, and commands to execute on start up of Windows PowerShell.
  2. In the Windows PowerShell profile script, dot-source the central profile file. The following command (placed in the $profile script) brings in functionality that is stored in a Windows PowerShell script named myprofile.ps1 that resides in a shared folder named c:\fso:

. c:\fso\myprofile.ps1
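If the shared location is not always reachable (a laptop that is off the network, for example), a slightly more defensive variation is to test for the file before dot-sourcing it so that a missing file does not generate an error at startup. This is a minimal sketch that assumes the same c:\fso\myprofile.ps1 path:

if (Test-Path -Path c:\fso\myprofile.ps1) { . c:\fso\myprofile.ps1 }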

The advantage of using a central Windows PowerShell script to store your profile information is that only one location requires updating when you add additional functionality to your profile. In addition, if folder permissions permit, the central Windows PowerShell script becomes available to any user for any host on the local machine. If you store this central Windows PowerShell script on a network file share, you need to update only one file for the entire network.

Advantages of using a central script for a PowerShell profile

  • One place to modify for all users and all hosts having access to the file.
  • Easy to keep functionality synchronized among all Windows PowerShell hosts and users.
  • Makes it possible to have one profile for the entire network.

When to use a central script for a PowerShell profile

  • Provide basic functionality among multiple hosts and multiple users.
  • Use for a single user who wants to duplicate capabilities between Windows PowerShell hosts.
  • Use to provide a single profile for networked computers via a file share.

Disadvantages of using a central script for a PowerShell profile

  • More complicated due to multiple files.
  • No access to the central file means no profile for the machine.
  • It is possible that non-role-specific commands become available to users.
  • More complicated to filter out specific commands for specific hosts.
  • One central script becomes very complicated to maintain when it grows to hundreds of lines.

ZQ, that is all there is to using a central file for your Windows PowerShell profile. Windows PowerShell Profile Week will continue tomorrow when I will talk about using a Windows PowerShell module for a profile.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use a Module to Simplify Your PowerShell Profile


Summary: Microsoft Scripting Guy, Ed Wilson, teaches how to use a Windows PowerShell module to simplify your profile.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have a problem and I hope you can provide some answers. My Windows PowerShell profile is, I guess, a bit excessive. I have a function that customizes my Windows PowerShell command prompt. I created a number of custom aliases and a couple of Windows PowerShell drives. So far, so good. The problem is all the customized functions that I wrote and use on a regular basis. I have them all pasted into my profile as well. I am not saying I wrote all this stuff. Heck no! For example, the Test-IsAdministrator and some of the other functions I use, I stole from you. The problem is that my profile is now more than 2,000 lines long. I have it grouped according to the way you said: aliases, variables, PS Drives, functions, and commands. But still it is a problem when I need to add stuff to it. I loved your blogs this week, because one problem I have is keeping things straight between the Windows PowerShell ISE and the Windows PowerShell console; but still, I need more help. Got any more good ideas?

—VG

Hey, Scripting Guy! Answer Hello VG,

Microsoft Scripting Guy, Ed Wilson, is here. Today, things are a bit groggy for me. I was up late last night listening to the Scripting Wife and Hal’s wife on the PowerScripting Podcast. Things always go a bit long there, with people hanging around in the chat room while Jon begins the lengthy process of editing the show. Although I do not exactly turn into a pumpkin after midnight, my brain does begin to resemble a gourd.

Note   This is the fourth in a series of four blogs discussing the Windows PowerShell profile. The first blog, Understanding the Six PowerShell Profiles, appeared on Monday. The second blog, Deciding Between One or Multiple PowerShell Profiles, debuted on Tuesday. The third blog in the series, Use a Central File to Simplify Your PowerShell Profile, appeared yesterday. For additional information about the Windows PowerShell profile, refer to this collection of blogs on the Hey, Scripting Guy! Blog.

Group similar functionality into a module

One of the main ways to clean up your Windows PowerShell profile is to group related items into modules. For example, suppose your Windows PowerShell profile contains a few utility functions such as the following:
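(What follows is a minimal sketch rather than VG’s actual list: Test-IsAdministrator is the function VG mentioned earlier, and Get-TempFileName is a hypothetical companion of the same utility flavor.)

Function Test-IsAdministrator
{
  # Returns $true when the current Windows identity holds the built-in
  # Administrator role (that is, when the host is running elevated).
  $identity = [Security.Principal.WindowsIdentity]::GetCurrent()
  $principal = New-Object Security.Principal.WindowsPrincipal($identity)
  $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
}

Function Get-TempFileName
{
  # Hypothetical helper: returns the path of a unique temporary file.
  [System.IO.Path]::GetTempFileName()
}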

All of these functions relate to the central theme of being utility types of functions. They are not specific to one technology, and they are in fact, helper functions that are useful in a wide variety of scripts and applications. It is also true that as useful as these utilities are, you might not need to use them everywhere, at all times. This is the advantage of moving the functionality into a module—you can easily load and unload them as required.

Where to store the profile module

There are two locations that modules use by default. The first is the user module location in the user’s MyDocuments special folder. The following command points to the user module location on my computer.

Join-Path -Path $home -ChildPath documents\windowsPowerShell\Modules

C:\Users\ed.IAMMRED\documents\windowsPowerShell\Modules

Note   The user module location does not exist by default. It must be created when you store the first module. For more information, refer to the Hey, Scripting Guy! blog, How Do I work with Windows PowerShell Module Paths.
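A minimal sketch of creating that folder, assuming the default Documents location, looks like this:

$userModulePath = Join-Path -Path $home -ChildPath Documents\WindowsPowerShell\Modules

if (-not (Test-Path -Path $userModulePath)) { New-Item -Path $userModulePath -ItemType Directory | Out-Null }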

The second location is in the System32 directory hierarchy. The following code uses the Join-Path cmdlet and the $pshome automatic variable to retrieve the system module folder on my system.

Join-Path -path $PSHOME -ChildPath modules

C:\Windows\System32\WindowsPowerShell\v1.0\modules

If you run your system as a nonelevated user, do not use the user module location for modules that require elevation of privileges. This will be an exercise in futility because once you elevate the user account to include admin rights, your profile shifts to another location, and then you do not have access to the module you were attempting to access.

Therefore, it makes sense to store modules that require admin rights in the System32 directory hierarchy, and to store modules that do not require admin rights in the user profile module location. When modules reside in one of the two default locations, Windows PowerShell automatically picks up on them and displays them when you use the ListAvailable parameter, as seen here.

Get-Module -ListAvailable

However, this does not mean that you are limited to modules from only the default locations. If you are centralizing your Windows PowerShell profile and storing it on a shared network drive, it makes sense to likewise store the module (and the module manifest) in the shared network location.

Note   Keep in mind that the Windows PowerShell profile is a script, as is a Windows PowerShell module. Therefore, your script execution policy impacts the ability to run scripts (and to load modules) from a shared network location. Even if you have a script execution policy of unrestricted, if you have not added the network share to your trusted sites in Internet Explorer, you will be prompted each time you open Windows PowerShell. You can use Group Policy to set the Internet Explorer trusted sites for your domain, or you can add them manually. You may also want to examine code signing for your scripts.

After you decide where to store your module (or modules), your Windows PowerShell profile mainly consists of a series of Import-Module commands. For example, the following commands comprise my Windows PowerShell ISE profile. The first four commands import four modules. The last three commands call functions from those modules to back up the profile, add menu items to the Windows PowerShell ISE, and create Windows PowerShell drives for the locations of my modules.

import-module PowerShellISEModule

import-module MenuModule

import-module SnippetModule

import-module copymodule

BackUp-Profile

Add-MenuItems

New-ModuleDrives

For more information about using a module to clean up your profile, refer to the Weekend Scripter blog, Clean Up Your PowerShell ISE Profile by Using a Module.

VG, that is all there is to using a module to simplify your Windows PowerShell profile. This also concludes Profile Week on the Hey, Scripting Guy! Blog. Join me tomorrow for a great blog by Bill Stewart about working with directory sizes. Bill is a moderator for the Hey, Scripting Guys! Forum, and he is a really great resource for the Windows PowerShell community.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Getting Directory Sizes in PowerShell


Summary: Guest blogger, Bill Stewart, discusses a Windows PowerShell function to determine folder size.

Microsoft Scripting Guy, Ed Wilson, is here. Guest blogger today is Bill Stewart. Bill Stewart is a scripting guru and a moderator for the Official Scripting Guys forum.

Here’s Bill…

You have probably asked this question hundreds of times, “How big is that folder?” Of course, the typical GUI way to find out is to right-click the folder in Windows Explorer and open the folder’s properties. As with all things GUI, this does not scale well. For example, what if you need the size for 100 different folders?

If you have worked with Windows PowerShell for any length of time, you know that it provides a pretty comprehensive set of tools. The trick is learning how to combine the tools to get the results you need. In this case, I know that Windows PowerShell can find files (Get-ChildItem), and I know that it can count objects and sum a property on objects. A simple example would be a command like this:

Get-ChildItem | Measure-Object -Sum Length

Get-ChildItem outputs a list of items in the current location (files and folders, if your current location is in a file system), and Measure-Object uses this list as input and adds together every input object’s Length property (file size). In other words, this command tells you the count and the sum of the sizes of all the files in the current directory.

The Measure-Object cmdlet outputs an object with five properties (including Count, Average, and Sum). However, we only care about the Count and Sum properties, so let us refine our command a bit:

Get-ChildItem | Measure-Object -Sum Length | Select-Object Count, Sum

Now we are using Select-Object to select (hence the name) only the two properties we care about. The end result is a new output object that contains only those two properties.

This is good as far as it goes, but I wanted my output object to include the directory’s name. In addition, while we are at it, let us use the names “Files” and “Size” instead of “Count” and “Sum.” To do this, I am going to output a custom object like this:

$directory = Get-Item .

$directory | Get-ChildItem |

  Measure-Object -Sum Length | Select-Object `

    @{Name="Path"; Expression={$directory.FullName}},

    @{Name="Files"; Expression={$_.Count}},

    @{Name="Size"; Expression={$_.Sum}}

I need $directory as a separate variable so I can include it in the output object. In addition, you can see here that I am using Select-Object with a set of hash tables as a shorthand technique for creating a custom output object.

In the following image (Figure 1), you can see the output from all three of the commands.

Image of command output

The output does not look that great, but remember that the presentation is less important than the content: We are outputting objects, not text. Because the output is objects, we can sort, filter, and measure.

To get the output we want (include the path name and change a couple of the property names), the commands can start to get a bit lengthy. So it makes sense to encapsulate the needed code in a script. Get-DirStats.ps1 is the script, and its syntax is as follows:

Get-DirStats [[-Path] <Object>] [-Only] [-Every] [-FormatNumbers] [-Total]

or

Get-DirStats -LiteralPath <String[]> [-Only] [-Every] [-FormatNumbers] [-Total]

As you can see, you can run the script by using two sets of mutually exclusive parameters. Windows PowerShell calls these parameter sets. The parameter sets’ names (Path and LiteralPath) are defined in the param statement at the top of the script, and the script’s CmdletBinding attribute specifies that Path is the default parameter set.

The Path parameter supports pipeline input. Also, the Path parameter is defined as being first on the script’s command line, so the Path parameter name itself is optional. The LiteralPath parameter is useful when a directory name contains characters that Windows PowerShell would normally interpret as wildcard characters (the usual culprits are the [ and ] characters). The Path and LiteralPath parameters are in different parameter sets, so they’re mutually exclusive.
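For example, consider a hypothetical directory named [backup]. With the Path parameter, the brackets are treated as a wildcard character class, so LiteralPath is the safer choice:

Get-DirStats -LiteralPath 'C:\Temp\[backup]'

With the Path parameter, you would have to escape the brackets with backtick characters instead.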

The Only parameter calculates the directory size only for the named path(s), but not subdirectories (as shown in Figure 1). Normally, when we ask about the size of a directory, we’re asking about all of its subdirectories also. If you only care about counting and summing the sizes of the files in a single directory (but not its subdirectories), you can use the Only parameter.

The Every parameter outputs an object for every subdirectory in the path. Without the Every parameter, the script outputs an object for the first level of subdirectories only. The following image shows what I mean.

Image of menu

If we use the following command, the script will output an object for every directory in the left (navigation) pane (if we expand them all, as shown in the previous image).

Get-DirStats -Path C:\Temp -Every

If we omit the Every parameter from this command, the script will only output the directories in the right pane. The script will still get the sizes of subdirectories if you omit the Every parameter; the difference is in the number of output objects.

The FormatNumbers parameter causes the script to output numbers as formatted strings with thousands separators, and the Total parameter outputs a final object after all other output that adds up the total number of files and directories for all output. These parameters are useful when running the script at a Windows PowerShell command prompt; but you shouldn’t use them if you’re going to do something else with the output (such as sorting or filtering) because the numbers will be text (with FormatNumbers), and there will be an extra object (with Total). The following image shows an example command that uses the FormatNumbers and Total parameters with US English thousands separators.

Image of command output

Get-DirStats.ps1 supports pipeline input, so it uses the Begin, Process, and End script blocks. The script uses the following lines of code within the Begin script block to detect the current parameter set and whether input is coming from the pipeline:

$ParamSetName = $PSCmdlet.ParameterSetName

if ( $ParamSetName -eq "Path" ) {

  $PipelineInput = ( -not $PSBoundParameters.ContainsKey("Path") ) -and ( -not $Path )

}

elseif ( $ParamSetName -eq "LiteralPath" ) {

  $PipelineInput = $false

}

The script uses the $ParamSetName and $PipelineInput variables later in the Process script block. The logic behind the definition of the $PipelineInput variable is thus: “If the Path parameter is not bound (that is, it was not specified on the script’s command line), and the $Path variable is $null, then the input is coming from the pipeline.”

The beginning of the script’s Process script block has the following code:

if ( $PipelineInput ) {

  $item = $_

}

else {

  if ( $ParamSetName -eq "Path" ) {

    $item = $Path

  }

  elseif ( $ParamSetName -eq "LiteralPath" ) {

    $item = $LiteralPath

  }

}

The $item variable will contain the path that the script will process. Thus, if the script’s input is coming from the pipeline, $item will be the current pipeline object ($_); otherwise, $item will be $Path or $LiteralPath (depending on the current parameter set).

Next, Get-DirStats.ps1 uses the Get-Directory function as shown here:

function Get-Directory {

  param( $item )

 

  if ( $ParamSetName -eq "Path" ) {

    if ( Test-Path -Path $item -PathType Container ) {

      $item = Get-Item -Path $item -Force

    }

  }

  elseif ( $ParamSetName -eq "LiteralPath" ) {

    if ( Test-Path -LiteralPath $item -PathType Container ) {

      $item = Get-Item -LiteralPath $item -Force

    }

  }

  if ( $item -and ($item -is [System.IO.DirectoryInfo]) ) {

    return $item

  }

}

The Get-Directory function uses Test-Path to determine if its parameter ($item) is a container object and a file system directory (that is, a System.IO.DirectoryInfo object).

If the Get-Directory function returned $null, the script writes an error to the error stream by using the Write-Error cmdlet and exits the Process script block with the return keyword.

After validating that the directory exists in the file system, the script calls the Get-DirectoryStats function, which is really the workhorse function in the script. The Get-DirectoryStats function is basically a fancy version of the commands run in Figure 1. Here it is:

function Get-DirectoryStats {

  param( $directory, $recurse, $format )

 

  Write-Progress -Activity "Get-DirStats.ps1" -Status "Reading '$($directory.FullName)'"

  $files = $directory | Get-ChildItem -Force -Recurse:$recurse | Where-Object { -not $_.PSIsContainer }

  if ( $files ) {

    Write-Progress -Activity "Get-DirStats.ps1" -Status "Calculating '$($directory.FullName)'"

    $output = $files | Measure-Object -Sum -Property Length | Select-Object `

      @{Name="Path"; Expression={$directory.FullName}},

      @{Name="Files"; Expression={$_.Count; $script:totalcount += $_.Count}},

      @{Name="Size"; Expression={$_.Sum; $script:totalbytes += $_.Sum}}

  }

  else {

    $output = "" | Select-Object `

      @{Name="Path"; Expression={$directory.FullName}},

      @{Name="Files"; Expression={0}},

      @{Name="Size"; Expression={0}}

  }

  if ( -not $format ) { $output } else { $output | Format-Output }

}

This function uses the Write-Progress cmdlet to inform the user running the script that something’s happening, and it uses a combination of the Get-ChildItem, Where-Object, Measure-Object, and Select-Object cmdlets to output a custom object. Note the use of the scoped variables ($script:totalcount and $script:totalbytes). These are used with the script’s Total parameter, and they are output in the script’s End script block.

Drop this script into a directory in your path, and you can quickly find the sizes of directories in your file system. Remember that it outputs objects, so you can add tasks such as sorting and filtering, for example:

Get-DirStats -Path C:\Temp | Sort-Object -Property Size

This command outputs the size of directories in C:\Temp, sorted by size.
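Filtering works the same way. For example, this hypothetical command surfaces only the subdirectories of C:\Temp that hold more than 100 MB of files:

Get-DirStats -Path C:\Temp -Every | Where-Object { $_.Size -gt 100MB }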

The entire script can be downloaded from the Script Repository.

~Bill

Thank you, Bill, for writing an interesting and useful blog. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: The Mission for “True-Up” Data Via Get-WmiObject


Summary: Guest blogger, Brian Wilhite, shares his experience using Windows PowerShell jobs to speed up collecting information from WMI.

Microsoft Scripting Guy, Ed Wilson, is here. This weekend, we will start with guest blogger, Brian Wilhite.

Brian Wilhite works as a Windows system administrator for a large health-care provider in North Carolina. He has over 15 years of experience in IT. In his current capacity as a Windows system administrator, he leads a team of individuals who have responsibilities for Microsoft Exchange Server, Windows Server builds, and management and system performance. Brian also supports and participates in the Charlotte PowerShell Users Group.
Twitter: Brian Wilhite

Take it away Brian…

Here in North Carolina, it’s getting warmer, the rain is falling, flowers are blooming, and nature is giving us a blast of pollen to last a lifetime—yes, spring is in full swing. Like most things that are common with this time of year, so is my organization’s Microsoft Enterprise Agreement “True-Up.” If you are like us, we gather information from various data sources, such as SCCM, Active Directory and/or a CMDB. I was on a mission, given to me by my manager, to track down all Windows Server installations in my organization’s domain. I began by querying Active Directory for computer objects that have a server operating system by using the Active Directory module in Windows PowerShell. To import the Active Directory module, I will have to, you guessed it, use Import-Module. It looks like this:

Image of command output

Note   Prior to running this cmdlet, to ensure that you have the Active Directory module available for import, you can run the following Windows PowerShell code:

            Get-Module -ListAvailable

If you do not have the Active Directory module available, you need to download and install the Remote Server Administration Tools from the Microsoft Download Center. In addition, for them to work on any domain earlier than Windows Server 2008, you need to install Active Directory Management Gateway Service.

Next, I use Get-ADComputer to query Active Directory for all computer objects that have “Server” in the OperatingSystem property, and I store the result as $AllServers. I also wanted to export this to a CSV file for review. Notice that I use the NoTypeInformation parameter. In my opinion, this should be the default; this parameter keeps the object type information from being written at the top of the CSV file.

$AllServers = Get-ADComputer -Filter {OperatingSystem -like "*Server*"} -Properties OperatingSystem | Select Name, OperatingSystem

$AllServers | Export-Csv -Path C:\Users\Brian\Desktop\AllServers.csv -NoTypeInformation

When reviewing the data shown in the following image, I realized that Windows Server 2008 populates the Standard, Enterprise, and Datacenter versions correctly within Active Directory. However, Windows Server 2003 does not:

Image of menu

Because the “True-Up” requires us to count what specific versions we’re running, I need to go one step further to gather that information. When I thought about how I was going to gather this data, I pondered for a bit, and then I decided that I would use my good old friend WMI. Because I already had the counts for the servers running Windows Server 2008, and I had the data filtering in Excel, I would shift my focus and effort to the servers running Windows Server 2003. To isolate them from the $AllServers variable that I created earlier, I ran the following code, selecting only the Name property:

$2003ServersOnly = $AllServers | Where-Object {$_.OperatingSystem -eq "Windows Server 2003"} | Select -ExpandProperty Name

Now that I had all the names of servers running Windows Server 2003 in a variable to itself, I ran the following code “trying” to export the data to a CSV file:

Get-WmiObject -Class Win32_OperatingSystem -ComputerName $2003ServersOnly | Select CSName, Caption | Export-Csv -Path C:\Users\Brian\Desktop\2003Servers.csv -NoTypeInformation

This should work because the ComputerName parameter accepts an array of computer names. It is also quicker than piping to ForEach-Object. However, about 20 minutes later, I noticed that the CSV file was not growing in size like it had earlier, so I terminated the one-liner. I gave it more thought, and I decided that I would try the ForEach-Object route to see where that would take me. That code follows:

$2003ServersOnly | ForEach-Object {Get-WmiObject -Class Win32_OperatingSystem -ComputerName $_} | Select CSName, Caption | Export-Csv -Path C:\Users\Brian\Desktop\2003Servers.csv -NoTypeInformation

That didn’t work either. About 20 minutes later, I noticed that it stalled once again. Back to the drawing board…

I gave it further thought, and I remembered a conversation that I had with a good friend of mine about the AsJob parameter for Get-WmiObject. It came about because of a function I wrote, Get-ComputerInfo, which serially queries a set of 8 or 9 WMI classes. He said that he made some modifications specifically around running Get-WmiObject -AsJob, so that the queries would run asynchronously, making the function execute faster. So I put that method into practice for the mission at hand. Instead of running the objects one at a time through the pipeline, why not create a job for each server and see if my one-liner continues to stall? Boy, was I in for a surprise on this one. I ran the following code:

$2003ServersOnly | ForEach-Object {Get-WmiObject -Class Win32_OperatingSystem -ComputerName $_ -AsJob}

With the snap of my fingers the job creation was off and running on over 1100 servers. It took maybe 30 seconds to create all the jobs. “Oh my…,” I thought in disbelief, “Did that just happen that fast? No way!” As it turns out, it most certainly did.

Image of command output

After the jobs finished scrolling by, I ran Get-Job to check the status. To my surprise, other than a few “Running” and “Failed” jobs, all were “Completed”:

Image of command output

Because all the jobs completed, I ran the following code to capture the data from the jobs and then exported the data to a CSV file: 

            $Jobs = Get-Job | Receive-Job | Select CSName, Caption

            $Jobs | Export-Csv -Path C:\Users\Brian\Desktop\2003Servers.csv -NoTypeInformation

Later I went back to find out exactly how long it took to kick the jobs off, and it appears that it ran in 17.267 seconds:

Image of command output
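One way to capture that kind of timing yourself is to wrap the one-liner in the Measure-Command cmdlet, which returns a TimeSpan object. Here is a sketch (not necessarily the exact command from the screenshot):

Measure-Command { $2003ServersOnly | ForEach-Object {Get-WmiObject -Class Win32_OperatingSystem -ComputerName $_ -AsJob} }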

Now keep in mind that after the jobs finished scrolling by within Windows PowerShell, I ran Get-Job and most had finished. So the total time taken to query that many servers was less than 25 to 30 seconds. That is AWESOME!
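For the handful of jobs that failed, a quick follow-up is to list the computers behind them so that you can retry or investigate those servers separately. A minimal sketch:

Get-Job -State Failed | Select-Object -ExpandProperty Location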

With this information, I was able to complete the “True-Up” mission that my manager had given me in a timely manner. I was so happy with the sheer “Power” of Windows PowerShell that I reached out to my friend and shared the story. And now I have the privilege of sharing it with the community. I enjoy working daily in Windows PowerShell, and I hope you do too. Thanks for taking the time to listen to me ramble on about the “Shell.”

~Brian

I don’t think you are rambling, Brian. In fact, I appreciate your enthusiasm. I can also tell you that the longer you work with Windows PowerShell, the greater that enthusiasm grows. I am constantly shouting out loud, “Dude, that is awesome!” So much so that the Scripting Wife simply ignores me—at least I think that is why she ignores me sometimes. Anyway, Brian, thank you so very much for taking the time to share your experience with the Windows PowerShell scripting community. Join me tomorrow for more Windows PowerShell goodness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Weekend Scripter: Use PowerShell to Manage Your Windows Azure Deployments


Summary: Guest blogger, Microsoft evangelist Brian Hitney, discusses using Windows PowerShell to create and to manage Windows Azure deployments.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have guest blogger, Brian Hitney. Brian is a developer evangelist at Microsoft. You can read more by Brian in his blog, Structure Too Big Blog: Developing on the Microsoft Stack.

Hi everyone, Brian Hitney here. I’m a developer evangelist at Microsoft, and I focus on Windows Azure. Recently, I was asked to give a presentation to the Windows PowerShell User Group in Charlotte, North Carolina about managing Windows Azure deployments with Windows PowerShell. I have not done too much work with Windows PowerShell and Windows Azure, so this was a perfect opportunity to learn!

Getting started with Windows Azure

To begin, let us briefly talk about Windows Azure. Windows Azure is the cloud computing platform by Microsoft. It comprises a number of offerings and services, the most obvious being application and virtual machine hosting. There are also storage facilities to store tabular data and binary objects such as documents, images, and so on. Of course, it includes SQL Azure, which is a fully managed, redundant SQL Server instance. What makes cloud computing so attractive is that it scales based on your needs. You can run a small website easily in an extra small instance (2 cents/hour, or about $15.00/month), or you can hit a button to scale out to dozens of 8-CPU servers.

Now that we know what it is, you will need an account. If you have an MSDN subscription, you are already there, and you have some great benefits as part of your subscription. If not, you are still in luck because you can use Windows Azure for free for 90 days.

Note: Oh, and a shameless plug: If you want to learn about the cloud and would like a fun activity to do in the process, check out our @home with Windows Azure project or RockPaperAzure. Both are designed to be fun, hands-on exercises.

Now that we know what Windows Azure is and how to get an account, let’s talk about the portal. The current version of the Windows Azure Platform portal is Silverlight based, and you would typically upload your files, create new hosted services, and so on, directly through this portal. An application consists of two files: a package file (.cspkg) and a configuration file (.cscfg), usually created from within Visual Studio. As a platform-as-a-service, Windows Azure manages the operating system, IIS, and patches automatically. In the following portal screenshot, you will see that I have a number of subscriptions, and within my MSDN subscription, I have two hosted services. You cannot tell from the screenshot without looking at the properties, but the top service (in the red box) is located in the North Central datacenter, and the bottom service (still coming online) is in the West Europe datacenter. There are four datacenters in North America, two in Europe, and two in Asia.

Image of menu

Adding Windows PowerShell to the mixture

Phew! We are finally ready to talk scripting! In addition to the portal, Windows Azure offers a REST-style management API that we can leverage (it is secured with X.509 certificates). Writing against this directly, while not impossible, is not a very fun task. Fortunately, there are Windows Azure PowerShell Cmdlets on CodePlex that wrap the complexity into simple to use cmdlets for creating and managing our Azure services.

Set up the project and upload certificate

The first step is to upload an X.509 certificate to the Windows Azure Management Portal. In a typical enterprise or managed environment, certificate requests would be managed through a central IT department with an internal certification authority (CA). For individual use, it’s simply easiest to create self-signed certificates. There is a lot of documentation about the various ways to do this, so we won’t go into that here. Check out How to Create a Management Certificate for Windows Azure or Windows Azure PowerShell Cmdlets for more information about creating a management certificate.

Developers will typically be working with Visual Studio, and there are a number of project templates for cloud applications. The templates make it easy to build a cloud app from the ground up, but it’s also easy to create an empty cloud project and bring preexisting websites and applications into the cloud project. In many cases, little or no code change is necessary to an ASP.NET-driven website. To download the SDK for Visual Studio, or to check out the other toolkits available, visit Windows Azure Downloads. As mentioned previously, a Windows Azure application will compile to two files: a package file and a configuration file. For the sake of convenience, they are attached at the end of this blog in a zip file. They can also easily be created in Visual Studio by right-clicking in Solution Explorer and selecting Package, as shown in the following image.

Image of menu

Setting up a new hosted service

In this demo, we will set up a new hosted service by using Windows PowerShell. In fact, we’ll do this in two separate datacenters (North Central and West Europe). We’ll also create a storage account for each application that is in the same datacenter. It’s typically a best practice to have at least one storage account in each datacenter where an application is hosted for performance and reduced bandwidth charges.

Set up the variables

To begin, we will set up some variables for our script:

Import-Module WAPPSCmdlets

 

$subid = "{your subscription id}"

$cert = Get-Item cert:\CurrentUser\My\01784B3F26B609044A56AC5B1CFEF287321420F5

$storageaccount = "somedemoaccount"

$storagekey = "{your key}"

 

$servicename_nc = "bhitneypowershellNC"

$servicename_we = "bhitneypowershellWE"

 

$storagename_nc = "bhitneystoragenc"

$storagename_we = "bhitneystoragewe"

 

$globaldns = "bhitneyglobal"

Your subscription ID can be obtained from the portal by clicking your account (shown earlier in the green box in the dashboard). You can obtain the thumbprint of a management certificate directly from your local certificate store (or by examining the file, if not in the local store). Or as shown here, you can look at the thumbprint in the portal by clicking Management Certificates and looking at the properties after you select the certificate:

Image of menu

In the following image, the somedemoaccount storage account is my “master storage account.” It’s where I’ll dump my diagnostics data (more on that in another blog post) and all of my global data that isn’t specific to a single deployment. Every storage account has a name (for example, somedemoaccount) and two access keys, which can be obtained from the dashboard also.

Image of menu

However, we will actually be creating storage accounts in script, too.

The $servicename variables are simply DNS names that will be created for our deployment. When they are deployed, the North Central deployment will have an addressable URL as bhitneypowershellNC.cloudapp.net, and the West Europe URL will be bhitneypowershellWE.cloudapp.net.

The $storagename variables are the names of the storage accounts we’ll be creating for these deployments. Although not completely necessary in this context, I’m including this primarily as an example.

Persist your profile information

One of the neat things you can do with the Windows Azure PowerShell cmdlets is save the subscription and related profile information into a setting that can be persisted for other scripts:

# Persisting Subscription Settings

Set-Subscription -SubscriptionName powershelldemo -Certificate $cert -SubscriptionId $subid

# Setting default Subscription

Set-Subscription -DefaultSubscription powershelldemo

# Setting the current subscription to use

Select-Subscription -SubscriptionName powershelldemo

# Save the cert and subscription id for subscriptions

Set-Subscription -SubscriptionName powershelldemo -StorageAccountName $storageaccount -StorageAccountKey $storagekey

# Specify the default storage account to use for the subscription

Set-Subscription -SubscriptionName powershelldemo -DefaultStorageAccount $storageaccount

This means that we don’t have to constantly define the certificate thumbprint, keys, subscription IDs, and so on. They can be recalled by using Select-Subscription. This will make writing future scripts easier, and the settings can be updated in one place instead of having to modify every script that uses these settings.
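For example, a later script can skip the certificate and subscription setup entirely and simply recall the saved settings before making calls:

Import-Module WAPPSCmdlets

Select-Subscription -SubscriptionName powershelldemo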

Set up the storage account and hosted service

Now let’s set up the storage account and hosted service:

# Configure North Central location

New-StorageAccount -ServiceName $storagename_nc -Location "North Central US" | Get-OperationStatus -WaitToComplete

New-HostedService -ServiceName $servicename_nc -Location "North Central US" | Get-OperationStatus -WaitToComplete

 

New-Deployment -serviceName $servicename_nc -StorageAccountName $storagename_nc `

            -Label MySite `

            -slot staging -package "D:\powershell\package\PowerShellDemoSite.cspkg" -configuration "D:\powershell\package\ServiceConfiguration.Cloud.cscfg" | Get-OperationStatus -WaitToComplete

 

Get-Deployment -serviceName $servicename_nc -Slot staging `

            | Set-DeploymentStatus -Status Running | Get-OperationStatus -WaitToComplete

 

Move-Deployment -DeploymentNameInProduction $servicename_nc -ServiceName $servicename_nc -Name MySite

We’re creating a new storage account, and because the operation is asynchronous, we’ll pipe to Get-OperationStatus -WaitToComplete to have the script wait until the operation is done before continuing. Next, we will create the hosted service. The hosted service is the container and the DNS name for our application. When this is done, the service is created; however, nothing is yet deployed. Think of it as a reservation.

Deploy the code

We will deploy the code by using the New-Deployment command. In this case, we’ll deploy the service (note the paths to package files) to the staging slot with a simple label of MySite. Each hosted service has a staging and production slot. Staging is billed and treated the same as production, but staging is given a temporary URL to be used as a smoke test prior to going live.

By default, the service is deployed, but in the stopped state, so we’ll set the status to running with the Set-DeploymentStatus command. When it is running, we’ll move it from staging to production with the Move-Deployment command.  

We’ll also repeat the same thing for our West Europe deployment. Let’s say, though, that we’d like to programmatically increase the number of instances of our application. That’s easy enough to do:

#Increase the number of instances to 2

Get-HostedService -ServiceName $servicename_nc | `

       Get-Deployment -Slot Production | `

       Set-RoleInstanceCount -Count 2 -RoleName "WebRole1"

This is a huge benefit of scripting! Imagine being able to scale an application programmatically either to a set schedule (Monday-Friday, or perhaps during the holiday or tax seasons), or based on criteria such as performance counters or site load.
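As a rough sketch of the scheduled case, you could compute the desired instance count from the current time and reuse the same pipeline (the hours and counts here are made-up values):

$count = if ((Get-Date).Hour -ge 8 -and (Get-Date).Hour -lt 18) { 4 } else { 2 }

Get-HostedService -ServiceName $servicename_nc | `

       Get-Deployment -Slot Production | `

       Set-RoleInstanceCount -Count $count -RoleName "WebRole1"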

When we have more than one instance of an application in a given datacenter, Windows Azure will automatically load balance incoming requests to those instances. But let’s set up a profile by using the Windows Azure Traffic Manager to globally load balance.

# Set up a geo-loadbalance between the two using the Traffic Manager

$profile = New-TrafficManagerProfile -ProfileName bhitneyps `

                         -DomainName ($globaldns + ".trafficmanager.net")

 

$endpoints = @()

$endpoints += New-TrafficManagerEndpoint -DomainName ($servicename_we + ".cloudapp.net")

$endpoints += New-TrafficManagerEndpoint -DomainName ($servicename_nc + ".cloudapp.net")

 

# Configure the endpoint Traffic Manager will monitor for service health

$monitors = @()

$monitors += New-TrafficManagerMonitor -Port 80 -Protocol HTTP -RelativePath /

 

# Create new definition

$createdDefinition = New-TrafficManagerDefinition -ProfileName bhitneyps -TimeToLiveInSeconds 300 `

                                    -LoadBalancingMethod Performance -Monitors $monitors -Endpoints $endpoints -Status Enabled

                                   

# Enable the profile with the newly created traffic manager definition

Set-TrafficManagerProfile -ProfileName bhitneyps -Enable -DefinitionVersion $createdDefinition.Version

This is more straightforward than it might seem. First we are creating a profile, which is essentially asking what global DNS name we would like to use. We can CNAME our own DNS name (such as www.mydomain.com) if we’d like, but the profile will have a *.trafficmanager.net name. Next we are setting up endpoints. In this case, we are telling it to use both the North Central and West Europe deployments as endpoints.  

Set up the monitor

Next, we will set up a monitor. In this case, the Traffic Manager will watch Port 80 of the deployments, requesting the root (“/”) document. This is equivalent to a simple HTTP GET of the webpage, but it opens up possibilities for custom monitoring pages. If these requests generate an error response, the Traffic Manager will stop sending traffic to that location.

When it is complete, we can browse to this application by going to “bhitneyglobal.trafficmanager.net.” There you have it—from nothing to a geo-load-balanced, redundant, scalable application in a few lines of script!

Now let’s tear it all down!

# Cleanup

Remove-TrafficManagerProfile bhitneyps | Get-OperationStatus -WaitToComplete

Remove-Deployment -Slot production -serviceName $servicename_nc

Remove-Deployment -Slot production -serviceName $servicename_we | Get-OperationStatus -WaitToComplete

Remove-HostedService -serviceName $servicename_nc

Remove-HostedService -serviceName $servicename_we | Get-OperationStatus -WaitToComplete

Remove-StorageAccount -StorageAccountName $storagename_nc

Remove-StorageAccount -StorageAccountName $storagename_we | Get-OperationStatus -WaitToComplete

The full script and zip download can be found in the Script Repository.

Happy scripting!

~Brian

Thank you, Brian, for sharing your time and knowledge. This is an awesome introduction to an exciting new technology.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Aid in Security Forensics


Summary: Microsoft Scripting Guy, Ed Wilson, discusses using Windows PowerShell to aid in security forensic analysis of processes and services on a compromised system.

Hey, Scripting Guy! Question Hey, Scripting Guy! It seems that somewhere I read that you have your CISSP certification, so I expect that you know about security. I am wondering, do you know anything about using Windows PowerShell for forensic analysis of potentially compromised servers?

—RM

Hey, Scripting Guy! Answer Hello RM,

Microsoft Scripting Guy, Ed Wilson, is here. Today is a holiday in the United States, so I decided to celebrate by getting up early and checking the email sent to scripter@microsoft.com. The Scripting Wife is also up (it is always a little eerie when she is up before noon on her day off). I have no idea what she is up to, but she did volunteer to make a pot of tea for me. She mixed up a bit of English Breakfast tea, a half spoon of organic hibiscus flower, and a half spoon of lemon grass. She topped it off with a cinnamon stick. I have to say she did an excellent job. If you are anywhere near Atlanta, you should definitely make plans to attend the Atlanta TechStravaganza. I am making two presentations, and there is an entire Windows PowerShell track (the Scripting Wife will be there too). One thing that is unique this year (although they did it during the SQL Saturday in Atlanta) is that Hal Rottenberg and Jon Walz will record the PowerScripting Podcast live during the last session of the day. As shown in the photo that follows, we had a TON of fun during SQL Saturday, and the session during the Atlanta TechStravaganza should be no different.

Photo

PowerShell is PowerShell is PowerShell

RM, the key thing to remember, whether you are doing security forensics, Exchange Server administration, Office automation, or anything in between, is that Windows PowerShell is Windows PowerShell is Windows PowerShell. This means that all of the Windows PowerShell best practices still apply. One of those Windows PowerShell best practices is to preserve the object. The object-oriented nature of Windows PowerShell is one of the revolutionary features of the language, and it is a major contributor to its ease of use.

Note   When doing any type of computer forensics, a major principle is to avoid making any changes to the system. Therefore, as a crucial first step, you should use a tool such as the Windows Sysinternals Disk2vhd utility so that you can be assured of not changing things like file access times on the original system.

Therefore, in keeping with the object-oriented nature of Windows PowerShell, you want to use techniques that preserve the object for as long as possible.

Preserving process and service objects

I feel a bit sad when I see people save process or service information to a text file, and then watch as they create a hundred-line script that invokes various complex regular expressions to parse the files as they attempt to compare two of them to pull out information. With security work, what you accomplish prior to a problem arising determines what you can do after the problem manifests itself. For example, if you have obtained periodic baselines, you have a great reference for comparison as you attempt to discover the differences between your pristine system and the compromised machine with which you now work. If you do not have a baseline, the next best thing is to build a duplicate system (hopefully from some backup device), and use that as your reference point.

So what am I talking about? Well, remember that you want to preserve objects so further analysis is easy. The best way to preserve process objects and service objects is to export them to XML. The following two commands export process and service information to an XML format (and thus preserve their object-oriented nature). The commands take a few seconds to work because the file is stored on a network file share.

Get-Process | Export-Clixml -Path \\hyperv1\shared\forensics\EDPROC.XML

Get-Service | Export-Clixml -Path \\hyperv1\shared\forensics\EDService.XML

Comparing process and service information

In doing security forensics, it is typical to want to compare one “snapshot” with another “snapshot.” What you need to do is to compare the baseline (or what you are using for your baseline) with the information that is gathered from the compromised system. The following command reads the baseline of process information into a variable, and it also reads the delta snapshot into a variable. Next, it uses Compare-Object to find the difference between the two process snapshots.

$edproc = Import-Clixml -Path \\hyperv1\shared\Forensics\EDPROC.XML

$edproc1 = Import-Clixml -Path \\hyperv1\shared\Forensics\EDPROC1.XML

Compare-Object $edproc $edproc1 -Property processname

The commands and the output associated with running the three commands are shown in the image that follows.

Image of command output

If you only need to determine the number of changes between the baseline and the delta snapshot, pipe the results to the Group-Object cmdlet. This technique is shown here.

PS C:\> Compare-Object $edproc $edproc1 -Property processname | group sideindicator

 

Count Name                      Group

----- ----                      -----

    2 =>                        {@{processname=notepad; SideIndicator==>}, @{proc...

    1 <=                        {@{processname=audiodg; SideIndicator=<=}}

Because you are working with rich objects (not mere text), the normal properties of the object continue to exist.

To drill in on one of the newly created processes, pipe the variable that contains the new process to the Where-Object, and pick it out of the collection. To view all of the information about the process, send the results to the Format-List cmdlet. That command is shown here.

$edproc1 | where { $_.processname -eq 'notepad'} | fl *

This technique is illustrated in the image that follows.

Image of command output

A number of properties from the process object appear to merit additional investigation. Some of the properties that interest me are StartTime, PrivilegedProcessorTime, Path, FileVersion, and the memory information.

Some of the properties contain other objects, such as the Threads property. To examine that more fully, store the resultant object in a variable, and then drill down into it. This technique is shown here.

$notepad = $edproc1 | where { $_.processname -eq 'notepad'}

$notepad.Threads

The commands and the associated output from the commands are shown here.

Image of command output

Use the same technique to compare your service information.
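For example, assuming a baseline export named EDService.XML (created earlier) and a hypothetical delta snapshot named EDService1.XML, the comparison looks like this:

$edsvc = Import-Clixml -Path \\hyperv1\shared\Forensics\EDService.XML

$edsvc1 = Import-Clixml -Path \\hyperv1\shared\Forensics\EDService1.XML

Compare-Object $edsvc $edsvc1 -Property name, status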

Note   When you take your snapshots (either the baseline or the delta), ensure that you use administrative rights. This is because some information is not available to a non-privileged account.

RM, that is all there is to using Windows PowerShell to compare service and process information. Security Week will continue tomorrow when I will talk about using Windows PowerShell to analyze Security event logs.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Perform Offline Analysis of Security Logs


Summary: Microsoft Scripting Guy, Ed Wilson, discusses using Windows PowerShell to dump and to analyze event logs—including security logs.

Hey, Scripting Guy! Question Hey, Scripting Guy! I often need to process Windows event logs when I am called to do a forensic investigation of a server. One of the problems with saving the event log so that I can look at it later is that I end up with a log that is pretty well useless because I do not have the required application DLLs installed on my computer. Is there a way that Windows PowerShell can help me do this without me having to install a bunch of admin tools on my laptop?

—RG

Hey, Scripting Guy! Answer Hello RG,

Microsoft Scripting Guy, Ed Wilson, is here. I have been preparing my presentations for the Atlanta TechStravaganza, which happens this Friday June 1, 2012 at the Microsoft Office in Alpharetta. The Scripting Wife is busy getting ready—she made an appointment at the spa for a pedicure, manicure, and massage. I think she is going to the spa with a group of her friends, and they may be going out for lunch and shopping afterwards. I am always a bit leery when shopping trips involve stopping for meals. For me a shopping trip generally involves about five minutes online (unless it is going to a computer super store…but that is different).

Note   This is the second in a series of four Hey, Scripting Guy! blogs about using Windows PowerShell to facilitate security forensic analysis of a compromised computer system. The intent of the series is not to teach security forensics, but rather to illustrate how Windows PowerShell could be utilized to assist in such an inquiry. The first blog discussed using Windows PowerShell to capture and to analyze process and service information.

First, dump the event logs

The first thing to do if you plan to perform detailed analysis of the security logs is to dump them into a format that facilitates later processing with Windows PowerShell.

Note    If you save the event log as an EVTX type of file, when you open it back up in the Event Viewer utility tool, your machine may require certain DLL files to provide the replacement strings for the various events. These DLL files are typically installed with the various management utilities that come with certain applications.

To dump the event log, you can use the Get-EventLog and the Export-Clixml cmdlets if you are working with a traditional event log such as the Security, Application, or System event logs. If you need to work with one of the trace logs, use the Get-WinEvent and the Export-Clixml cmdlets.

Note    For more information about working with event logs, review this collection of blogs about using Windows PowerShell to work with event logs; there is some great information in them.

To dump all of the events in the Application log to an XML file that is stored on a network share, use the following syntax:

Get-EventLog -LogName application | Export-Clixml \\hyperv1\shared\Forensics\edApplog.xml

If you want to dump the System, Application, and Security logs into XML files on a network share, use the following syntax.

Note    The % symbol is an alias for the Foreach-Object cmdlet. It is often used when working interactively from the Windows PowerShell console, although its use in a script would not necessarily be appropriate. For more information, read Best Practice for Using Aliases in PowerShell Scripts.

$logs = "system","application","security"

$logs | % { get-eventlog -LogName $_ | Export-Clixml "\\hyperv1\shared\Forensics\$_.xml" }

The previous commands retrieve the three classic event logs and export them in XML format to a network share. The commands and the associated output (there is no output) are shown in the image that follows.

Image of command output

Second, import the event log of interest

To parse the event logs, use the Import-Clixml cmdlet to read the stored XML files from your shared network location. Store the results in a variable. Next, you can use any of the normal Windows PowerShell cmdlets you would use when parsing event logs (Where-Object, Group-Object, and Select-Object are three of the main cmdlets that I use). The following two commands first read the exported security log contents into a variable named $seclog, and then the five oldest entries are obtained.

$seclog = Import-Clixml \\hyperv1\shared\Forensics\security.xml

$seclog | select -Last 5

One thing you must keep in mind is that after you export the security log to XML, it is no longer protected by anything more than the NTFS and share permissions that are assigned to the location where you store everything. By default, an ordinary user does not have permission to read the security log. As seen in the previous image, when you start the Windows PowerShell console as an administrator, all event logs are dumpable.

The following image illustrates this situation. In a Windows PowerShell console launched as a normal non-elevated user, the command to read the Security event log fails with an access denied error message. The next command reads the security event log from the stored XML, stores the resulting event log into a variable, and then displays the five oldest entries from the log.

Image of command output

It is, therefore, imperative that appropriate NTFS permissions and share permissions protect the offline versions of these event logs.

Drill into a specific entry with Format-List

To view the entire contents of a specific event log entry, choose that entry, send the results to the Format-List cmdlet, and choose all of the properties. This technique is shown here.

$seclog | select -first 1 | fl *

The contents of a specific event log entry varies, depending on the provider. The following audit entry details the special privileges that are assigned to a new login. (Interestingly enough, this specific entry was the result of opening the Windows PowerShell console with administrative rights.)

Image of command output

The message property contains the SID, account name, user domain, and privileges that are assigned for the new logon. This property is a string; therefore, it can be parsed by using normal string techniques. The image that follows illustrates obtaining only the message property from the newest event log entry, and using the GetType method to determine the exact data type. These are the two commands that I used:

($seclog | select -first 1).message

(($seclog | select -first 1).message).gettype()

The commands and the output associated with the commands are shown in the following image.

Image of command output
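
Because the message is a single string, all of the normal string operators apply to it. The following command is a minimal sketch that splits the newest message into lines and keeps only the lines that mention a privilege; the pattern is an assumption based on the label-and-value layout of a 4672 entry.

($seclog | select -first 1).message -split "`r?`n" | ? { $_ -match 'Privilege' }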

At times, you merely need to do a bit of analysis work: How often is the SeSecurityPrivilege privilege mentioned in the message property? To obtain this information, pipe the contents of the security log to the Where-Object cmdlet to filter the events, and then send the results to the Measure-Object cmdlet to determine the number of events, as shown here:

$seclog | ? { $_.message -match 'SeSecurityPrivilege'} | measure

If you want to see which event IDs are associated with the entries that contain SeSecurityPrivilege in their text, use the Group-Object cmdlet to group the matches by the EventID property. The command that does this is shown here:

$seclog | ? { $_.message -match 'SeSecurityPrivilege'} | group eventid

Because importing the event log from the stored XML into a variable results in a collection of event log entries, the Count property is also present. Use the Count property to determine the total number of entries in the event log, as shown here.

$seclog.Count

These three commands and their associated output are shown in the image that follows.

Image of command output

Dig into the logs with Windows PowerShell standard techniques

I like to examine an event log for event distribution, which answers the question, "What is causing all of these events?" and provides a direction for further analysis. The easy way to do this is to use the Group-Object cmdlet and the Sort-Object cmdlet. The use of these two cmdlets is shown here. (Remember that the $seclog variable contains the offline security log that we dumped at the beginning of this blog, stored in XML format, and reconstituted by using the Import-Clixml cmdlet.) When I am merely attempting to get an idea of event distribution, I use the NoElement switch, which omits the grouped entries themselves and keeps only the counts and names in the output.

$seclog | group eventid -NoElement | sort count

The command to group the security events by event ID and the results from the command are shown in the following image.

Image of command output

Interestingly enough, this image tells me that there are 2280 events with the event ID 4672. But of those 2280 events, we know that only 1925 entries mention the granting of SeSecurityPrivilege. What does all this mean? Who knows, but at least with Windows PowerShell, you have the tools at your fingertips to surface this information, and you have your experience as an IT Pro to draw your own conclusions.
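
If you want the noisiest event IDs first, or the two counts side by side, the same cmdlets can be combined. The following commands are a minimal sketch that builds on the preceding examples; the 4672 event ID and the SeSecurityPrivilege text come from the earlier discussion.

$seclog | group eventid -NoElement | sort count -Descending | select -First 5
$all = ($seclog | ? { $_.EventID -eq 4672 } | measure).Count
$priv = ($seclog | ? { $_.EventID -eq 4672 -and $_.message -match 'SeSecurityPrivilege' } | measure).Count
"{0} of {1} 4672 events mention SeSecurityPrivilege" -f $priv, $all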

RG, that is all there is to using Windows PowerShell to dump event logs and to perform offline analysis. Security Week will continue tomorrow when I will talk about MD5 hash analysis of files and folders.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 
