Hey, Scripting Guy! Blog

Scope in Action


Summary: Microsoft PowerShell MVPs, Don Jones and Jeffery Hicks, talk about the impact of Windows PowerShell scope when creating tools.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

This excerpt is from Learn PowerShell Toolmaking in a Month of Lunches
  By Don Jones and Jeffery Hicks

Photo of book cover

More than likely, your toolmaking projects will be on the complex side, and if you don’t understand scope, you may end up with a bad tool. Scope is a system of containerization. In some senses, it’s designed to help keep things in Windows PowerShell from conflicting with one another. In this blog, which is based on Chapter 5 of Learn PowerShell Toolmaking in a Month of Lunches, you get to see scope in action.

If you ran a script that defined a variable named $x, you’d be pretty upset if some other script also used $x and somehow messed up your script. Scope is a way of building walls between and around different scripts and functions, so that each one has its own little sandbox to play in without fear of messing up something else.

There are several elements within Windows PowerShell that are affected by scope:

  • Variables
  • Functions
  • Aliases
  • PSDrives
  • PSSnapins (but oddly not modules, so as things migrate mainly to modules and away from PSSnapins, this won’t matter much)

The shell itself is the top-level, or global, scope. That means that every new Windows PowerShell window you open is an entirely new, standalone, global scope—with no connection to any other global scope. The ISE lets you have multiple global scopes within the same window, which can be a bit confusing. In the ISE, when you click the New PowerShell tab on the File menu, you’re creating a new Windows PowerShell runspace—which is equivalent to opening a new console window. Each of those tabs within the ISE is its own global scope. The following image shows what that looks like in the ISE. Note that it’s the top, rectangular tabs that represent separate global scopes. The rounded tabs that hold script files live within that runspace, or global scope.

Image of menu

Each script that you run creates its own script scope. If a script calls a script, the second script gets its own script scope. Functions have their own scope, and a function that contains a function gets its own scope. As you can imagine, this can result in a pretty deep hierarchy, which the following image illustrates with a few global scope examples. There’s even terminology for the scopes’ relationships: a scope’s containing scope is called its parent; any scopes contained within a scope are its children. So the global scope is only a parent (because it’s the top-level scope), and it contains children.

Image of scope examples

So here’s the deal: If you create a variable within a script, that variable belongs to that script’s scope. Everything inside that same scope can “see” that variable and its contents. The scope’s parent can’t see the variable.

Any child scopes, however, have an interesting behavior. Imagine a script named C:\Tools.ps1, in which we create a variable named $computer. Within that script, we have a function named Get-OSInfo. (Sound familiar?) If Get-OSInfo attempts to access the contents of $computer, the operation will work.

But if Get-OSInfo attempts to change the contents of $computer, it will create a new variable, also named $computer, within its own scope. From then on, the function will be accessing its private version of $computer, which will be independent of the $computer in the script scope. This, as you can imagine, can be crazy confusing, so let’s see it in action to clarify. The following listing is a script that will help demonstrate scope.

Listing 1: Script.ps1 demonstrating scope 

$var = 'hello!'

function My-Function {
    Write-Host "In the function; var contains '$var'"
    $var = 'goodbye!'
    Write-Host "In the function; var is now '$var'"
}

Write-Host "In the script; var is '$var'"
Write-Host "Running the function"
My-Function
Write-Host "Function is done"
Write-Host "In the script; var is now '$var'"

Let’s run that and check out the results:

PS C:\> .\script.ps1
In the script; var is 'hello!'
Running the function
In the function; var contains 'hello!'
In the function; var is now 'goodbye!'
Function is done
In the script; var is now 'hello!'

Try it now… 

Please, definitely run this script on your own—we want you to see the results for real, right in front of your eyes. It’ll make it all clearer.

Read through the script’s output. Notice that at the start of the script, $var contains hello! because that’s what the first line in the script set it to. Then the function runs, and it sees that $var contains hello! That’s because $var doesn’t exist in the function’s scope, so when it tries to access $var, Windows PowerShell goes to the scope’s parent. Lo and behold, there’s a $var there! So that’s what the function sees.

But then the function assigns goodbye! to $var. Windows PowerShell sees that $var still doesn’t exist in the function’s scope, so it creates the variable and puts goodbye! into it. There are now two copies of $var running around: one in the function’s scope and one (which still contains hello!) in the script’s scope. The global scope is still clueless about either of these; there’s no $var in its scope, and it can’t see the variables of its child scopes.
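If a function genuinely needs to change the script-level variable, Windows PowerShell provides scope modifiers. The following is a minimal sketch (our addition, not part of the book excerpt) that uses the $script: modifier; save it as a .ps1 file and run it to see the difference:

$var = 'hello!'

function Set-Var {
    # Reading still falls through to the parent scope, as described earlier
    Write-Host "In the function; var contains '$var'"
    # Writing with the script: modifier changes the script-scoped variable
    # instead of creating a private copy in the function's scope
    $script:var = 'goodbye!'
}

Set-Var
Write-Host "In the script; var is now '$var'"

This time the last line reports goodbye!, because the function wrote to the script scope instead of creating its own copy.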

You have seen that scope creates walls between scripts and functions so that each one has its own space to play in without fear of messing up something else. The elements within Windows PowerShell that are affected by scope are variables, functions, aliases, PSDrives, and PSSnapins. We demonstrated that if you create a variable within a script, that variable belongs to that script’s scope. Everything inside that same scope can see that variable and its contents, but the scope’s parent can’t see the variable.

~Don and Jeffery

Here is the code for the discount offer today at www.manning.com: scriptw7
Valid for 50% off Learn PowerShell Toolmaking in a Month of Lunches and SharePoint 2010 Owner's Manual
Offer valid from April 7, 2013 12:01 AM until April 8, midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Do One Thing and Do It Well


Summary: Microsoft PowerShell MVPs, Don Jones and Jeffery Hicks, talk about a fundamental tool design consideration.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

This excerpt is from Learn PowerShell Toolmaking in a Month of Lunches 
  By Don Jones and Jeffery Hicks

Photo of book cover

Here’s a basic tenet of good Windows PowerShell tool design: do one thing, and do it well. Broadly speaking, a function should do one—and only one—of these things:

  • Retrieve data from someplace
  • Process data
  • Output data to some place
  • Put data into some visual format meant for human consumption

This fits well with the command-naming convention in Windows PowerShell: if your function uses the verb Get, that’s what it should do: get. If it’s outputting data, you name it with a verb like Export or Out. If each command (okay, function) worries about just one of those things, it’ll have the maximum possible flexibility.

For example, let’s say we want to write a tool that will retrieve some key operating system information from multiple computers and then display that information in a nicely formatted onscreen table. It’d be easy to write that tool so that it opened Active Directory, got a bunch of computer names, queried the information from them, and then formatted a nice table as output.

The problem?

Well, what if tomorrow we didn’t want the data on the screen but rather wanted it in a CSV file? What if one time we needed to query a small list of computers rather than a bunch of computers from the directory? Either change would involve coding changes, probably resulting in many different versions of our tool lying around. Had we made it more modular and followed the basic philosophy we just outlined, we wouldn’t have to do that. Instead, we might have designed the following:

  • One function that gets computer names from the directory
  • One function that accepts computer names, queries those computers, and produces the desired data
  • One function that formats data into a nice onscreen table

Suddenly, everything becomes more flexible. That middle function could now work with any source of computer names: the directory, a text file, or whatever. Its data could be sent to any other command to produce output. Maybe we’d pipe it to Export-CSV to make that CSV file or to ConvertTo-HTML to make an HTML page. What about the onscreen table we want right now? We’re betting Format-Table could do the job, meaning we don’t even have to write that third function at all—less work for us!
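To make that flexibility concrete, here is a sketch of how such modular tools might compose; Get-CompanyComputerName and Get-OSInfo are hypothetical function names used only for illustration:

# Today: a quick onscreen table (Format-Table does the display work)
Get-CompanyComputerName | Get-OSInfo | Format-Table -AutoSize

# Tomorrow: the same middle function feeds a CSV file instead
Get-CompanyComputerName | Get-OSInfo | Export-Csv -Path osinfo.csv -NoTypeInformation

# Or query an explicit list of computers rather than the directory
Get-OSInfo -ComputerName SERVER1, SERVER2 | ConvertTo-Html | Out-File computers.html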

So let’s talk about function design. We’re going to suggest that there are really three categories of functions (or tools): input, functional, and output.

Input tools

Input tools are the functions that don’t produce anything inherently useful, but are rather meant to feed information to a second tool. So a function that retrieves computer names from a configuration management database is an input tool. You don’t necessarily want the computer names, but there might be an endless variety of other tools that you want to send computer names to—including any number of built-in Windows PowerShell commands.

Here’s a good example of how to draw a line between your functions. Let’s say you’re writing a hunk of commands intended to retrieve computer names from your configuration management database. Your intent today is to query some WMI information from those computers—but aren’t there other tools that need computer names as input? Sure! Restart-Computer accepts computer names. So do Get-EventLog, Get-Process, Invoke-Command, and a dozen other commands. That’s what suggests (to us, at least) that the functionality for getting names from the database should be a standalone tool. It could potentially feed a lot more than only today’s current needs.

Windows PowerShell already comes with a number of input tools. Sticking with the theme of getting computer names, you might use Import-CSV, Get-Content, or Get-ADComputer to retrieve computer names from various sources. To us, this further emphasizes the fact that the task of getting computer names is a standalone capability, rather than being part of another tool.

Functional tools

This is the kind of tool you’ll be writing most often. The idea is that this kind of tool doesn’t spend time retrieving information that it needs to do its main job. Instead, it accepts that information via a parameter of some kind—that parameter being fed by manually entered data, by another command, and so on.

So if your functional tool is going to query information from remote computers, it doesn’t internally do anything to get those computers’ names; instead, it accepts them on a parameter. It doesn’t care where the computer names come from—that’s another job.

When it’s been given the information it needs to operate, a functional tool does its job and then outputs objects to the pipeline. Specifically, it outputs a single kind of object, so that all of its output is consistent. This functional tool also doesn’t worry about what you plan to do with that output—it simply puts objects into the pipeline. This kind of tool doesn’t spend a nanosecond worrying about formatting, about output files, or about anything else. It does its job, perhaps produces some objects as output, and that’s it.

Note   Not all functional tools will produce output of any kind. A command that just does something—perhaps reconfiguring a computer—might not produce any output, apart from error messages if something goes wrong. That’s fine.
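As a rough sketch of that contract (the function name, parameter, and property choices here are ours, for illustration only), a functional tool accepts its input on a parameter, does its one job, and emits one consistent kind of object:

function Get-OSBuild {
    param(
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string[]]$ComputerName
    )
    process {
        foreach ($computer in $ComputerName) {
            $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computer
            # Emit one consistent object type; no formatting, no output files
            New-Object PSObject -Property @{
                ComputerName  = $computer
                OSVersion     = $os.Version
                OSBuildNumber = $os.BuildNumber
            }
        }
    }
}

Because it writes objects to the pipeline, the same tool can feed Where-Object, Sort-Object, Export-CSV, or a formatting command without any changes.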

Output tools

Output tools are specifically designed to take data (in the form of objects), which has been produced by a functional tool, and then put that data into a final form. Let’s stress that: final form. We looked up final in our dictionary, and it says something like, “pertaining to or coming at the end; last in place, order, or time.” In other words, when you send your data to an output tool, you’re finished with it. You don’t want anything else happening to the data. You want to save it in a file or a database, or display it onscreen, or fax it to someone, or tap it out in Morse code…whatever. Windows PowerShell verbs for this include Export, Out, and ConvertTo, to name a few.

Consider the inverse of this philosophy: If you have a tool that’s putting data into some final form, like a text file or an onscreen display, that tool should be doing nothing else. Why?

Consider a function that we’ve created, named Get-ComputerDetails. This function gets a bunch of information from a bunch of computers. It then produces a pretty, formatted table on the screen. That’s a text-based display. Doing so means we could never do this:

Get-ComputerDetails | Where OSBuildNumber -le 7600 |
Sort ComputerName | ConvertTo-HTML | Out-File computers.html

Why couldn’t we do that? Because, in this example, Get-ComputerDetails is producing text. Where-Object, Sort-Object, and ConvertTo-HTML can’t deal with text—they deal with objects. Get-ComputerDetails has put our data into its final form, meaning—according to the dictionary—that Get-ComputerDetails is “coming at the end” and should be “last in place.” Nothing can come after it—meaning we have less flexibility.

A better design would have Get-ComputerDetails produce only objects, with a second command, perhaps called Format-MyPrettyDisplay, to handle the formatting. That way we could get our originally desired output:

Get-ComputerDetails | Format-MyPrettyDisplay

But we could also do this: 

Get-ComputerDetails | Where OSBuildNumber -le 7600 |
Sort ComputerName | ConvertTo-HTML | Out-File computers.html

This would allow us to change our minds about using Format-MyPrettyDisplay from time to time, instead sending our data objects on to other commands to produce different displays, filter the data, create files, and so on.

This blog discussed the basics of good Windows PowerShell tool design. A function should perform only one of the following actions:

  • Retrieve data from someplace
  • Process data
  • Output data to some place
  • Put data into a visual format meant for human consumption

We talked about three different categories of functions, or tools: input, functional, and output.

~Don and Jeffery

Here is the code for the discount offer today at www.manning.com: scriptw7
Valid for 50% off Learn PowerShell Toolmaking in a Month of Lunches and SharePoint 2010 Owner's Manual
Offer valid from April 7, 2013 12:01 AM until April 8, midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Using PowerShell Aliases: Best Practices


Summary: Microsoft Scripting Guy, Ed Wilson, demystifies some of the confusion surrounding using Windows PowerShell aliases.

Microsoft Scripting Guy, Ed Wilson, is here. “Don’t use aliases!” I hear this all the time from various people on the Internet. I am constantly asked at Windows PowerShell user groups, at TechEd, and at community events such as Windows PowerShell Saturday, if it is alright to use an alias.

What’s the big deal anyway?

An alias is a shortcut name, or a nickname, for a Windows PowerShell cmdlet. It enables me to type a short name instead of a long name. Is this a big deal? You bet it is. Windows PowerShell ships with around 150 predefined aliases. The longest cmdlet name is 30 characters long. Yes, that is right—30 characters. This is a very long command name. If I have to type New-PSSessionConfigurationFile very many times, I am definitely going to seek an alias. Luckily, npssc is available to do the job.

Note  I used the following code to determine the length of cmdlet names and their associated aliases.

gal | select definition, @{label="length"; expression={$_.definition.length}} | sort length

What about tab expansion?

Tab expansion helps to reduce the typing load when working with Windows PowerShell. One of the issues with tab expansion is that in Windows 8, there are over 2000 cmdlets and functions. Therefore, it takes more tabs to expand the correct cmdlet name. In the previous example, rather than having to type 30 characters to get access to the New-PSSessionConfigurationFile cmdlet, I can type New-P and hit the Tab key a few times. On my laptop, I have to press the Tab key four times before New-PSSessionConfigurationFile appears on the command line.

Typing the five characters of New-P and pressing the Tab key four times is nine keystrokes to enter the New-PSSessionConfigurationFile command. The npssc alias is only five keystrokes, so I save four keystrokes every time I use the alias instead of Tab expansion.

Use aliases when working interactively at the console

For me, it is a best practice to use Windows PowerShell aliases when I am working interactively in the Windows PowerShell console. This is because it is the best way to reduce the amount of typing. It also reduces the amount of memorization needed. For example, is it Get-ChildItem or Get-ChildItems? I do not need to remember either one, because I can use LS, DIR, or GCI as an alias when calling that particular cmdlet.

Note One of the most basic mistakes I see with beginners who are just learning Windows PowerShell is that they do not use Tab expansion, nor do they use aliases. Instead they attempt to type the entire Windows PowerShell cmdlet name—and invariably, they get it wrong. So use aliases, or use tab expansion, but do not attempt to type the long cmdlet names.

In addition, if there is a Windows PowerShell cmdlet that I use on a regular basis that does not have a currently defined alias, I like to create an alias and store it in my Windows PowerShell profile. In this way, I make sure I have easy access to any Windows PowerShell cmdlet regardless of the length of the actual cmdlet name.
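For example, a single line in my profile does the job (the alias name gel is purely illustrative):

# Add to $PROFILE so the alias is defined in every new session
Set-Alias -Name gel -Value Get-EventLog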

When not to use aliases

With all the goodness that aliases bring to the table, I might be inclined to use aliases all the time. There is a disadvantage, however. Aliases can be hard to read. Some aliases make sense: Sort for Sort-Object, Where for Where-Object. Others, such as sv, sbp, sc, and rv, are rather obscure. One of the nice things about Windows PowerShell code is that it is very readable. Therefore, Get-Service does not need much explanation—it returns service information. But gsv needs a bit of explanation before I know that it is an alias for Get-Service. So, what I gain in speed of typing, I lose in ease of understanding.

When working interactively at the Windows PowerShell console, the primary purpose is to accomplish something. I want to get the task completed accurately and in a timely manner. I do not want to expend any extra effort to accomplish the task. When I close the Windows PowerShell console, everything I typed is lost (unless I have enabled the transcript or exported my command history).

On the other hand, when I write a Windows PowerShell script, the purpose is to have something I can use over and over again. So I am creating an artifact that has intrinsic value, and that I can use as a management tool. The goal here is reusability, not speed of development and execution. Therefore, I do not want to use aliases in my script because it hinders readability and understanding.

Note   A fundamental tenet of script development is that the better I can understand my script, the fewer errors it will contain, and the easier it will be to fix any errors that may arise. In addition, because scripts are reusable, it also will be easier to modify the script in the future. Time spent in script development is an investment in the future.

This does not mean I have to give up the ease of using aliases when I am writing a Windows PowerShell script. I wrote a function that I include in my Windows PowerShell profile that replaces all aliases with the actual cmdlet name: Modify the PowerShell ISE to Remove Aliases from Scripts. In this way, I have the ease of being able to use Windows PowerShell aliases, with the readability of full cmdlet names later.

Join me tomorrow when I will talk about more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Easily Find PowerShell Aliases for Cmdlets


Summary: Learn how to easily find Windows PowerShell aliases for cmdlets.

Hey, Scripting Guy! Question How can I find if there is an alias for a specific cmdlet by using Windows PowerShell 3.0?

Hey, Scripting Guy! Answer Use the Get-Alias cmdlet and the Definition parameter to look for aliases for a specific cmdlet:

PS C:\> Get-Alias -Definition get-command

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Alias           gcm -> Get-Command

Using PowerShell Functions: Best Practices


Summary: Microsoft Scripting Guy, Ed Wilson, talks about some best practices for using Windows PowerShell functions.

Microsoft Scripting Guy, Ed Wilson, is here. Windows PowerShell functions are really powerful, and at the same time, they are incredibly simple to create. This makes Windows PowerShell functions flexible and functional. But this flexibility also means that there is a lot of misunderstanding.

A simple function

At the low end (in terms of readability, functionality, features, and so on), a Windows PowerShell function can be created on a single line interactively at the Windows PowerShell console. The minimum elements required to create a function are:

  • The Function keyword
  • The name of the function
  • A script block

That is it. This means that the following is a legitimate function:

function a {}

It does not do anything, but it is legitimate. Running the code at a command prompt in the Windows PowerShell console creates the function. I can then pipe output to it, and even verify that it exists on the Windows PowerShell function PS drive. The following script illustrates these concepts.

PS C:\> function a {}
PS C:\> gps | a
PS C:\> dir function:a

CommandType     Name                                               ModuleName
-----------     ----                                               ----------
Function        a

Adding functionality to the function

I often need to get a view of data, or a snapshot of data, before I return all of the data. Typically, I pipe the data to the Select-Object cmdlet and pick the last three entries in the data. The following script illustrates this technique (gps is an alias for the Get-Process cmdlet, and select is an alias for the Select-Object cmdlet).

PS C:\> gps | select -Last 3

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    207       9     1340       3064    40            1848 WUDFHost
    405      19     3636      10012    95            1904 WUDFHost
    214      18     3336       7852    89            2880 ZeroConfigService

I use this type of code when I am troubleshooting or simply perusing the status of a computer. Because I have established a pattern that pipes data to the Select-Object cmdlet and chooses the last three items, I can put this into a function that accepts pipelined input and outputs the last three items.

Because I am writing the function interactively in the Windows PowerShell console, and because I will be using it a lot, I give it a really short name. Here I call it “l” (as in the lowercase letter “L”). Inside the script block, I use the automatic variable $input, which is the input piped into a function.

The $input variable only exists inside the context of a function, and only for the time the function is called. If I check the value of $input outside of the function, it is empty. So what I pass to the function is then piped to the Select-Object cmdlet, and the last three items are returned from the function. The function is shown here.

function l {$input | select -Last 3}

To use the function, I pipe results to the function. The following script selects the last three processes.

PS C:\> gps | l

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    207       9     1340       3064    40            1848 WUDFHost
    405      19     3636      10012    95            1904 WUDFHost
    214      18     3336       7852    89            2880 ZeroConfigService

I can select the last three services (gsv is an alias) as shown here.

gsv | l

Or maybe I want to look at the last three entries in the event log as shown here.

Get-EventLog application | l

I can even use the range operator and select the last three numbers. This command is shown here.

1..10 | l

These commands and their associated output are shown in the following image.

Image of command output
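A natural refinement (our sketch, not part of the original post) is to make the count configurable while keeping three as the default:

function l {
    param($Count = 3)            # default remains the last three items
    $input | select -Last $Count
}

gps | l              # last three processes, as before
gsv | l -Count 5     # last five services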

Best Practices Week will continue tomorrow when I will continue talking about functions.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Easily See the Content of Your Function


Summary: Use the Get-Content cmdlet to read the content of your Windows PowerShell function.

Hey, Scripting Guy! Question How can I see the content of a function that is on my system, which I did not write?

Hey, Scripting Guy! Answer Use the Get-Content cmdlet and point it to the Function: PowerShell drive. The following example gets the content of the more function.

Get-Content Function:\more

Accepting Arguments for PowerShell Functions: Best Practices


Summary: Microsoft Scripting Guy, Ed Wilson, talks about the best practices surrounding accepting input for a Windows PowerShell function.

Microsoft Scripting Guy, Ed Wilson, is here. April in the Carolinas is a special time. In fact, it is my favorite time of the year here. This is because the weather is invariably mild. This week, it has been sunny with moderate temperatures, mild humidity, and clear skies. The Scripting Neighbors tell me it is perfect golf weather. It is also perfect “sit on the lanai and write Windows PowerShell scripts” weather. Although I have never had much luck with putting a small ball into an even smaller hole with equipment not designed for that purpose, I can compute the trajectory and force necessary to accomplish the task with a one-line Windows PowerShell command.

Passing a value to a Windows PowerShell function

If I have a function that I need to pass a value to, I can use the automatic variable $args. This makes the function easy to write and easy to use. The following function uses the three required elements: it calls the function keyword, provides a name, and creates a script block that contains code.

function myfunction
{
 "the computer name is $args"
}

In the Windows PowerShell ISE, I run the script (I do not have to save the code into a .ps1 file), and the function loads into memory. I can then call the function directly in the execution pane (the dark blue box that follows) and pass a value to the function when I call it. The command line is shown here:

myfunction $env:COMPUTERNAME

The image that follows illustrates creating the function, using $args in the script block, and calling the function from the execution pane.

Image of command

Anything I add following the name of the function populates the $args variable. In the command that follows, I pass the value mred to the function. Interestingly, I do not have to supply quotation marks when passing the value.

PS C:\> myfunction mred
the computer name is mred

PS C:\> myfunction "mred"
the computer name is mred

I can also use the output from the Get-WmiObject cmdlet for input. Therefore, the following code uses WMI to return the computer name and to pass it to the MyFunction function.

PS C:\> myfunction (gwmi win32_computersystem).name
the computer name is EDLT

One thing to keep in mind is that when I use the $args automatic variable as illustrated in the MyFunction function, I cannot pipe input to the function. This can be a bit of a problem, because it is hard to troubleshoot: no error arises. This is shown here.

PS C:\> $env:COMPUTERNAME | myfunction
the computer name is

If I want to pipe input to my function, I use the $input automatic variable. The only change that is required to my function is to change $args to $input, as shown here.

function afunction
{
 "the computer name is $input"
}

I then pipe the input to the function by using the command shown here.

PS C:\> $env:COMPUTERNAME | afunction

If I attempt to provide positional input to the function instead of piping the input, no error arises, but no value passes either.

The command and associated output are shown in the image here.

Image of command output

Best Practices Week will continue tomorrow when I will talk some more about Windows PowerShell functions.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Find All Windows PowerShell Functions


Summary: Easily find all the Windows PowerShell functions.

Hey, Scripting Guy! Question How can I find all Windows PowerShell functions easily?

Hey, Scripting Guy! Answer Use the Get-Command cmdlet, and specify the CommandType function:

Get-Command -CommandType function


Named Arguments for PowerShell Functions: Best Practices


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using named arguments in Windows PowerShell functions.

Microsoft Scripting Guy, Ed Wilson, is here. If I go to the trouble of writing a Windows PowerShell script, I generally do not use unnamed arguments (such as $args, as I illustrated yesterday in Accepting Arguments for PowerShell Functions: Best Practices). Instead I create named arguments for my functions. It is just so much more powerful, and so much more flexible. Besides, I can still pass values positionally in an unnamed fashion if I wish to do so.

Create a named argument in five easy steps

In yesterday's blog, I said that there are only three requirements to create a function in Windows PowerShell:

  1. The Function keyword
  2. The name of the function
  3. A script block

To create a named argument in a Windows PowerShell function, I need only two additional things:

  1. The Param keyword
  2. A variable to hold the argument inside a pair of parentheses

The following script illustrates this technique:

Function myfunction
{
 Param($myargument)
 "This value of `$myargument is $myargument"
}

To use MyFunction, I first have to run the script. This loads the function into memory and makes it available via the function PSDrive. Because I have not saved the script containing the function, when I run the script, it appears in the Console pane below the Script pane. When the script runs, the Windows PowerShell prompt returns, and I can call the function by typing the name of the function. I then supply a value for the argument by typing it. This is shown in the image that follows.

Image of script

Keep in mind that tab expansion works here. So I do not have to type the entire name of MyFunction, nor do I need to type the complete name of MyArgument. In fact, I only had to type my and press the Tab key to get the MyFunction command onto the command line. When I type the hyphen (-) for the named argument (parameter), a pop-up list appears, as shown in the following image.

Image of script

The advantage of using named arguments (parameters) is that I do not need to name the parameter if I do not want to. I can use it as a positional parameter. In this manner, it behaves like an unnamed argument ($args). This is shown here.

PS C:\> myfunction "this is a  string"

This value of $myargument is this is a  string

Because creating named parameters in Windows PowerShell is so easy, and because using the Param keyword is the entry into the world of advanced functions, I never use $args in a Windows PowerShell script. Because $args is an automatic variable that becomes available in certain circumstances, it is more difficult to understand: nothing has been created in the script. It is just there.

On the other hand, because the Param block is declared and available for inspection, it makes sense, and it is easier to understand. If I begin with a script that uses $args and I later decide that I need to add functionality, I will have to add a Param block to get access to advanced features.
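To illustrate where the Param keyword leads, the same block can grow typed, validated, defaulted parameters with almost no extra work. This is a minimal sketch of our own, not from the original post:

Function myfunction
{
 Param(
   [Parameter(Position = 0)]
   [ValidateNotNullOrEmpty()]
   [string]$myargument = 'default value'   # used when no argument is passed
 )
 "This value of `$myargument is $myargument"
}

myfunction                  # uses the default
myfunction -myargument hi   # named
myfunction hi               # positional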

Join me tomorrow when I will welcome guest bloggers Yuri Diogenes and Tom Shinder back with the second installment in their security series. If you want to refresh your memory, check out their first installment in the series:

Security Series: Using PowerShell to Protect Your Private Cloud Infrastructure

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Microsoft Script Explorer: Next Steps


Summary: The next steps for Microsoft Script Explorer are revealed.

For those who are familiar with Microsoft Script Explorer for Windows PowerShell, you know that we haven't released additional updates since we published the release candidate (RC) in August 2012. Over the past few months, we have been talking with customers and partners and taking a hard look at the adoption rate of the RC in terms of the number of downloads and the feedback we've received.

One of the results of this analysis was that the adoption and usage of the pre-release versions of Script Explorer were not at the level we had hoped for. Part of this stems from the fact that customers already have a number of options in the market for discovering and sharing Windows PowerShell scripts, and most customers appear content to continue using these existing mechanisms. As a result, we've decided not to bring Script Explorer to RTM.

In the meantime, we will start winding down the Script Explorer project. This will be a gradual process to allow time for existing users to move to other tools. We'll start by removing the RC package from the Download Center this week.

For those who have already downloaded pre-release versions and are actively using Script Explorer in their environments, we will continue to operate the back-end script aggregation service that is used by Script Explorer for a few more months. We plan to turn off the service on June 14, 2013.

To help with migrating from Script Explorer to other solutions, we would like to highlight a few of the existing options that you might find useful for discovering and sharing Windows PowerShell scripts:

Microsoft Script Center

PowerShell Code Repository (PoshCode)

PowerShell.com

PowerShell.org

PowerShellCommunity.org

PowerShell Plus and PowerShell Scripts by Idera

PowerGUI by Quest

PowerShell Studio by SAPIEN

Microsoft Developer Network

CodePlex

Bing

There are lots of other options. Feel free to add your favorites in the following Comment box.

Thanks,

The Windows PowerShell Team

Security Series: Using PowerShell to Protect Your Private Cloud Infrastructure - Part 2


Summary: Microsoft senior technical writer, Yuri Diogenes, and knowledge engineer, Tom Shinder, talk about using Windows PowerShell to protect a Windows Server 2012-based cloud infrastructure. 

Microsoft Scripting Guy, Ed Wilson, is here. Guest bloggers Tom Shinder and Yuri Diogenes are back with Part 2 of their series about security. This series includes three blogs where the authors describe examples about how you can leverage Windows PowerShell to automate tasks.

Take it away, guys…

In our first blog of this series, Using PowerShell to Protect Your Private Cloud Infrastructure, we defined the essential characteristics of cloud computing, briefly discussed some cloud security challenges, and started exploring network protection by using platform capabilities in Windows Server 2012. The first network security scenario we described covered protection against eavesdropping attacks by leveraging the SMB Encryption feature available in Windows Server 2012. In this blog, we will discuss how to protect a private cloud infrastructure against rogue services.

Scenario 2 – Protecting against rogue services

As described in the paper Leveraging Windows Server 2012 Capabilities to Address Private Cloud Security Concerns – Part 2, you need to be concerned about protecting tenants against rogue services that could interrupt their workload. Because network isolation is a fundamental requirement for a private cloud, many times this concern is already mitigated for cross-tenant communication; however, traffic within the same tenant network is still vulnerable to this type of attack if nothing is done.

Traffic that belongs to the same tenant network can initially be viewed as “trusted”; however, as a security professional, you should never trust any traffic, even if it comes from your own corporate network. The risk that an internal threat will provision a service that affects the entire tenant’s network is something that should be addressed from the private cloud infrastructure perspective. One classic example of potential network disruption is a rogue DHCP server that sends leases for bogus IP addresses on the tenant’s network.

One great new feature in Windows Server 2012 that can be used to mitigate this scenario is DHCP Guard. The Hyper-V Virtual Switch can use this feature to block DHCP offers from virtual machines that were not authorized to distribute IP addresses.

Note This is a feature of the Hyper-V Virtual Switch, so don’t get confused when we say “authorized.” This is not the same as DHCP Authorization in Active Directory. DHCP Guard does not need Active Directory, and it is implemented on the virtual switch level.

To demonstrate this scenario, we will use the configuration that follows.

Image of flow chart

Scenario definition: Contoso, Ltd. has a cloud infrastructure, and as part of the security policy for their private cloud they need to prevent rogue DHCP servers from distributing IP addresses on their network.

Scenario constraint: The plan is to implement this capability throughout the entire cloud infrastructure, but for now they are going to deploy only on the Financial Tenant virtual machine (the Red virtual machine in the previous image).

To use DHCP Guard, you need to select the virtual machines that must have this feature enabled. You can use the Get-VM command to visualize all your virtual machines, as shown here:

Image of command output

Tip   In a multitenant environment, you might want to use a name suffix that can identify the tenant. For example, “[Finance] Windows 8”. This will help you query the virtual machine’s name by using the command Get-VM *finance*.

It is important to note that the DHCP Guard feature is disabled by default, which means that every virtual machine you create will have this functionality disabled. You can use the Get-VMNetworkAdapter command to obtain this information, as shown here:

Image of command output

You can retrieve information about only this setting by using Get-VMNetworkAdapter -VMName RogueDHCP | Format-List DHCPGuard. The result of this command is shown here.

Image of command output

In a private cloud infrastructure, you (as an administrator) should be aware of which virtual machines will be provisioned to be a DHCP server (which in almost all cases is likely to be none of them because tenants should not be providing infrastructure services). Therefore, the correct assumption is that all virtual machines should have this feature on except the ones that will act as a DHCP server.

Unfortunately, it is not possible to change the default setting in Hyper-V to force all virtual machines that you create to have DHCP Guard enabled. You need to create the virtual machine and then use the following command to enable it.

Image of command output
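Although the original command appears only as an image, it is along the lines of the following sketch, which uses the Set-VMNetworkAdapter cmdlet and its DhcpGuard parameter (the virtual machine name comes from the naming tip earlier; the exact command in the image may differ):

# Enable DHCP Guard on the Financial Tenant virtual machine
Set-VMNetworkAdapter -VMName "[Finance] Windows 8" -DhcpGuard On

# Verify the setting
Get-VMNetworkAdapter -VMName "[Finance] Windows 8" | Format-List DhcpGuard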

In this blog of a three-part series, we discussed a scenario where the private cloud administrator wants to protect his cloud infrastructure against rogue DHCP attacks. The next blog in this series will discuss protection against MAC Spoofing and Router Advertisement attacks.

See you next time!

~Yuri Diogenes, senior technical writer, SCD iX Solutions Group
Twitter: @YuriDiogenes

~Tom Shinder, knowledge engineer, SCD iX Solutions Group
Twitter: @TomShinder

Photo of book cover


Thank you, Tom and Yuri. Join us again in two weeks for the final installment in a most excellent series.

Tomorrow I have a guest blog from Bob Stevens, who will share a script that pulls comments from Windows PowerShell scripts.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Determine the Status of Your DHCP Server Audit Log


Summary: Use Windows PowerShell to determine the status of your DHCP server audit log.

Hey, Scripting Guy! Question How can I use Windows PowerShell to determine the status of my DHCP server audit log in Windows Server 2012?

Hey, Scripting Guy! Answer Use the Get-DHCPServerAuditLog cmdlet and specify the server name:

Get-DhcpServerAuditLog -ComputerName DHCP1

 

Weekend Scripter: Pick Comments from a PowerShell Script


Summary: Guest blogger, Bob Stevens, shares a script to pick out comments from a Windows PowerShell script.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a new guest blogger, Bob Stevens. I made Bob’s virtual acquaintance recently when I did a Live Meeting presentation to the Twin Cities PowerShell User Group.

Here is Bob’s contact information:

Blog: Help! I’m Stuck in My Powershell!
Twitter: @B_stevens6
LinkedIn: Robert Stevens

The floor is yours, Bob…

As a local Help Desk technician, I run into many repetitive support tasks. From low toners to Internet Explorer issues, I have done it all. About three months ago I discovered that I could use PowerShell to automate a number of these tasks, thereby freeing my time for some of the more unusual issues. Fortunately, all of the computers at my site have Windows PowerShell 2.0 installed, so it was a matter of ensuring that they can run scripts. This can be done by using the Set-ExecutionPolicy cmdlet to set the policy to Unrestricted in the following manner:

Set-ExecutionPolicy Unrestricted

When the work is done, switch it back to the organization default with:

Set-ExecutionPolicy RemoteSigned

Set-ExecutionPolicy governs which scripts can be run on any given system. By setting it to Unrestricted, I am removing all restrictions. I switch it back to RemoteSigned to prevent users from running scripts that can potentially damage a system, thereby presenting a potential for data loss.

Fast forward three months and multiple scripts later…

It dawned on me that I would need to create documentation for each of these scripts. Documentation provides my coworkers with the insight they need to understand the purpose and functionality of my work and to pick up where I left off should the script need to be altered. Thankfully, like a good Windows PowerShell scripter, I commented liberally throughout my scripts to ensure that I knew where I would need to alter it in the future. To this end I started working on a script to pull comments. I knew that I habitually use single line comments for documentation and block comments for commenting out blocks of code. Happy accident because the script I devised only pulls the first and the last lines of a block comment.

Note   For simplicity, I name all directories a variation of “foo” and all input files a variation of “foo.*”. You can set foo as whatever you want.

As usual I started with setting my location:

Set-Location "C:\foo"

Next we set our source script file as a variable. I use variables in my scripts to enable me to change one line of code, rather than five lines, thereby reducing the chance of an error. Variables are defined by the dollar sign ($).

$script = "foo.ps1"

We need to define an output variable as $out. Note that I used the $script variable in the variable value. This will result in the file name of foo.ps1 comments.txt. This is to differentiate between output files. For this to work, there must be a space between $script and comments.txt:

$out = "$script comments.txt"

Now that we have defined the output file name, we need to create the output file itself, and I do this with the New-Item cmdlet. Of course, we need to define the object as a file—otherwise we get a dialog box asking us if it’s a file or folder:

New-Item "$out" -ItemType File

Now our preparation is complete:

Set-Location "c:\foo"

$script = "foo.ps1"

$out = "$script comments.txt"

New-Item "$out" -ItemType File

We need to pull the content of our source file with the Get-Content cmdlet:

Get-Content $script

The next line prevents Windows PowerShell from appending the output to output that already exists in the variable by creating an empty array. This is done with the array (@) operator, followed by parentheses:

$comments = @()

This is where the magic takes place. We need to use the Select-String cmdlet. This command requires two parameters: Pattern and Path. Both values need to be quoted if you are not using a variable to define them. Because we are searching for comments, we are going to select strings that contain “#”:

Select-String -Pattern "#" $script

This is a bit more complex, and it requires stringing three commands together, so we use the pipe (|) operator. The pipe operator merely states do “this” and then do “that” with the output of “this.” A pipe operator should always have a space before and after it:

Select-String -Pattern "#" $script |

For our purposes, we are going to say, “For each object, do this.” Coincidentally (or not), Microsoft decided to add a Foreach-Object cmdlet for just this purpose! And everything that you are doing with Foreach-Object must be in braces to group them together:

Select-String -Pattern "#" $script |
Foreach-Object {}

When we string commands like this together, formatting is important—not for functionality, but for readability. Now we need to collect the full line where “#” appears. It is important to note that you must use “+=”, or you will end up with an empty file:

Select-String -Pattern "#" $script |
Foreach-Object {
$comments += $_.line
}

We need to tell Windows PowerShell to grab everything after “#” in that string. This is tricky, but it can be done with the context.postcontext definition:

Select-String -Pattern "#" $script |
Foreach-Object {
$comments += $_.line
$comments += $_.context.postcontext
}

We now need to tell Windows PowerShell to extract everything that we just defined within the Foreach-Object curly brackets:

Select-String -Pattern "#" $script |
Foreach-Object {
$comments += $_.line
$comments += $_.context.postcontext
}

$comments

Finally, we dump our output into the output file $out with the Set-Content cmdlet:

Select-String -Pattern "#" $script |
Foreach-Object {
$comments += $_.line
$comments += $_.context.postcontext
}

$comments | Set-Content $out

The complete script looks something like this:

Set-Location "c:\foo"

$script = "foo.ps1"

$out = "$script comments.txt"

New-Item "$out" -ItemType File

Get-Content $script

$comments = @()

Select-String -Pattern "#" $script |

Foreach-Object {

$comments += $_.line

$comments += $_.context.postcontext

}

$comments | Set-Content $out

The result should take the following input:

Image of script

And give you the following output (Select-String -pattern “#” shows up because it contains a “#”):

Image of command output

This saved me an hour now and countless future hours that I would spend extracting comments for documentation.
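In the spirit of toolmaking, the same logic can be wrapped in a function that takes the script path as a parameter. This is our sketch, not part of Bob’s original script:

function Get-ScriptComment {
    param([string]$Path)
    $comments = @()
    Select-String -Pattern "#" -Path $Path |
        Foreach-Object { $comments += $_.Line }
    $comments
}

# Usage: extract the comments from foo.ps1 into a text file
Get-ScriptComment -Path "C:\foo\foo.ps1" | Set-Content "C:\foo\foo.ps1 comments.txt"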

When I was verifying my work, I ran across the following note to myself stating that I referenced a Rob Campbell in another script. Some of the code made it in here.

Rob Campbell, Mjolinor: How do you extract data from a txt file with powershell. As always, input is always appreciated.

Thank you for your time. I uploaded the complete script to the Script Center Repository: Extracting Comments from a Script with PowerShell.

~Bob

Thank you, Bob, for sharing your script and your insight with us today. Join us tomorrow when Bob talks about a script he wrote to clean up user profiles. It is cool, and you do not want to miss it.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: See All PowerShell Script Execution Policies


Summary: Find the settings for all five script execution policy scopes. 

Hey, Scripting Guy! Question How can I use Windows PowerShell to see all of the script execution policies that affect the current Windows PowerShell session?

Hey, Scripting Guy! Answer Use the Get-ExecutionPolicy cmdlet with the -List parameter:

Get-ExecutionPolicy -List

Weekend Scripter: Use PowerShell to Clean Out Temp Folders


Summary: Guest blogger, Bob Stevens, talks about using Windows PowerShell to clean out temporary folders on desktops following a malware infection.

Microsoft Scripting Guy, Ed Wilson, is here. Today, we welcome back our newest guest blogger, Bob Stevens. Yesterday Bob wrote about a quick script that he developed to pick out comments from a Windows PowerShell script: Weekend Scripter: Pick Comments from a PowerShell Script.

I made Bob’s virtual acquaintance recently when I did a Live Meeting presentation to the Twin Cities PowerShell User Group.

Here is Bob’s contact information:

Blog: Help! I’m Stuck In My Powershell!
Twitter: @B_stevens6
LinkedIn: Robert Stevens

Take it away, Bob…

For a local service desk systems analyst, nothing is more frustrating than malware. Not only is it a time sink—it also has the potential to cause irreparable damage. No network that connects to the Internet is immune to it.

Most organizations have their own standard operating procedures regarding malware removal. Even so, individual technicians have their own special tweaks and tricks to increase the likelihood of success. I like to target the malware where it resides: temp folders. And after cleaning and clearing a number of workstations, it occurred to me that I could use a Windows PowerShell script to do just that, saving myself five minutes of hoping that the computer will let me open a folder.

I started by creating a list of the locations where temporary files are automatically placed by the Windows XP operating system (starting with Windows Vista, the per-user locations are under C:\Users):

  • C:\Windows\Temp
  • C:\Windows\Prefetch
  • C:\Documents and Settings\*\Local Settings\Temp
  • C:\Users\*\Appdata\Local\Temp

Now that I have defined our locations, I need to define what I want to do. For this, I create a flowchart:

Image of flowchart

I start with the Set-Location command and define the location as “C:\Windows\Temp”:

Set-Location "C:\Windows\Temp"

Now that I am located in the Windows temp folder, I need to delete the files. This can be done with the old DOS command Del, but I prefer using the Windows PowerShell cmdlet Remove-Item to standardize the script. The items need to be removed indiscriminately, so I use a wildcard character. A wildcard character is a special character that represents one or more other characters. The question mark (?) wildcard stands for one character, and the asterisk (*) wildcard stands for any number of characters. Because I do not want to discriminate between different files, I use the asterisk.

Remove-Item *

Next I tell the Remove-Item cmdlet to also remove all files in subdirectories with the -recurse switch:

Remove-Item * -recurse

And I tell it to select hidden files with the -force switch:

Remove-Item * -recurse -force

Together the two lines looked like this:

Set-Location "C:\Windows\Temp"
Remove-Item * -recurse -force

I do the same for the rest of the folders and the complete script begins to take shape:

Set-Location "C:\Windows\Temp"
Remove-Item * -recurse -force

Set-Location "C:\Windows\Prefetch"
Remove-Item * -recurse -force

Set-Location "C:\Documents and Settings"
Remove-Item ".\*\Local Settings\temp\*" -recurse -force

Set-Location "C:\Users"
Remove-Item ".\*\Appdata\Local\Temp\*" -recurse -force

Wait. Why is there an asterisk in the middle of the last path? Because wildcard characters can be used inside a path as well:

Remove-Item ".\*\Appdata\Local\Temp\*" -recurse -force

This says, “Look in all folders in this directory with the path structure that matches this.” In my case, this is all of the user profile Local Settings\temp folders.

But this looks very busy. At Ed Wilson’s suggestion, I used an array to prevent all the unnecessary jumping around with the Set-Location command. So we change our flowchart to look something like this:

Image of flowchart

Arrays are a nifty programming feature that groups a number of strings together into one variable, while remaining individual strings. They are defined much like a normal variable—they start with the variable ($) indicator followed by the array name:

$tempfolders

Just like a variable, I use the equal sign (=) to define it.

$tempfolders =

Here is where the arrays and variables differ when defining. I start with the array indicator (@):

$tempfolders = @

And I follow it with parentheses to group strings together:

$tempfolders = @()

What you put inside the parentheses is your choice. For my purposes, I fill it with temp folder paths:

$tempfolders = @( "C:\Windows\Temp\*", "C:\Windows\Prefetch\*", "C:\Documents and Settings\*\Local Settings\temp\*", "C:\Users\*\Appdata\Local\Temp\*" )

Notice that each string is neatly encapsulated by double quotation marks (“ ”) and separated by a comma and a space (, ). The quotation marks are necessary for any string with a space in it, and the comma with a space separates the values. Both are essential to define an array. Additionally you can see that each string ends with a wildcard character. This is going to remove the necessity for me to define exactly what to remove in the next line.

Now I use the Remove-Item cmdlet again—this time with the -Path parameter fed by the $tempfolders array variable:

Remove-Item $tempfolders -Recurse -Force

This line instructs Windows PowerShell to do exactly the same as previously, but for every item in the array.
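Because this one line deletes files indiscriminately, it is worth a dry run first. Remove-Item supports the standard -WhatIf parameter, which reports what would be deleted without deleting anything:

# Preview the deletions without performing them
Remove-Item $tempfolders -Recurse -Force -WhatIf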

Side-by-side, here are the two versions of the script:

Set-Location "C:\Windows\Temp"
Remove-Item * -recurse -force

Set-Location "C:\Windows\Prefetch"
Remove-Item * -recurse -force

Set-Location "C:\Documents and Settings"
Remove-Item ".\*\Local Settings\temp\*" -recurse -force

Set-Location "C:\Users"
Remove-Item ".\*\Appdata\Local\Temp\*" -recurse -force

And the array version:

$tempfolders = @("C:\Windows\Temp\*", "C:\Windows\Prefetch\*", "C:\Documents and Settings\*\Local Settings\temp\*", "C:\Users\*\Appdata\Local\Temp\*")
Remove-Item $tempfolders -Recurse -Force

 

With two lines of code, I was able to save myself between three minutes and 30 minutes of work. This is the purpose of scripting at its finest: automate repetitive tasks to allow the technician to do more in-depth work. Thank you all for reading, and as always, let me know if you have developed a better way!

~Bob

Bob, thanks again for a real world example and a great suggestion. Join us tomorrow for a blog post by Boe Prox about installing WSUS on Windows Server 2012. It is a great blog and I am sure you will enjoy it.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


PowerTip: Use PowerShell to Find the Temp Folder Path


Summary: Find the path to the temporary folder.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find the path to the temporary folder?

Hey, Scripting Guy! Answer Use the Temp variable, and obtain its value from the Env: PS drive.

$env:TEMP 
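
If you prefer the .NET route, the same location is available through the Framework (an alternative sketch, not part of the original tip):

[System.IO.Path]::GetTempPath()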

Installing WSUS on Windows Server 2012


Summary: Honorary Scripting Guy, Boe Prox, talks about installing WSUS on Windows Server 2012 via Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome back today to Honorary Scripting Guy, Boe Prox. Without further ado, here is Boe…

In a previous Hey, Scripting Guy! Blog post, Introduction to WSUS and PowerShell, I demonstrated how you can download the Windows Server Update Services (WSUS) installation file and use various parameters to customize a WSUS installation on a local or remote system. With Windows Server 2012, we get a new version of WSUS that can be installed through Server Manager and also by…you guessed it…Windows PowerShell!

We can do the installation by using the Install-WindowsFeature cmdlet and specifying the proper feature names that you would like to have installed. In the case of WSUS, we are looking only at the update services feature. I will also take a look at some of the new cmdlets that are available in the UpdateServices module, which is available in Windows Server 2012 to help configure the WSUS server.

Let’s take a look at the possible subfeatures that are available by using the Get-WindowsFeature cmdlet and specifying UpdateServices* for the Name parameter.

Get-WindowsFeature -Name UpdateServices*

Image of command output

You could try to install everything, but that will end badly for you because there will be a conflict between using the Windows Internal Database (WID) and using another SQL Server database (local or remote) to store the SUSDB database.

Image of error message

So which one should I choose? That depends on your environment and what your requirements are to support patching in your enterprise. Luckily for you, I will show examples of using the WID database or using another SQL Server database as part of the WSUS installation. After the installation, I will show one last thing you can do as a post installation that will allow you to specify a new location for the software update files and where the database is located.

Let’s start with the easiest, which is the WID database. I say easiest because it really comes down to just running a single line of code to get everything installed. Just to see what might be installed, I can use the WhatIf switch first.

Install-WindowsFeature -Name UpdateServices -IncludeManagementTools -WhatIf

Image of command output

From the looks of it, not only will WSUS be installed, but we also will be installing the WID database and some IIS components that are used for client check-ins and other things. I am confident that this is what I need, so let’s remove WhatIf and let it run again.
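
That is, the same command without the WhatIf switch:

Install-WindowsFeature -Name UpdateServices -IncludeManagementTools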

Image of command output

Wait for a bit…

Image of command output

And we are now finished with the installation. As you can see, we have a message stating that some additional configuration may be required before our WSUS server can be up and running. In this case, we still need to configure a location for the update files to be stored.

This is where wsusutil.exe will come into play. This executable is located at C:\Program Files\Update Services\Tools. Besides the usual parameters that you can use with this utility, there is another set of parameters that become available when you use the PostInstall argument.

.\wsusutil.exe postinstall /?

Image of command output

We have parameters for specifying where to store the content and where to build the database, if needed. Note that this can be used to specify a database that is local or remote (you will see this used with a remote system later).

Before I say where I want the content, I had better create a folder to store it. I don’t want all of this on my system drive, so I will create the folder on my D: drive.

New-Item -Path D:\ -Name WSUS -ItemType Directory

Now I can run the following command to configure my content directory to download and save all of the update files to D:\WSUS.

.\wsusutil.exe postinstall CONTENT_DIR=D:\WSUS

Image of command output

And with that, we have now configured the content directory on another drive to save the update files. Very simple…and I strongly recommend doing it regardless of your setup!
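
If you ever need to confirm where the content directory ended up, the setup information is recorded in the registry. This is a quick sketch based on my understanding of where WSUS stores that value, so verify the key on your own system:

Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Update Services\Server\Setup' -Name ContentDir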

Specify an alternate SQL Server

What if I want to specify a different SQL Server for saving the data instead of relying on a WID instance locally? Fortunately for us (and as you saw previously), this is an available option by using WSUSUtil.exe PostInstall. Going back to the beginning, I will perform a different installation of WSUS, this time specifying that I want a different SQL Server database.

Install-WindowsFeature -Name UpdateServices-Services,UpdateServices-DB -IncludeManagementTools

Image of command output

Now we need to use WSUSUtil again to not only specify the Content directory, but also to specify the SQL Server database that I want to use for my WSUS server. The WSUS server must be on a domain for the remote SQL Server database build to work. If it isn’t, you will get a message that states the host is unknown.

.\wsusutil.exe postinstall SQL_INSTANCE_NAME="DC1\SQL2008" CONTENT_DIR=D:\WSUS

Image of command output

And after a few minutes, the configuration has completed with the Content directory on D:\WSUS and the SUSDB database configured on my remote SQL Server.

Image of menu

Inspect the WSUS Installation State

By using my current installation of WSUS and the remote SQL Server database, we can now check the Best Practices Analyzer to see if anything else is required before we configure the WSUS server and kick off synchronization to get all of the update metadata.

Invoke-BpaModel -ModelId Microsoft/Windows/UpdateServices

Image of command output

Now let’s see the results of our scan…

Get-BpaResult -ModelId Microsoft/Windows/UpdateServices |
Select Title,Severity,Compliance | Format-List

Image of command output

With the exception that I haven’t configured the WSUS server to use a required language pack (English in my case), everything else is compliant. Now it is time to finish configuring the WSUS server and get this synchronization kicked off.

First I’ll configure the languages and tell my server where I want to synchronize. In this case I want to sync up with Microsoft Updates. After I do that, I will perform an initial synchronization to pull down all of the available categories, classifications, and possible updates that can be approved.

Note   For more information about some of the configuration properties that are set in the following code, see IUpdateServerConfiguration Properties on MSDN.

#Get WSUS Server Object
$wsus = Get-WsusServer

#Connect to WSUS server configuration
$wsusConfig = $wsus.GetConfiguration()

#Set to download updates from Microsoft Updates
Set-WsusServerSynchronization -SyncFromMU

#Set Update Languages to English and save configuration settings
$wsusConfig.AllUpdateLanguagesEnabled = $false
$wsusConfig.SetEnabledUpdateLanguages("en")
$wsusConfig.Save()

#Get WSUS Subscription and perform initial synchronization to get latest categories
$subscription = $wsus.GetSubscription()
$subscription.StartSynchronizationForCategoryOnly()

While ($subscription.GetSynchronizationStatus() -ne 'NotProcessing') {
    Write-Host "." -NoNewline
    Start-Sleep -Seconds 5
}
Write-Host "Sync is done."

You may have noticed that I didn’t have to run Import-Module before using the UpdateServices module cmdlets. This is because Windows PowerShell 3.0 supports automatic loading of the modules when a specific cmdlet from a module is used.
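
You can verify the autoloading behavior yourself. After any UpdateServices cmdlet has been run, the module appears in the session, and you can list everything it provides:

Get-Module UpdateServices
Get-Command -Module UpdateServices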

Now that we have pulled down the classifications and platforms, it is time to filter the platforms for which I want updates and the classifications I want. Your preference for platforms and classifications will vary based on your environment and requirements.

When that is done, we will configure WSUS to synchronize once a day automatically at midnight, and kick off another synchronization to pull down the update metadata (not the actual update files) from the Microsoft Update server.

#Configure the Platforms that we want WSUS to receive updates
Get-WsusProduct | Where-Object {
    $_.Product.Title -in (
    'CAPICOM',
    'Silverlight',
    'SQL Server 2008 R2',
    'SQL Server 2005',
    'SQL Server 2008',
    'Exchange Server 2010',
    'Windows Server 2003',
    'Windows Server 2008',
    'Windows Server 2008 R2')
} | Set-WsusProduct

#Configure the Classifications
Get-WsusClassification | Where-Object {
    $_.Classification.Title -in (
    'Update Rollups',
    'Security Updates',
    'Critical Updates',
    'Service Packs',
    'Updates')
} | Set-WsusClassification

#Configure Synchronizations
$subscription.SynchronizeAutomatically = $true

#Set synchronization scheduled for midnight each night
$subscription.SynchronizeAutomaticallyTimeOfDay = (New-TimeSpan -Hours 0)
$subscription.NumberOfSynchronizationsPerDay = 1
$subscription.Save()

#Kick off a synchronization
$subscription.StartSynchronization()

It may take a while to complete the synchronization. When it has completed, you can begin reviewing the available updates that your systems require. Whether you want to keep everything installed and configured on a single server or you want to keep your SUSDB database on a remote SQL Server, you will find that it is easily accomplished by using Windows PowerShell.
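
If you would rather watch the synchronization than wait blindly, the subscription object also reports progress. Here is a minimal monitoring sketch; it assumes the GetSynchronizationProgress method and its ProcessedItems and TotalItems properties behave as documented for the WSUS API, so treat it as a starting point rather than the definitive approach:

#Poll the subscription and display progress until synchronization finishes
While ($subscription.GetSynchronizationStatus() -ne 'NotProcessing') {
    $progress = $subscription.GetSynchronizationProgress()
    Write-Host ("{0} of {1} items processed" -f $progress.ProcessedItems, $progress.TotalItems)
    Start-Sleep -Seconds 10
}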

That is all for today’s blog about working with WSUS on Windows Server 2012. The next blog I have lined up will deal with using the UpdateServices module to configure the clients and approve or decline updates. In addition, we will dip our toes into the WSUS API to set up Computer Groups for the clients and to remove clients from WSUS.

~Boe

Thank you, Boe, for a great blog post. Join us tomorrow for a guest blog by Honorary Scripting Guy and Windows PowerShell MVP, Don Jones.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Get a List of Installed BPA Models


Summary: Use Windows PowerShell to obtain a list of all installed Best Practice Analyzer models.

Hey, Scripting Guy! Question How can I get a list of all the Best Practice Analyzer models that are installed on my server?

Hey, Scripting Guy! Answer Use the Get-BPAModel cmdlet with no parameters:

Get-BPAModel
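
If you want just the model identifiers to feed into Invoke-BpaModel, here is a small follow-up sketch (my addition; I am assuming each model object exposes an Id property):

Get-BpaModel | Select-Object -ExpandProperty Id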

2013 Scripting Games Start April 22!


Summary: Announcing the Windows PowerShell 2013 Scripting Games, which begin April 22, 2013.

Microsoft Scripting Guy, Ed Wilson, is here. Don Jones is with us today from the offices of PowerShell.org. Tell us what you know about the 2013 Windows PowerShell Scripting Games, Don…

The Games are coming! With an all new software platform (kindly provided by Start-Automating.com, an organization that actually uses Windows PowerShell to power our website), we’re ready to roll. You’ll find Beginner and Advanced tracks, with some new twists.

Events and tracks

Events are open only for a limited period of time. Each event will go through four phases. Dates refer to midnight GMT of the indicated day (meaning, Feb. 2 would be “Feb. 2 at midnight GMT”). The four phases are:

  • Pending: The event is not yet open.
  • Open: You can download the event scenario (as a PDF for offline use) and submit entries. You usually get about five days to do this.
  • Review: No new entries are accepted, but everyone can vote on all events.
  • Public: No entries or votes are accepted, but anyone (logged on or not) can see the entries.

Remember, the Scripting Games are now managed by PowerShell.org, a community-owned corporation that runs the PowerShell.org website. The Games are still supported by Microsoft and The Scripting Guys, but we’re an independent organization and our actions and words do not represent Microsoft.

Here’s a bit about the tracks…

Beginner track   Consists of events where the answer is usually a one-liner, or at most, a couple of lines of code. We do not usually expect to see error handling or error suppression, extensive use of variables, and so on. We recognize that entries in the Beginner track may sometimes produce errors (like if the command can’t connect to a computer), and that’s fine. Judges will typically be less impressed with overcomplicated solutions, so keep it simple.

Advanced track   Consists of events where the answer is usually an advanced function with parameters. If you don’t know what an advanced function is, the Advanced track is not for you. We expect to see more attention to detail, and more use of built-in Windows PowerShell features.

Caution   We ask that you not include any personally identifiable information in your entries. This includes your name, email address, or other contact information.

There are several ways to win this year:

  • Every time you vote on someone’s entry (giving it a score of 1 to 5, with 1 as “bad” and 5 as “good”—whatever those terms mean to you personally), you earn one pointlet. Each pointlet serves as a prize raffle ticket.
  • You can win by being the crowd favorite! That simply means more people have given you high-scoring votes as part of your Crowd Score. These aren’t professional judges, but their opinion still matters.
  • Our professional judging panel will select their Best and Worst list for each event, and they will blog about what they liked and didn’t like. If you’re in the Best list for one or more judges, your entry will be reviewed by our mighty panel of celebrity judges, who will award First, Second, and Third places.
  • We’ll recognize the winners in each event and track, in addition to the overall winners for each track.

What does it take to impress the public and earn a high crowd score? We have no idea—it’s the public. Be creative and do the right thing.

What does it take to wind up on a judge’s Best list? Have a creative approach to the problem you’re given, and consider some of the guidelines in the next section of this guide.

Important note   “Win” does not mean “prize.” Not every recognized winner will receive a tangible prize (although we’re going to try). Every winner will have the right to use a badge on their PowerShell People profile, and we’ll announce those badges after the Games complete. (Oh, you don’t have a profile? Well, if you want to compete in the Games, there’s no better rehearsal than to write the script needed to set up your PowerShell People profile!)

Prizes

We’d like to offer thanks in advance to our presenting sponsors, who are providing the majority of the prizes.

First prizes are awarded by our panel of celebrity judges. These judges will review the events that received the top community vote scores, but will use their own discretion for awarding the prizes. There are no fixed criteria for these prizes.

Note   We’ve got more prizes in the works…stay tuned to the Scripting Games site for news and announcements!

Overall winners across all events

First prize: Complimentary pass (admission only; no expenses are covered) to your choice of Microsoft TechEd North America 2013, TechEd Europe 2013, or TechEd NA 2014.

Second prize: SAPIEN Software Suite 2012 ($699 value) provided by SAPIEN Technologies

Third prize: Five ebooks (average value $200) provided by Manning Press

Event 6

First prize: PrimalScript 2012 ($349 value) provided by SAPIEN Technologies

Third prize: eBook (average value $40) provided by Manning Press

Event 5

First prize: PowerShell Studio 2012 ($349 value) provided by SAPIEN Technologies

Third prize: eBook (average value $40) provided by Manning Press

Event 4

Third prize: eBook (average value $40) provided by Manning Press

Event 3

Third prize: eBook (average value $40) provided by Manning Press

Event 2

Third prize: eBook (average value $40) provided by Manning Press

Event 1

Third prize: eBook (average value $40) provided by Manning Press

Crowd Favorite prizes

These prizes are awarded to the events with the top community vote score. We will award one prize for each event in each track.

Third prize (all events): ebook (average value $40) provided by Manning

Prizes for community voting

The top two community voters will receive a complimentary pass (admission only; no expenses are covered) to the PowerShell Summit North America 2014. “Top voters” will be identified by the quantity of votes (in either track) and by the quality (consistency, fairness) of their votes.

In addition, the following prizes will be raffled, with each vote that is cast acting as a raffle ticket:

  • Four $50.00 gift certificates to the SAPIEN Technologies online store, provided by SAPIEN Technologies
  • Twenty ebooks (average value $40) provided by Manning Press

Why you should vote

If you think you’re not qualified to vote on the entries…well actually, you are qualified. Just ask yourself, “Is this a script or command I’d want running in my production environment? Is this the work of a person I’d hire, if I had the opportunity? Did I learn something from this entry?”

Then vote with your heart. Everyone is qualified. And if you can leave a brief comment about why you voted the way you did, even better. The votes are anonymous, as are entries during the voting period, so be polite and professional, and treat others as you’d want to be treated yourself.

And remember, every vote equates to a prize raffle ticket!

Oh…this should go without saying, but we’re gonna say it anyway: don’t be mean. We do have systems in place to watch for odd voting patterns, like handing out all 1s or all 5s just to rack up pointlets. We also watch for sequence patterns and other signs of abuse. All of those things trip alarms. We also look into things manually. If we find wrongdoing, you’ll be banned from the Games for life. Seriously. Oh, we’ll talk to you about it first; we’re not mean. But we absolutely won’t stand for this system being abused.

The future of the Games is in peer review and voting. Your opinion—the opinion of someone working in a production environment—is what’s important in the real world, not the opinion of some fancy-pants judge. Part of the Games (the expert judge commentary, for example) will make you a better voter and judge in the future, and that’s how we’re going to help build a better overall Windows PowerShell community. So respect the vote.

We sure hope you’ll play along. Be sure to watch the feed on Scripting Games site for the latest news and announcements!

~Don

Thank you, Don, for this information. Stay tuned. Tomorrow we have the 2013 Scripting Games Competitors Guide.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Find the PowerShell Version


Summary: Easily find the installed version of Windows PowerShell.

Hey, Scripting Guy! Question How can I find the installed version of Windows PowerShell?

Hey, Scripting Guy! Answer There are two easy ways.

1. Use the automatic $PSVersionTable variable:

PS C:\> $PSVersionTable

Name                           Value
----                           -----
PSVersion                      3.0
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.18033
BuildVersion                   6.2.9200.16434
PSCompatibleVersions           {1.0, 2.0, 3.0}
PSRemotingProtocolVersion      2.2

2. Use the $Host automatic variable and select its Version property:

PS C:\> $Host.Version

Major  Minor  Build  Revision
-----  -----  -----  --------
3      0      -1     -1
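
For use inside scripts, the version is easiest to test through the $PSVersionTable variable. Here is a small usage sketch (my addition, not part of the original tip):

If ($PSVersionTable.PSVersion.Major -ge 3) {
    "Running Windows PowerShell 3.0 or later"
}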
