Channel: Hey, Scripting Guy! Blog

Increase Performance by Slowing Down Your PowerShell Script


Summary: Microsoft PFE, Georges Maheu, further optimizes the Windows PowerShell script he presented earlier this week.

Microsoft Scripting Guy, Ed Wilson, is here. Our guest blogger today is Georges Maheu. Georges presented a script three days ago to gather Windows services information in an Excel spreadsheet. Although the script did an adequate job for a small number of servers, it did not scale well for hundreds of servers. On day one, the script took 90 minutes to document 50 computers. On day two, it took less than three minutes. Day three went down to 43 seconds. Today, Georges wants to do even better! Here are quick links to his previous blogs:

Day 1: Beat the Auditors, Be One Step Ahead with PowerShell

Day 2: Speed Up Excel Automation with PowerShell

Day 3: Speed Up Excel Automation with PowerShell Jobs

Note: All of the files from today, in addition to the files for the entire week, are in a zip file in the Script Repository. You can read more from Georges on the PFE Blog: OpsVault.

Now, once again, here’s Georges...

There comes a time when there is no point to optimizing a script further. Moving from 90 minutes down to less than one minute was well worth the investment in time! But going from 43 seconds to 30 is not worth the effort for this script.

Today, we tackle typical issues that are encountered when collecting large volumes of information in a distributed environment. Excel can handle 1000 computers—that’s cool, but not very practical beyond those numbers.

There are a few options for dealing with large volumes of information: one could store all the information in a database and write nice reports by using queries. Personally, I could not create a decent database to save my life! Therefore, I will go to the next option, which is to use individual files for each computer.

Instead of writing the information in an Excel tab, the information will be exported to a file in .csv format by using the Export-CSV cmdlet.

First, the following lines are replaced:

$data = ($services `
    | Select-Object $properties `
    | ConvertTo-Csv -Delimiter "`t" -NoTypeInformation) -join "`r`n"

[Windows.Clipboard]::SetText($data) | Out-Null

$computerSheet.Range("a$mainHeaderRow").PasteSpecial(-4104) |
    Out-Null #Const xlPasteAll = -4104

With these lines:

$services |
   Select-Object $properties |
   Export-Csv "$script:customerDataPath\$($currentJobName).csv" `
              -Encoding ASCII -NoTypeInformation

Then, all the code that is related to Excel is removed from the script.

Most of the optimization techniques that I have shown so far can be reused, but there are a few additional challenges. One of them is, “What should I do if a computer does not respond?” With 50 computers, one can afford to run a 43-second script repeatedly until all the data is gathered. That is not so practical when 5000 computers or more are involved.

One could use the file system to keep track of computers already processed. If a CSV file exists, there is no need to collect the information again. If there is a need to update the information for a specific computer, simply delete that file and run the script again.

if (Test-Path "$customerDataPath\$($jobName).csv")
    {
    Write-Host "$jobName data has already been collected"
    }
else
    {
    Start-Job -ScriptBlock `
                {
                param($computerName);
                Get-WmiObject `
                    -Class win32_service `
                    -ComputerName $computerName
                } `
              -Name $jobName `
              -ArgumentList $computerName
    "Creating file for $computerName"
    } #if (Test-Path "$customerDataPath\$($jobName).csv")

Yesterday, running out of resources was avoided with a crude implementation of the producer–consumer design pattern. This issue needs to be revisited because the script design was based on the time it took the consumer to write the data to Excel. Writing to a file is much faster, and running out of resources could occur again.

This happens when too many jobs run at the same time. In fact, the more jobs that run concurrently, the longer each one takes to complete. This can become a vicious circle.

Image of performance data

Strangely enough, the performance of the script can be increased by slowing it down! I slow the script down a bit by using the Start-Sleep cmdlet.

if (@(Get-Job).Count -gt 15)
    {
    Start-Sleep -Milliseconds 250
    Write-Host "Slow down!" -ForegroundColor Yellow
    }

if (@(Get-Job).Count -gt 5) {Get-CompletedJobs}

Slowing down the rate at which new jobs are started allows existing jobs to finish and be processed. This helps keep the job queue within the limits of the computer’s resources. For example, the following 500-computer test run took 8 minutes and 3 seconds after it ran into resource contention.

Image of command output

After the lines were added to slow the script down, the same script completed in 3 minutes, 39 seconds.
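Putting the pieces together, the technique boils down to a dispatch loop that pauses whenever too many jobs are queued. Here is a minimal, self-contained sketch of that loop; the computer names and the trivial script block are placeholders for the real list and the real Get-WmiObject call, and the final Receive-Job stands in for the Get-CompletedJobs helper:

```powershell
# Sketch of a throttled job-dispatch loop (placeholder data, not the full script).
$computerNames = "server1", "server2", "server3"   # placeholder list

foreach ($computerName in $computerNames)
    {
    while (@(Get-Job -State Running).Count -gt 15)
        {
        Start-Sleep -Milliseconds 250              # let running jobs drain first
        }
    Start-Job -Name $computerName -ArgumentList $computerName -ScriptBlock `
        {
        param($computerName)
        "data for $computerName"                   # the real script calls Get-WmiObject here
        } | Out-Null
    }

Get-Job | Wait-Job | Receive-Job                   # collect the results from all jobs
Get-Job | Remove-Job                               # clean up the job queue
```

The while loop is the whole trick: dispatch stalls until the running-job count drops below the threshold, so the queue can never grow without bound.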

The following screen capture shows these new lines in action after restarting a 5000 computer test run. The screen capture also shows the optimization mentioned earlier by using files as indicators that a computer has already been processed:

Image of command output

This is not an exact science; you will need to experiment with these numbers based on your computer’s resources and the time it takes to get and process the information.

This latest version of the script gets these numbers:

01 minute 19 seconds for 100 computers.

01 minute 56 seconds for 250 computers.

03 minutes 39 seconds for 500 computers.

07 minutes 05 seconds for 1000 computers.

40 minutes 20 seconds for 5000 computers.

Not bad! However, there is one more thing to do to wrap up this project. All the services that run using nondefault service accounts need to be extracted.

Here is a second script that will process all those CSV files, extract that information, and store it in another CSV file.

Clear-Host
$startTime        = Get-Date
$scriptPath       = Split-Path -Parent $myInvocation.MyCommand.Definition
$customerDataPath = "$scriptPath\Services Data"
$reportPath       = "$scriptPath\Services Report"
$CSVfiles = Get-ChildItem $customerDataPath -Filter *.csv

if ($CSVfiles.count -ge 1)
    {
    Write-Host "There are: $($CSVfiles.count) data files"
    }
else
    {
    Write-Host "There are no data files in $customerDataPath." -ForegroundColor Red
    exit
    }

New-Item -Path $scriptPath -Name "Services Report" -Force -Type directory | Out-Null #Create report folder

$exceptions = @()

$CSVfiles |
    ForEach-Object `
        {
        $services = Import-Csv -Path $_.FullName
        "Processing $_"
        foreach ($service in $services)
            {
#            $service.startName
            ################################################
            # EXCEPTION SECTION
            # To be customized based on your criteria
            ################################################
            if (     $service.startName -notmatch "LocalService" `
                -and $service.startName -notmatch "Local Service" `
                -and $service.startName -notmatch "NetworkService" `
                -and $service.startName -notmatch "Network Service" `
                -and $service.startName -notmatch "LocalSystem" `
                -and $service.startName -notmatch "Local System")
                {
                Write-Host $service.startName -ForegroundColor Yellow
                $exceptions += $service
                } #if ($service.startName
            } #foreach ($service in $services)
        } #ForEach-Object

$exceptions | Export-Csv "$reportPath\Non Standard Service Accounts Report.csv" `
                         -Encoding ASCII -NoTypeInformation

$endTime = Get-Date

"" #blank line
Write-Host "-------------------------------------------------" -ForegroundColor Green
Write-Host "Script started at:   $startTime"                   -ForegroundColor Green
Write-Host "Script completed at: $endTime"                     -ForegroundColor Green
Write-Host "Script took $($endTime - $startTime)"              -ForegroundColor Green
Write-Host "-------------------------------------------------" -ForegroundColor Green
"" #blank line

This script took just under 10 minutes to run. In summary, these two optimized scripts processed 5000 computers in about one hour. Not bad!

These scripts can be adapted with minor modifications to collect just about any kind of data. Today’s script focuses on large environments while yesterday’s script is appropriate for a small to mid-size environment. The next time your manager tells you that the auditors are coming, you can sit back and smile. You will be ready.

~ Georges

Thank you, Georges. This has been a great series of blogs. The zip file that you will find in the Script Repository has all the files and scripts from Georges this week.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Grab the 2012 Scripting Games Badge!


Image of Dr. Scripto

Please show your support for the 2012 Scripting Games by linking back to us! Copy the following code and paste it into your blog!

<a href="http://blogs.technet.com/heyscriptingguy/archive/tags/2012+Scripting+Games/default.aspx"><img alt="2012 Scripting Games" src="http://blogs.technet.com/resized-image.ashx/__size/150x0/__key/communityserver-blogs-components-weblogfiles/00-00-00-76-18/8203.hsg_2D00_2_2D00_4_2D00_12_2D00_1.png" style="display: block; margin: 0px auto; border-width:0px" /></a>

 <p style="font-size: 80%; text-align: center; margin: 0px"><a href="http://blogs.technet.com/resized-image.ashx/__size/150x0/__key/communityserver-blogs-components-weblogfiles/00-00-00-76-18/8203.hsg_2D00_2_2D00_4_2D00_12_2D00_1.png" title="2012 Scripting Games--Grab this badge here!">Grab this badge here!</a></p>

The 2012 Windows PowerShell Scripting Games: All Links on One Page


Summary: The All Links on One Page for the 2012 Windows PowerShell Scripting Games is essential for monitoring the latest information about the games.

 Image of Dr. Scripto

Microsoft Scripting Guy, Ed Wilson, is here. The 2012 Windows PowerShell Scripting Games begin on April 2, 2012, and they run through April 13, 2012. The Scripting Games are the premier learning event of the year for IT pros, devs, and others who want to learn Windows PowerShell. Think you already know all there is about Windows PowerShell? Then register to compete in the Advanced category and compare yourself against some of the best Windows PowerShell people in the world. New to Windows PowerShell? You need to learn it, and learn it quick. Windows PowerShell is rapidly becoming the essential skill for IT pros, and all the new applications from Microsoft are absolutely burgeoning with Windows PowerShell features.

Just like last year, each day during the games, a new scenario will appear on the Hey, Scripting Guy! Blog. And just like last year, you will have exactly seven days (until 11:59 P.M. Pacific Standard Time) to submit your answer. An internationally recognized panel of judges (a veritable who’s who in the Windows PowerShell world) will grade your submissions. Daily leaderboards and random prize drawings will keep the interest in this exciting event to a near fever pitch for the duration of the games.

This page includes all the essential links for the 2012 Scripting Games. Add this page to your Favorites list, and check back on a daily basis. For that matter, add a tab to your browser, and make it one of your Home pages. As we progress through the games, I will be filling in the blanks.

Official Announcement

Registration

Events

  • First event appears April 2, 2012 at 1:00 A.M. Pacific Standard Time

Script Submission

  • Events begin on April 2, 2012, and they will appear on the Hey, Scripting Guy! Blog

Judges and Judging Criteria

Leaderboard

FAQ

Prizes

The Scripting Wife

  • The Scripting Wife will begin her prep work soon.

Community: Forum, Twitter, and Facebook

Our Sponsors and Official Rules

  • Support our sponsors
  • Official Rules of the 2012 Scripting Games

How to Prepare for the 2012 Scripting Games

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

2012 Scripting Games Study Guide: A Resource for Learning PowerShell


Summary: The 2012 Windows PowerShell Scripting Games Study Guide is a great resource that points to important sources for learning Windows PowerShell.

Image of Dr. Scripto

Microsoft Scripting Guy, Ed Wilson, is here. The 2012 Scripting Games happen April 2 – April 13, 2012, and they will test skills that are commonly required by IT pros in their day-to-day working activities. This year, you will have a choice to compete using Windows PowerShell 2.0 or the beta of Windows PowerShell 3.0. If you choose to use the beta, please ensure you are using the most current build available. All of the scenarios will work with a single computer, and they will not require access to server types of resources.

More important than competing online and receiving prizes and a certificate is the acquisition of new Windows PowerShell skills. The ten areas emphasized in this year’s games represent “bread and butter” type of knowledge that you will be able to use immediately. To this end, I hope that you will use this study guide, even if you do not participate in the games.

Note: Remember the Scripting with Windows PowerShell page in the Microsoft Script Center. It points to lots of great resources, including podcasts, webcasts, and even a Windows PowerShell quiz.

Working with computer hardware

Not a day goes by that the IT pro does not need to find out something about some computer somewhere. In general, working with hardware means using WMI. I have written extensively about using Windows PowerShell and WMI, and I even created a helper module to assist in exploring WMI, finding WMI classes, and other tasks like that. You can review these five pages of links to find topics that are of interest to you: Windows PowerShell and WMI

A good overview blog is the Use PowerShell to Simplify Access to WMI Data written by Microsoft PowerShell MVP, Richard Siddaway.

Another blog with good basic information is How Can I Use WMI with Windows PowerShell? 

Working with dates

Working with dates used to be a major pain in other scripting languages, but in Windows PowerShell, it is super simple. Nevertheless, I have written quite a few blogs about working with dates. The following list of blogs explores essential tasks such as formatting dates and working with culture settings: Using Windows PowerShell to work with dates.

A good basic introductory blog is Using Windows PowerShell to Work with Dates.

Another basic admin task is working with date ranges. There is a good Scripting Wife blog, Scripting Wife Uses PowerShell to Get Days until NCAA Final Four, which explains the basics of time spans.
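Time-span math really is that simple: subtracting one DateTime from another yields a TimeSpan object. Here is a tiny sketch (the two dates are arbitrary examples):

```powershell
# Subtracting two dates yields a TimeSpan object.
$finalFour = Get-Date "2012-03-31"
$today     = Get-Date "2012-03-28"

$span = $finalFour - $today     # equivalent to: New-TimeSpan $today $finalFour
$span.Days                      # → 3 (whole days between the two dates)
```

The TimeSpan object also exposes Hours, Minutes, and TotalDays properties, so the same subtraction answers most “how long until…?” questions.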

Working with ETW logs

Event Tracing for Windows (ETW) logs are the primary way that the Windows operating system logs diagnostic and tracing information. Using Windows PowerShell to work with these logs opens new avenues for exploration for the harried network administrator or developer who is tracing problems with applications. I wrote a great introductory Weekend Scripter blog: Using PowerShell to Troubleshoot Windows.

For other Hey, Scripting Guy! Blogs about this topic, see:

Working with XML

I have written several Hey, Scripting Guy! blogs about XML. There are, in fact, three very good introductory blogs:

Also refer to this list of Hey, Scripting Guy! blogs as great resources:
Windows PowerShell and XML blog posts

Some are more special purpose, such as Use PowerShell to Parse XML Exchange Audit Results, but the list is definitely worth a look.

Working with classic event logs

It should go without saying that event logs are important to network admins. But they are also important to devs and regular Windows users. There are two cmdlets to use with classic event logs: Get-EventLog and Get-WinEvent.

I have written a lot of blogs about event logs. You should definitely review this collection because there is some great information: Using PowerShell to work with event logs

Two great blogs to begin with are:

For blogs that are more specific, see:

Working with CSV files

Every time I write a new blog about using Windows PowerShell to work with CSV files, I am amazed at how easy it is to do. As it turns out, I have written quite a few blogs that provide very good information about this crucial topic: Windows PowerShell and CSV files

A great blog to begin with is Use PowerShell to Work with CSV Formatted Text.
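As a quick taste of why this is so easy, the following sketch round-trips objects through a CSV file; the file name and the sample service data are made up for the example:

```powershell
# Export objects to a CSV file, then read them back as objects.
$csvPath = Join-Path ([IO.Path]::GetTempPath()) "services-demo.csv"

$services = @(
    New-Object PSObject -Property @{ Name = "Spooler"; StartName = "LocalSystem" }
    New-Object PSObject -Property @{ Name = "MyApp";   StartName = "CONTOSO\svc" }
    )
$services | Export-Csv $csvPath -NoTypeInformation

$imported = Import-Csv $csvPath           # each CSV row comes back as an object
$imported |
    Where-Object { $_.StartName -notmatch "LocalSystem" } |
    ForEach-Object { $_.Name }            # → MyApp
```

Because Import-Csv turns each row back into an object with named properties, the usual Where-Object and Select-Object pipeline tools apply unchanged.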

Also check out these great blogs about working with CSV files:

Working with folders

Working with folders is foundational. It does not matter what one’s occupation is—nearly everyone needs to be able to create folders to organize files. Using Windows PowerShell to do this is choice. I have written quite a few blogs that talk about working with folders, as represented in this list: Working with folders

For information about using Windows PowerShell to find folder size, check out:
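For instance, computing a folder’s size is a one-pipeline job: sum the Length of every file beneath it. A minimal sketch (the path here is just the temp folder as a stand-in for your own):

```powershell
# Sum the size, in bytes, of every file under a folder.
$folder = [IO.Path]::GetTempPath()               # placeholder; use your own path

$size = Get-ChildItem $folder -Recurse -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer } |     # keep files only, skip subfolders
    Measure-Object -Property Length -Sum

"{0:N0} bytes in $folder" -f $size.Sum
```

The PSIsContainer check is the PowerShell 2.0-era way to skip directories; later versions can use the -File switch of Get-ChildItem instead.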

Working with text files

If folders are foundational, files are essential. I have written dozens of blogs about working with files. More specifically, I have also written blogs about text files.

At the most basic level, one needs to know how to take output from within Windows PowerShell and create a file from it. The Scripting Wife learned how to do that in The Scripting Wife Redirects Output and Creates a Text File.
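The basic move looks like this; the file path is arbitrary, and Out-File is interchangeable with the > redirection operator for this purpose:

```powershell
# Redirect command output into a text file, then read it back.
$path = Join-Path ([IO.Path]::GetTempPath()) "processes.txt"

Get-Process | Out-File $path                  # same effect as: Get-Process > $path
Get-Content $path | Select-Object -First 5    # peek at the first few lines
```

Out-File renders the objects exactly as they would appear on screen, which is usually what you want for a quick report; for data you plan to read back into PowerShell, Export-Csv is the better choice.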

Working with text files might also involve the tasks discussed in these blogs:

Working with services

Services make computers easy to use. They can also add complexity and open potential security issues. This is why it is important to know how to manage services. Luckily, Windows PowerShell makes working with services easy. I have written quite a few blogs about working with services (some of these blogs were also written by the community).

You will find blogs about:

The following blogs touch on two other important areas for working with services:

Working with processes

Everyone needs to know how to work with processes. Process management is an important topic, and as a result, there are quite a few blogs on the Hey, Scripting Guy! Blog that deal with it. You can learn more about working with processes in these blogs:

You might also need to know how to Use .NET Framework Classes to Explore Windows PowerShell Processes.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use the PowerShell ISE to Work with Long Commands


Summary: Learn how to use the Windows PowerShell ISE to work with long commands and make things easier to understand.

Hey, Scripting Guy! Question Hey, Scripting Guy! I don’t get the purpose of the Windows PowerShell ISE. I mean, we have the Windows PowerShell console, and most of the time, all I am doing is typing a few commands there. Why should I use the Windows PowerShell ISE when it takes at least three times as long to load on my computer as the Windows PowerShell console does?

—PG

Hey, Scripting Guy! Answer Hello PG,

Microsoft Scripting Guy, Ed Wilson, is here. Wow, it is a little more than a month until the Central Ohio PowerShell Users group hosts the first ever PowerShell Saturday event. The Scripting Wife and I are looking forward to this event. (In fact, the Scripting Wife did a lot of the organization for the event along with Wes Stahler, Ashley McGlone, and Brian Jackett.) This event will be awesome and several Windows PowerShell people are coming in from adjoining states to take part in this exclusive event. I will be presenting a day-long track that covers beginning with Windows PowerShell, and one of the things that I will talk about is the Windows PowerShell ISE.

PG, one of the things you might want to do is to read the Scripting Wife blog, The Scripting Wife Uses the Windows PowerShell ISE. It provides a fun background for my answer today.

I placed the Windows PowerShell ISE on my task bar, right next to the Windows PowerShell console; this makes it easy to use and to get to. If you do not want to do that, it is always possible to launch the Windows PowerShell ISE from inside the Windows PowerShell console by typing the command ise. This command, ise, is technically an alias. The executable name is powershell_ise.exe. The ise alias is resolved by using the Get-Alias cmdlet, as shown here.

PS C:\> Get-Alias ise | fl *

HelpUri             :
ResolvedCommandName : powershell_ise.exe
ReferencedCommand   : powershell_ise.exe
ResolvedCommand     : powershell_ise.exe
Definition          : powershell_ise.exe
Options             : ReadOnly, AllScope
Description         :
OutputType          : {System.String}
Name                : ise
CommandType         : Alias
Visibility          : Public
ModuleName          :
Module              :
Parameters          :
ParameterSets       :

This actually is a cool trick, and it is something that I have not talked about before. Every time I have created an alias, it has been for a Windows PowerShell cmdlet or a Windows PowerShell function. But I can also create an alias for an external executable. For example, I can create an alias called net for the command netsh.exe. Although it is not a huge time savings, it does illustrate the point. The commands shown here create a new alias named net. Use the net alias to enter the net shell, and then use the exit command to leave the net shell and return to the Windows PowerShell prompt.

PS C:\> New-Alias -Name net -Value netsh.exe
PS C:\> net
netsh>exit

PS C:\>

The upper pane of Windows PowerShell ISE is the script pane. Just because it is called a script pane does not mean you have to write a script. In fact, I often use it to organize Windows PowerShell commands because it lets me see a better picture of the flow than command continuation characters in the Windows PowerShell prompt. In the image that follows, I type a three-part command that retrieves process information, selects the name and process ID of each process, and displays the output in an automatically sized table.

Image of command output

Four things to keep in mind here are:

  1. Script execution does not have to be enabled. This is because no script is actually being read or executed.
  2. The commands are not saved. When the Windows PowerShell ISE is closed, a prompt appears asking if you want to save changes.
  3. When saved, the commands are considered a script; therefore, the script execution policy must be modified to permit the running of scripts.
  4. The Windows PowerShell ISE does not have a transcript tool. Start-Transcript does not work in the Windows PowerShell ISE.

The image that follows illustrates the same three commands and their associated output as it appears in the Windows PowerShell console.

Image of command output

There are also several considerations when running commands in the Windows PowerShell console:

  1. The command editing capabilities in the Windows PowerShell console are not as strong as they are in the Windows PowerShell ISE. For example, there is no undo command.
  2. When the Windows PowerShell console closes, there is no prompt to save the command. All work is lost when the Windows PowerShell console closes.
  3. To record commands and the output from those commands, use the Start-Transcript cmdlet.
  4. To create a Windows PowerShell script, redirect the commands to a text file with a .ps1 file extension.
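That last point can be sketched end to end: redirect the command text into a .ps1 file, and then invoke the file as a script. The file name here is arbitrary, and running the saved file assumes the execution policy permits scripts:

```powershell
# Turn console commands into a script file, then run it.
$scriptFile = Join-Path ([IO.Path]::GetTempPath()) "demo.ps1"

"'hello from script'" | Out-File $scriptFile   # the command text becomes the script body
& $scriptFile                                  # → hello from script
```

The call operator (&) runs the file in a child scope; dot-sourcing it instead (. $scriptFile) would run it in the current scope.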

In the Windows PowerShell ISE, when working in the script pane, you can type as many commands as required; they do not execute until you press the F5 function key (ensure that function lock is not turned on on your keyboard) or click the large green triangle under the Help menu heading.

One thing that I really like about the Windows PowerShell ISE is the ability to run only a portion of a command in the script pane. This is useful not only from a troubleshooting perspective, but also when running demos while making presentations about Windows PowerShell. When a selection of the script is highlighted, it runs when you press the F8 function key or click the small green and white icon. This is shown in the image that follows.

Image of command output

PG, that is all there is to getting started with the Windows PowerShell ISE. Windows PowerShell ISE Week will continue tomorrow when I will continue the discussion.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Learn How to Use the Free PowerShell ISE to Edit Scripts


Summary: Learn about using the Windows PowerShell ISE to edit scripts, and to modify the script execution policy in this step-by-step blog.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have a problem. I was following your blog yesterday about typing long commands inside the Windows PowerShell ISE. I saved my work because I did not want to lose it, and all of a sudden I could no longer do anything. Is this a bug?

—BW

Hey, Scripting Guy! Answer Hello BW,

Microsoft Scripting Guy, Ed Wilson, is here. This week I have been working diligently on the events for the 2012 Scripting Games. The Scripting Wife and I have also been working on the first ever PowerShell Saturday in Columbus, Ohio. At this event, I will be presenting a number of “beginning with Windows PowerShell” type of presentations. There will also be presentations about using Windows PowerShell with Active Directory, SharePoint, and even Exchange Server. There will be several special guest presentations from a world class grouping of speakers. If you are anywhere near Columbus, Ohio, you do not want to miss this event. Seating is limited, and available slots are going quickly.

Anyway, BW, one of the skills you will need for the 2012 Scripting Games is how to use a script editor. Because the Windows PowerShell ISE comes free, you may as well learn how to use it. If you feel that it is limited, keep in mind that it can be extended. I have talked quite a bit about using the Windows PowerShell ISE object model to extend its capabilities, and I have written a number of add-ins to assist me in working with modules, replacing aliases in scripts, adding command snippets, and more.

BW, after you save the code in the Windows PowerShell ISE, it becomes a script. When this happens, you run afoul of the script execution policy on your computer. In the image that follows, a simple Windows PowerShell script (now saved) does not execute because the script execution policy does not permit running scripts.

Image of command output

I have discussed the Windows PowerShell script execution policy on several occasions. In one blog, I helped the Scripting Wife set the script execution policy on her computer. On another occasion, I posted an excerpt from my best-selling Windows PowerShell 2.0 Best Practices book titled Why Would I Even Want to Create a Profile in Windows PowerShell?

Now, the network administrator might set the script execution policy via Group Policy, but it might also not be set at all. By default, Windows PowerShell disables the execution of scripts. This means you can use Windows PowerShell to run commands, but you cannot run a script. Setting the script execution policy for all Windows PowerShell hosts and for all users of the computer requires administrator rights. But a normal user can modify the execution policy for the current Windows PowerShell host and the current user by specifying the CurrentUser scope. The command to permit running scripts for the current user is shown here.

Set-ExecutionPolicy -ExecutionPolicy remotesigned -Scope currentuser -Force

In the image that follows, the previous command runs to permit the execution of Windows PowerShell scripts. The change takes effect immediately; it does not require a reboot or closing and reopening the Windows PowerShell ISE.

Image of command output

It does not take too long before output seems to clutter the output pane. In general, when I am developing a script, I like to clear the output pane after each run of the script so I can easily identify any newly appearing error messages. There are two ways to clear the output pane. The first is to use the “windshield washer” icon that appears in the image that follows.

Image of toolbar

The other way to clear the output pane is to use the same command that works for the Windows PowerShell console—the Clear-Host command. The Clear-Host command has two aliases: clear and cls. I found these aliases by using the Get-Alias cmdlet as shown here.

PS C:\Users\ed> Get-Alias -Definition clear-host

CommandType     Name                                                Definition
-----------     ----                                                ----------
Alias           clear                                               Clear-Host
Alias           cls                                                 Clear-Host

To use the Clear-Host command (or the cls or clear alias for the command), type it in the command pane, as illustrated in the image that follows.

Image of command output

If I need to edit the script, there are Cut, Copy, and Paste commands available in the tool bar. Because the Windows PowerShell ISE is a standard Windows type of application, there are also the following shortcuts to assist you in quickly editing your script with the ISE.

Command                      Keyboard shortcut
Copy                         Ctrl + C
Cut                          Ctrl + X
Paste                        Ctrl + V
Undo                         Ctrl + Z
Redo                         Ctrl + Y
Find (in script)             Ctrl + F
Find Next (in script)        F3
Find Previous (in script)    Shift + F3
Replace (in script)          Ctrl + H
Go To Line                   Ctrl + G (then type line number)
Select All                   Ctrl + A

Not all of these commands are available as buttons on the tool bar. In fact, only the most common five commands appear on the tool bar. The remaining commands are available via the Edit menu (or you can use the keyboard shortcuts to avoid having to remove your hands from the keyboard to use the mouse). The five tool bar buttons are shown in the following image.

Image of tool bar

BW, that is all there is to using the Windows PowerShell ISE to edit scripts and to set the execution policy. ISE Week will continue tomorrow when I will talk about more cool things.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Learn Keyboard Tricks to Use the PowerShell ISE Easier


Summary: Learn how to use the Windows PowerShell ISE more effectively by leveraging keyboard shortcuts.

Hey, Scripting Guy! Question Hey, Scripting Guy! I am the network administrator for a large company, and we have to go through very rigorous testing before we can download and install software on our servers. This requirement has prevented me from using many of the community based script editors. I can use the Windows PowerShell ISE because it comes with Windows, and it is therefore considered part of the operating system. Unfortunately, it seems a bit cumbersome to use. Is there a way I can use some shortcuts to make things a bit easier to use? For example, I do not like having to use the mouse when I am working because it is a bit erratic.

—WS

Hey, Scripting Guy! AnswerHello WS,

Microsoft Scripting Guy, Ed Wilson, is here. This week I have been busy at work on the 2012 Scripting Games. I have written several events now, and I am pleased with the shape the events are taking. I think this year’s games are going to be a good challenge for both beginner and advanced scripters. You should check out the 2012 Scripting Games Study Guide, and begin digging into the information that I provided about the ten areas of concentration for this year’s events. If you have time, you may also want to review study guides from the previous years. You’ll find them on the All Links on One Page. (You should bookmark this page and keep it handy for the next several months, because I will be adding links to this page on a regular basis.)

WS, this is the third blog in a series about using the Windows PowerShell ISE. On Monday, I talked about how to Use the Windows PowerShell ISE to Work with Long Commands. In that blog, I contrasted some of the advantages (and disadvantages) of using the Windows PowerShell ISE as opposed to using the Windows PowerShell console. On Tuesday, I wrote Learn How to Use the Free PowerShell ISE to Edit Scripts. In that blog, I covered the script execution policy, clearing the output pane, and a number of keyboard shortcuts that make cutting, copying, and pasting code in the Windows PowerShell ISE easier to accomplish.

Today, I want to continue the discussion about keyboard shortcuts in the Windows PowerShell ISE. To create a new script pane (or to add a new tab to the Windows PowerShell ISE), click the paper icon on the tool bar. To open a script for editing, click the folder on the tool bar. To save the script, click the floppy disk icon on the tool bar. These icons are shown in the image that follows.

Image of tool bar

Not all of the capabilities are available via the tool bar. Other capabilities, such as opening a Windows PowerShell ISE tab that works against a remote machine, are available via the File menu. Here is a list of keyboard shortcuts that perform the same actions.

Command                          Keyboard shortcut
New Script (pane)                Ctrl + N
Open Script (or other file)      Ctrl + O
Save                             Ctrl + S
Save As                          Alt + F + A
Run                              F5
Run Selection                    F8
New PowerShell Tab               Ctrl + T
Close PowerShell Tab             Ctrl + W
Close Script                     Ctrl + F4
New Remote PowerShell Tab        Ctrl + Shift + R

One of the interesting things about the Windows PowerShell ISE is the use of multiple tabs, and multiple script panes. The easiest to understand are multiple script panes. Opening the Windows PowerShell ISE automatically opens a blank script pane. If I then open a script, it appears on a new script pane, and the original blank pane still appears. If I open three scripts, there will be four script panes: the original blank script pane, and the three panes that contain the three newly opened scripts. This scenario is shown in the image that follows.

Image of tool bar

One thing to keep in mind is that the title of the untitled script pane changes based upon how many times it opens or closes. In the previous image, the untitled script pane is up to untitled11.ps1. This means 10 untitled scripts have opened and closed (perhaps they were saved as scripts, or simply discarded).

A new Windows PowerShell tab is not quite as easy to understand, because at first glance it does not appear to work. A new Windows PowerShell tab is tied to the command pane and to the output pane, but not to the script pane. In fact, a new Windows PowerShell tab does not create a new script pane at all. In the following image, you can see that the PowerShell 1 tab contains the output that is associated with the Get-Process cmdlet.

Image of command output

In the image that follows, the PowerShell 6 tab appears. It contains the results of the Get-Service cmdlet.

Image of command output

Using multiple Windows PowerShell tabs is a great way to execute long running commands, and to see the output from multiple commands. In addition, it is an excellent way to work with multiple remote systems at the same time.

WS, that is all there is to using multiple tabs and script panes in the Windows PowerShell ISE. Windows PowerShell ISE Week will continue tomorrow when I will talk about more cool things.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use Tab Expansion in the PowerShell ISE to Avoid Cmdlet Aliases


Summary: Learn how to use Tab Expansion in the Windows PowerShell ISE to avoid using cmdlet aliases and to add complete parameter names in commands.

Hey, Scripting Guy! QuestionHey, Scripting Guy! I spend a decent amount of time working with scripts, and I would like to learn how to use the Windows PowerShell ISE more effectively. I have heard that I should not use aliases in scripts, but it just takes too long to type out full cmdlet names—after all, some of the cmdlet names are ridiculously long and hard to type. I notice that you seem to use complete cmdlet names, do you have a secret, or are you just a really good typist?

—DJ

Hey, Scripting Guy! AnswerHello DJ,

Microsoft Scripting Guy, Ed Wilson, is here. Actually, DJ, I am a fairly good typist; I made an “A” in typing class when I was in school (a little slow with numbers, but other than that, I’m pretty good). I also wrote a really cool Windows PowerShell function to Modify the PowerShell ISE to Remove Aliases from Scripts. I load this function, which replaces Windows PowerShell aliases with full cmdlet names, with my Windows PowerShell ISE profile. By using this function, I am able to write my code using aliases. I then call the function, and it cleans up my code and copies the modified code to a new Windows PowerShell ISE script pane. It does not modify the original code, so there is no worry if it were to make a mistake.

Note   This is the fourth blog in a series about using the Windows PowerShell ISE.

For example, the following code works just fine; but it contains a number of aliases, and it is not a good practice to have these in a script.

gps |

select name, id -First 3 |

ft name, id -AutoSize

I run this code to ensure that it works. It does, and both the code and the output will appear in the output pane (because I have not saved this script yet). I then use the Remove-AliasFromScript function (loaded with my Windows PowerShell ISE profile) to remove all the aliases from the script.

The newly created script (without any aliases) appears in a new Windows PowerShell ISE script pane as untitled5.ps1. When I test it and decide I like the new script, I can save it with the name of my choosing. In the image that follows, you can see the original code (in the output pane), the output associated with the script, the Remove-AliasFromScript command, and the revised script in the new script pane.

Image of command output

Well, suppose that you do not want to use my Remove-AliasFromScript function. How do you avoid typing lots of long cmdlet names? The secret is to use the Windows PowerShell ISE Tab Expansion feature. Tab Expansion does not expand aliases, so if you type gps, an alias for the Get-Process cmdlet, it will not resolve gps to Get-Process (that is what my Remove-AliasFromScript function does).
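Incidentally, the alias-to-cmdlet mapping that a function like Remove-AliasFromScript relies on is exposed by the Get-Alias cmdlet. Here is a quick sketch of looking up an alias from the console:

```powershell
# The Definition property holds the full cmdlet name behind an alias.
(Get-Alias -Name gps).Definition     # Get-Process

# Going the other direction lists every alias for a given cmdlet.
Get-Alias -Definition Get-Process
```

This is handy when you run across an unfamiliar alias in someone else's script.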

What Tab Expansion does is complete the cmdlet name when you type a portion of the name. One thing to keep in mind is that Tab Expansion is not a mind reader. So, if I type Get- and then I press the Tab key, Get-ACL appears in the ISE on my computer. If I press the Tab key again, Get-Alias appears. Each time I press the Tab key, the Windows PowerShell ISE cycles through the next cmdlet in order. So, if I wanted to use Get-Process, that would be a lot of pressing of the Tab key.

The secret is to type just a little bit more of the cmdlet name prior to pressing the Tab key. On my computer, if I type Get-p and then I hit the Tab key, the first cmdlet that appears is Get-PfxCertificate. When I press the Tab key again, Get-Process appears, and I can now use the command. If I want to ensure that I go directly to the Get-Process cmdlet, I type Get-pr and then press the Tab key.

Note   Keep in mind that these examples for using Tab Expansion all depend on which modules, cmdlets, version of Windows, and roles are installed. When you have lots of cmdlets available, cycling through them with Tab Expansion begins to bog down.
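When cycling with the Tab key does bog down, another option is to ask for the whole list of candidates at once by using the Get-Command cmdlet with a wildcard (the exact results depend on what is installed on your computer):

```powershell
# List every command whose name begins with Get-Pr
Get-Command -Name Get-Pr*
```

From that list, it is easy to see how many characters you need to type before pressing the Tab key.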

Tab Expansion also works for parameters. One thing that my Remove-AliasFromScript function does not do is add in missing parameters to commands. The use of parameters is essential to learning how Windows PowerShell actually works. When I was first learning Windows PowerShell, there were many times when I thought that my pipelined input was supplied to a particular parameter, only to learn later that the pipelined input was going somewhere else. At other times, I would see a command fail, such as the one that is shown here, and I wondered why it did not work.

Stop-Process notepad

Later, I would discover a command that works, such as the one shown here.

Get-Process notepad | Stop-Process

And although the command works, it does nothing to help me learn why the one command works, and the other one does not work. Later, I figured out that the default parameters of Get-Process and Stop-Process are reversed. But if I use the parameter names, it does not matter. This is shown here.

Get-Process -Name notepad

Stop-Process -Id 3576

The command to start Notepad, get all instances of Notepad, and then stop a specific instance of Notepad is shown in the image that follows.

Image of command output

When you use the Windows PowerShell ISE, it is easy to add parameters to a command. After completing the cmdlet name (by using the Tab key or typing the cmdlet name), type a dash and press the Tab key. It will then populate the first parameter name. If this is the parameter you want, you can use it. However, if you do not want that parameter, press the Tab key again and again until you find the parameter you want.

If you find a parameter you want, but you accidentally go past it, you do not have to continue to cycle through all of the available parameters. You can hold down the Shift key and press the Tab key, and it will cycle backwards. This allows you to come back to a parameter you may have missed in the cycle. This Shift + Tab technique also works when using Tab Expansion for cmdlet names.

DJ, that is all there is to using Tab Expansion in the Windows PowerShell ISE to assist in typing in complete cmdlet names and parameter names. This also concludes Windows PowerShell ISE Week. Join me tomorrow as I reveal the 2012 Scripting Games Frequently Asked Questions (FAQ).

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


2012 Scripting Games: Frequently Asked Questions


Summary: Refer to this list of frequently asked questions to learn about the 2012 Windows PowerShell Scripting Games.

Image of Scripting Games badge

Microsoft Scripting Guy, Ed Wilson, is here. This has been an exciting couple of weeks with the announcement of the 2012 Scripting Games and Windows PowerShell Saturday event in Columbus, Ohio. In Columbus, I will be presenting a day-long beginning Windows PowerShell session, and there will also be an advanced track. The Scripting Wife will be there also. It will be a cool event. It seems that IT pros realize that knowing how to use Windows PowerShell is rapidly becoming not just a nice-to-have skill, but an essential skill. This is where the 2012 Scripting Games can help. I designed this year’s games to test ten essential skills (at basic and advanced levels). Refer to this year’s Scripting Games Study Guide for reference information about these skills.

Here are some of the questions about the Games that I saw last year via Facebook, Twitter, and scripter@microsoft.com.

The Windows PowerShell Scripting Games sound really cool. How do I sign up?

Sign up is easy. Just go to the 2012 Scripting Games page on the PoshCode site. Create your user name, and add your email address. Keep in mind that your user name will appear on the leaderboard (so do not choose something that you would be embarrassed for your mother to see). The email address is used for notification of prizes from our daily drawings. Your user name and email address are also used at the end of the games to send the valuable 2012 Scripting Games certificates.

I just realized that I have 7 days to submit my script for an event. I would like to add some things to my script, but I do not see a way to recall or delete my entry.

You are right; there is not a recall or delete function. After a script is submitted, it cannot be changed, deleted, or recalled. Make sure that the script you submit is the script that you want to have graded.

I am getting tired of browsing to review the scripts submitted on PoshCode. Is there a way to search for scripts?

Yes. In the upper right corner next to Login/Profile, there is a text box. Type search terms there, and press ENTER.

I am not on the leaderboard, but I submitted my script. What’s up with that?

The leaderboard report runs at midnight Pacific Standard Time, and if you submit your script later than that, it will be reported on the next day’s leaderboard. Also, keep in mind that we are not listing anonymous submissions on the leaderboard.

I tried to upload my script, but it is not working. What’s going on?

There are bound to be a few challenges along the way. Keep trying. We update the status of the tool on Twitter and on the Scripting Guys Facebook site. If we do not have a note that says it is down, you should let us know about your issue by Twitter. The tag for the 2012 Scripting Games on Twitter is #2012sg, and we are filtering for that tag. If you do not include that tag, we will miss your tweet. The cool thing is that you can also filter on #2012sg and catch up with everything that is going on. You might also want to filter on @ScriptingGuys and #PowerShell.

I am unhappy with my standing in the Games. What can I do?

We suggest that you pay attention to the grading guidelines (you can always find them via the 2012 Scripting Games All Links on One Page).

Pay close attention to the scenario requirements and to the expected output. Not every event requires a 500-line script; many events only need a one-liner. You may want to review some of the common errors from last year’s games. In addition, if you add things such as error handling and comments, you will gain additional points. Keep in mind that this year, you are only allowed to compete in one category—either Beginner or Advanced. Therefore, it is important to submit excellent scripts for each of the ten events.

You guys are posting stuff several times a day on your blog. How do I keep up with everything?

I sympathize with you! Last year, we published more than a hundred pages in conjunction with the 2011 Scripting Games. I anticipate nearly that many pages again this year. Use the 2012 Scripting Games tag as a filter on the Hey, Scripting Guy! Blog site. It will bring up everything. In addition, the 2012 Scripting Games All Links on One Page provides the essentials, such as links to each event and to Posh Code. However, if you only use that page, you might miss some of the other cool things.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Why Use .NET Framework Classes from Within PowerShell?


Summary: Microsoft Scripting Guy, Ed Wilson, talks about the need to use .NET Framework classes from within Windows PowerShell code.

Microsoft Scripting Guy, Ed Wilson, is here. The Scripting Wife and I had a great meeting with the Windows PowerShell Users Group in Charlotte, North Carolina. It was a script club format, so there was no set agenda, nor was there a formal presentation. This provided a great chance to talk to people, find out what they were working on, and in general, have a lot of fun. Speaking of a lot of fun, make sure you check out the first ever Windows PowerShell Saturday that will be held in Columbus Ohio on March 10, 2012. This event, limited to 100 persons, is nearly sold out. So you need to hurry if you want to take advantage of a unique opportunity to network with a great bunch of Windows PowerShell people. The Scripting Wife and I will be there, as will an all-star group of other Windows PowerShell luminaries.

One of the questions I had from a group member was about using the .NET Framework from within Windows PowerShell. I have written quite a bit about using .NET Framework classes from within Windows PowerShell. Those blogs cover working with methods, discovering properties, finding documentation, and other bread-and-butter types of issues.

One of the things that I have not talked much about is why one needs to use .NET Framework classes inside of Windows PowerShell. Keep in mind, that as a best practice, I recommend using a native Windows PowerShell cmdlet when it exists—unless there are compelling reasons for not doing so. For example, I have seen a number of Windows PowerShell scripts (for example, when I was grading the Scripting Games submissions for the last three years), where participants use .NET Framework classes when there is a perfectly good Windows PowerShell option available. Here are two equivalent commands:

[datetime]::now

Get-Date

In the image that follows, I run both commands, and you can see that the output is essentially the same. (The time indicated is three seconds later simply because it took me three seconds to run the second command.)

I can use the GetType method to verify that both commands return a System.Datetime object. These two commands are shown here.

PS C:\> ([datetime]::now).gettype()

 

IsPublic IsSerial Name                                     BaseType

-------- -------- ----                                     --------

True     True     DateTime                                 System.ValueType

 

PS C:\> (Get-Date).gettype()

 

IsPublic IsSerial Name                                     BaseType

-------- -------- ----                                     --------

True     True     DateTime                                 System.ValueType

 

PS C:\>

Because both commands return a DateTime .NET Framework class object, there is no advantage to the first command. Some may ask, what does the first command actually do? The command that appears here calls the static DateTime.Now property from the System.DateTime .NET Framework class.

[datetime]::now

The static Now property returns a System.DateTime object that represents the current local date and time—this is the same thing that the Get-Date cmdlet does. The difference? Well, the command Get-Date is much easier to read than [datetime]::now. So why do people use the static Now property? Well, I am convinced there are two reasons.

The first reason, I feel, is legitimate: .NET developers may not know that the Get-Date cmdlet exists, and they have learned that to call a static member, they put the class name in square brackets and use the double colon before the member name. As I said, this is completely legitimate. Windows PowerShell is flexible enough that you can write Windows PowerShell code as if it were C#, VB.NET, or even as if it were VBScript or Perl. Anything that helps you get the job done is fine with me—after all, Windows PowerShell is simply a tool for the vast majority of network administrators.

The second reason is more insidious. I think there are some people who simply want to use .NET Framework classes because they think it is cool, and that it makes the code appear to be more complex. Maybe they are attempting to impress their coworkers or their boss. Maybe they think that if people see things like Get-Date in a Windows PowerShell script, they will realize how easy Windows PowerShell is to use and to learn, and then they will no longer have the mantle as the “PowerShell guru.” I am all for job security, but I prefer to ensure job security by helping others maximize their potential. I prefer to show people how easy it is to use Windows PowerShell to become more productive than to attempt to obscure that fact by deliberately writing confusing code.
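If performance ever becomes the deciding factor between two equivalent commands, the Measure-Command cmdlet can settle the question empirically. This is a quick sketch; the actual timings will vary from machine to machine and from run to run:

```powershell
# Time each approach; Measure-Command returns a TimeSpan object.
$cmdlet = Measure-Command { Get-Date }
$static = Measure-Command { [datetime]::Now }

"Get-Date        : {0} ms" -f $cmdlet.TotalMilliseconds
"[datetime]::Now : {0} ms" -f $static.TotalMilliseconds
```

For a one-off call like this, any difference is lost in the noise, which is one more reason to prefer the more readable Get-Date.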

What do you think? I would love to hear from you. I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use .NET Framework Classes to Augment PowerShell when Required


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell classes to augment the native functionality of Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. It seems like the conference season is gearing up. Keep an eye on my upcoming appearances page so you can catch the Scripting Wife and me at a venue near you. My good friend Aaron Nelson (aka SQLVariant) succeeded in getting us to come speak at the SQL Server Saturday event in Atlanta. He tried last year, but we were already booked to go somewhere else. Therefore, he immediately issued an invitation for this year, and that got it on our schedule. If you are anywhere in the area of Atlanta, you should make your reservation now so that you do not miss this star-studded event. Even if you are a pure Windows PowerShell person who does not care for SQL Server, you should still attend because there will be some great Windows PowerShell sessions there.

I received several emails after yesterday’s blog, Why Use .NET Framework Classes from Within PowerShell? People were asking why I do not like using .NET Framework classes in scripts.

I did not mean to imply that there is no reason to use .NET Framework classes from within Windows PowerShell. What I meant to say was that given a choice between two equivalent commands, you should always use the native Windows PowerShell cmdlet, unless there is a sufficient reason for not using the Windows PowerShell command (such as performance or need for a specific capability).

For example, the following command returns an instance of a System.Diagnostics.Process class.

[diagnostics.process]::GetProcesses()

The output from the GetProcesses static method is exactly the same as the output from the Get-Process cmdlet. This is shown in the image that follows.

Image of command output

So, why use the GetProcesses static method from the System.Diagnostics.Process class? Well, in Windows PowerShell 1.0, the Get-Process cmdlet did not have a ComputerName parameter. Therefore, if one wanted to obtain process information from a remote computer, the options were to use the Win32_Process WMI class, or to use the GetProcesses static method. In this case, the need to retrieve process information from a remote computer clearly called for something other than use of the native Get-Process cmdlet.

In Windows PowerShell 2.0, however, this is no longer the case. Of course, if you have an old Windows PowerShell 1.0 script lying around that is working perfectly fine, there is no need to change it just because a new version of Windows PowerShell came out. We invest a lot of time and energy in backward compatibility to avoid the need to rewrite perfectly good Windows PowerShell scripts.

Another reason for using a .NET Framework class is that it might be easier than using a Windows PowerShell cmdlet. For example, it is possible to create an arbitrary date by using the Month, Day, and Year parameters of the Get-Date cmdlet. This technique is shown here.

Get-Date -Day 23 -Month 1 -Year 2011

On the other hand, it is also possible to use the System.Datetime .NET Framework class to cast a string into a datetime object. This technique is shown here.

[datetime]"1/23/11"

The difference between the commands is that the Get-Date command creates a datetime object with the current time, and the [datetime] cast creates a datetime object with a time at midnight. These two techniques are shown in the following image.

Image of command output

It is, of course, possible to include a time by using a [datetime] cast. To do this, place a space after the date, and then type a time value. This technique is shown here.

PS C:\> [datetime]"1/23/11 3:17:42 pm"

Sunday, January 23, 2011 3:17:42 PM

If no timestamp is required, the [datetime] cast works great and is more efficient than using the Get-Date cmdlet. On the other hand, if the current time is also required, I prefer to use the Get-Date cmdlet because it seems easier to do.

On the other hand, it is possible to use the Get-Date cmdlet, and directly feed it a date to create. This technique is shown here.

PS C:\> get-date 1/1/11

Saturday, January 01, 2011 12:00:00 AM

So the question is, “What is easier?” When working with Windows PowerShell, it is great to have choices. Most of the time, especially when working interactively from the Windows PowerShell console, I try to use the technique that is easiest to use. If a .NET Framework class is easiest to use, or if it does exactly what you want, use it. In the end, it is all about getting the job done with a minimum of effort. Do not get stuck in a rut—use Windows PowerShell to its fullest. That is what it is there for.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Automate SCOM Agent Installations


Summary: Guest blogger, Boe Prox, shows how to use Windows PowerShell to automate SCOM agent installations.

Microsoft Scripting Guy, Ed Wilson, is here. Last month, guest blogger, Boe Prox, wrote a series of blogs about WSUS. Today he is back to talk about SCOM.

Photo of Boe Prox

Boe Prox is currently a senior systems administrator with BAE Systems. He has been in the IT industry since 2003, and he has been working with Windows PowerShell since 2009. Boe looks to script whatever he can, whenever he can. He is also a moderator on the Hey, Scripting Guy! Forum. Check out his current projects published on CodePlex: PoshWSUS and PoshPAIG.
Boe’s blog: Learn PowerShell | Achieve More

Here’s Boe…

Recently, I was asked to write a script to run against our new SCOM servers that would automate the SCOM agent installation for servers that are being joined to the domain, and provide a report on the installations that succeeded and those that failed. This way, our SCOM administrators have a report of new systems that are managed by the agents, and they also have a way to troubleshoot and, if required, manually install the agents on the systems that report failures.

Although I am not as familiar with SCOM as the SCOM admins are, I do know my way around Windows PowerShell, and I was able to put something together that would meet all of the requirements. So without further ado, let us dive into the script that I wrote, which is also available on the Script Repository: SCOM Agent Installation and Reporting Script.

Requirements for script

Our goal for this script is to query Active Directory for all servers, query SCOM for all currently monitored servers on the network, and then filter out all of the systems that are in Active Directory so we only have unmanaged servers to work with. We can then attempt to discover those servers (a requirement before we can push an agent installation), filter out failed discoveries so only the successes remain, and finally attempt the installation of the agent on the rest of the servers. Lastly, we need to generate a report that lists Successful Installations, Failed Installations, and Failed Discoveries, and email it to the system administrators. Simple enough with Windows PowerShell!

For this script to run properly and do what we need it to do, we need to make sure that we can connect to Active Directory to gather all of the servers. We also need to ensure that we are running the script from a server that has the SCOM snap-in available. You can find this by running the following command.

Get-PSSnapin –Registered

Image of command output 

You should see the following item listed with everything else: Microsoft.EnterpriseManagement.OperationsManager.Client

This means that the SCOM snap-in is installed, and we can run this script from the server without worrying about it failing.

Digging into the script

Let us take a look at the first parts of the script.

$VerbosePreference = 'continue'

Function Get-Server {

    $strCategory = "computer"

    $strOS = "Windows*Server*"

    $objSearcher = [adsisearcher]""

    $objSearcher.Filter = ("(&(objectCategory=$strCategory)(OperatingSystem=$strOS))")

    $objSearcher.pagesize = 10

    $objsearcher.sizelimit = 5000

    $objSearcher.PropertiesToLoad.Add("dnshostname") | Out-Null

    $objSearcher.Sort.PropertyName = "dnshostname"

    $colResults = $objSearcher.FindAll()

    foreach ($objResult in $colResults) {

        $objComputer = $objResult.Properties

        $objComputer.dnshostname

    }

}

Here, I set up the VerbosePreference to “Continue” so I can track what the script is doing and where it is, in case something goes wrong. It is always good to include some sort of verbose/debug output in your scripts, so that not only you, but whoever uses your script will know what is going on during its use. Using my Get-Server function is a quick and simple way to pull a list of servers from Active Directory, and I add the DNSHostName attribute that I can use in my comparison later on. I chose the DNSHostName attribute because it matches up with the data that I will later receive when performing the SCOM query.

###User Defined Parameters

Write-Verbose ("[{0}] Reviewing user defined parameters" -f (Get-Date))

#SCOM Management Server

$SCOMMgmtServer = 'SCOMMGMT.rivendell.com'

#SCOM RMS Server

$SCOMRMS = "SCOMMGMT.rivendell.com"

#Systems to Exempt from SCOM

$Exempt = @(Get-Content Exempt.txt)

$Emailparams = @{

    To = 'boeprox@rivendell.com'

    From = 'SCOMAgentAudit@rivendell.com'

    SMTPServer ='Exch.rivendell.com'

    Subject = "SCOM Agent Audit"

    BodyAsHTML = $True

}

Here I have my “user defined” parameters, where you need to update the existing parameters that match your environment. Listed are places for the SCOM management and RMS server, in addition to an optional exempt parameter in case you have systems that you cannot (for various reasons) have the SCOM agent installed on. If you notice the $EmailParams parameter, you will see that it is actually a hash table of several parameters that will be used at the end of this script for an email notification. By the way, this is called “splatting.” Learn it, live it, love it.
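As a minimal sketch of splatting: collect parameter values in a hash table whose keys match the parameter names of the target cmdlet, and then pass the whole table by replacing the $ in the variable name with @. (The Send-MailMessage call and the $report variable below are illustrative assumptions; the actual email is sent later in Boe's script.)

```powershell
# Keys match the parameter names of the target cmdlet.
$Emailparams = @{
    To         = 'boeprox@rivendell.com'
    From       = 'SCOMAgentAudit@rivendell.com'
    SMTPServer = 'Exch.rivendell.com'
    Subject    = 'SCOM Agent Audit'
    BodyAsHTML = $True
}

# @Emailparams splats the table; additional parameters such as -Body
# can still be supplied normally alongside it.
Send-MailMessage @Emailparams -Body $report
```

Splatting keeps a command with many parameters readable and makes it easy to reuse the same parameter set in several calls.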

#Get list of AD Servers

Write-Verbose ("[{0}] Getting list of servers from Active Directory" -f (Get-Date))

$ADServers = Get-Server | Where {$Exempt -NotContains $_}

 

#Initialize SCOM SnapIn

Write-Verbose ("[{0}] Loading SCOM SnapIn" -f (Get-Date))

Add-PSSnapin Microsoft.EnterpriseManagement.OperationsManager.Client -ErrorAction SilentlyContinue

 

#Make SCOM Connection

Write-Verbose ("[{0}] Connecting to SCOM RMS Server: {1}" -f (Get-Date),$SCOMRMS)

New-ManagementGroupConnection -ConnectionString $SCOMRMS | Out-Null

 

#Connect to SCOM Provider

Write-Verbose ("[{0}] Connecting to SCOM Provider" -f (Get-Date))

Push-Location 'OperationsManagerMonitoring::'

Write-Verbose ("[{0}] Connecting to SCOM Server: {1}" -f (Get-Date),$SCOMMgmtServer)

$MgmtServer = Get-ManagementServer | Where {$_.Name -eq $SCOMMgmtServer}

 

#Get all SCOM Agent Servers

Write-Verbose ("[{0}] Gathering all SCOM managed systems" -f (Get-Date))

$SCOMServers = Get-Agent | Select -Expand NetworkName | Sort

 

#Compare list to find servers not in SCOM

Write-Verbose ("[{0}] Filtering out all Non SCOM managed systems to audit" -f (Get-Date))

$NonSCOM = @(Compare-Object -ReferenceObject $SCOMServers -DifferenceObject $ADServers | Where {

    $_.SideIndicator -eq '=>'

} | Select -Expand Inputobject)

Now we are starting to perform some actions to get this script rolling. First, I use my little function to grab a list of all of the servers in Active Directory and filter out all of the systems that I listed in my exempt list. The next part loads up the SCOM snap-in so we are able to make use of the SCOM cmdlets. Next, I make the connection to the SCOM management server that was specified earlier in the script. When we have that connection, I switch directories to the “OperationsManagerMonitoring::” provider, which is required to run the commands later in the script.  After all of this, I begin my query of SCOM for all servers currently being managed via the SCOM agent, and I use Compare-Object to filter out the servers from my Active Directory list that are already listed in SCOM. We now have our list of servers that we need to focus on to install the SCOM agent.
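The Compare-Object step is worth a closer look. In stripped-down form (with made-up server names), the '=>' side indicator flags items that appear only in the difference list, which here means servers that Active Directory knows about but SCOM does not:

```powershell
$SCOMServers = 'SERVER1','SERVER2'
$ADServers   = 'SERVER1','SERVER2','SERVER3'

# '=>' marks objects found only in the DifferenceObject list
$NonSCOM = @(Compare-Object -ReferenceObject $SCOMServers -DifferenceObject $ADServers |
    Where-Object {$_.SideIndicator -eq '=>'} |
    Select-Object -ExpandProperty InputObject)

$NonSCOM    # SERVER3
```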

#Attempt to Discover Systems

Write-Verbose ("[{0}] Configuring SCOM discovery prior to use" -f (Get-Date))

$Discover = New-WindowsDiscoveryConfiguration -ComputerName $NonSCOM -PerformVerification -ComputerType "Server"

$Discover.ComputerNameDiscoveryCriteria.getComputernames() | ForEach {Write-Verbose ("{0}: Attempting to discover" -f $_)}

Write-Verbose ("[{0}] Beginning SCOM discovery" -f (Get-Date))

$DiscResults = Start-Discovery -WindowsDiscoveryConfiguration $Discover -ManagementServer $MgmtServer

 

#Check Alert history for failed Discoveries

Write-Verbose ("[{0}] Checking for failed Discoveries" -f (Get-Date))

$alerts = @(Get-Alert -Criteria "PrincipalName = '$SCOMMgmtServer' AND MonitoringClassId='ab4c891f-3359-3fb6-0704-075fbfe36710'`

AND Name='An error occurred during computer verification from the discovery wizard'") | Where {   

    #Look for unresolved alerts

    $_.ResolutionState -eq 0

}

Here I am setting up for my attempted discovery of the servers that need the SCOM agent installed. I supply my current collection of servers to the ComputerName parameter of New-WindowsDiscoveryConfiguration, and save the resulting configuration to $Discover. I then supply this variable to the Start-Discovery cmdlet, and save the results of the discovery to $DiscResults, which looks something like this:

Image of command output

This can be used later when I prepare to push out the SCOM agent installations.

Now that I went through the discovery process, a check is performed against the SCOM management server by using Get-Alert. I supply the principal name of the management server, filter to look only for failed discoveries, and save any results that are found so they can be parsed later and added to a collection for reporting.

If ($Alerts.count -gt 0) {

    #Start processing the failed discovery alerts

    $alert = $alerts  | Select -Expand Parameters

    $Pattern = "Machine Name: ((\w+|\.)*)\s"

    $FailedDiscover = $alert | ForEach {   

        $Server = ([regex]::Matches($_,$Pattern))[0].Groups[1].Value

        Try {

            $ServerIP = ([net.dns]::Resolve($Server).AddressList[0])

        } Catch {

            $ServerIP = $Null

        }

        If (-Not ([string]::IsNullOrEmpty($Server))) {

            New-Object PSObject -Property @{

                Server = $Server

                Reason = $_

                IP = $ServerIP

            }

        }

    }

    <#

    Resolve the alerts for failed discoveries, otherwise we will have false positives that there were no failed discoveries.

    #>

    Write-Verbose ("[{0}] Resolving active alerts" -f (Get-Date))

    $Alerts | ForEach {

        Resolve-Alert -Alert $_ | Out-Null

    }

}

If failed discoveries are found in the alert log, the script digs out the Parameters property of the $alert collection, and the systems will get parsed from the log by using some regular expression magic. Then an attempt to get the IP address will be performed. The expanded Parameters property will look similar to this:

Computer verification failure for Machine Name: DC1.Rivendell.com is 0x800706BA. The RPC server is unavailable.

Any failed discoveries that are found are saved to the $FailedDiscover variable, which will be included in the email report at the end of the script. After this is done, all of the alerts that were found are resolved by the script.
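To see the regular expression at work outside of SCOM, you can run the parsing step against the sample alert text by itself:

```powershell
# Sample alert text, matching the expanded Parameters property shown above
$alertText = 'Computer verification failure for Machine Name: DC1.Rivendell.com is 0x800706BA. The RPC server is unavailable.'
$Pattern = "Machine Name: ((\w+|\.)*)\s"

# Group 1 of the first match captures the machine name
$Server = ([regex]::Matches($alertText,$Pattern))[0].Groups[1].Value
$Server    # DC1.Rivendell.com
```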

If ($DiscResults.CustomMonitoringObjects.count -gt 0) {

    #Install Agent on Discovered Servers

    Write-Verbose ("[{0}] Beginning installation of SCOM Agent on discovered systems" -f (Get-Date))

    $DiscResults.custommonitoringobjects | ForEach {Write-Verbose ("{0}: Attempting Agent Installation" -f $_.Name)}

    $Results = Install-Agent -ManagementServer $MgmtServer -AgentManagedComputer: $DiscResults.custommonitoringobjects

Now for the installation of the systems that we were able to successfully discover! If you remember, I saved the results of the discovery to the $DiscResults variable. Now I am able to use that to supply the collection of systems for the agent installation by using the CustomMonitoringObjects property of the $DiscResults collection. Note that I have saved the results of this agent installation to $Results; it will be parsed later in the script, and those results will be included in the email report.

    #Check for failed installations

    $FailedInstall = @{}

    $SuccessInstall = @{}

    Write-Verbose ("[{0}] Checking for Failed and Successful installations" -f (Get-Date))

    $Results.MonitoringTaskResults | ForEach {

 

        If (([xml]$_.Output).DataItem.ErrorCode -ne 0) {

            #Failed Installation

            $FailedInstall[([xml]$_.Output).DataItem.PrincipalName] = `

            (([xml]$_.output).DataItem.Description."#cdata-section" -split "\s{2,}")[0]

        } Else {

            #Successful Installation

            $SuccessInstall[([xml]$_.Output).DataItem.PrincipalName] = `

            Get-Date ([xml]$_.output).DataItem.Time

        }

    }

}

And by later in the script, I mean now. I first create two empty hash tables that will hold the successes and failures that are found. The results of the agent installation are first split based on the error code in the XML data. From there, if a failure is detected, the code continues to parse the failure message from the XML and places it into the $FailedInstall hash table. A failed installation result will look similar to this:

Image of command output

The task will register as a success, so I have to dig into the output XML to pull the actual error code for the agent installation. If it is a successful installation, the system is added to the $SuccessInstall hash table, which will be sent in the email report at the end of the script.
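The XML digging is easier to follow against a small stand-in document (this DataItem is fabricated for illustration; the real task output contains more fields):

```powershell
# A made-up stand-in for the task output XML
$taskOutput = @"
<DataItem ErrorCode="80070005">
  <PrincipalName>DC1.Rivendell.com</PrincipalName>
  <Description LCID="1033"><![CDATA[Access is denied.]]></Description>
</DataItem>
"@

$xml = [xml]$taskOutput                      # cast the string to an XML document
$xml.DataItem.ErrorCode                      # 80070005
$xml.DataItem.PrincipalName                  # DC1.Rivendell.com
$xml.DataItem.Description.'#cdata-section'   # Access is denied.
```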

$head = @"

<style>

    TABLE{background-color:LightYellow;border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}

    TH{border-width: 1px;padding: 5px;border-style: solid;border-color: black;}

    TD{border-width: 1px;padding: 5px;border-style: solid;border-color: black;}

</style>

"@

 

If ($SuccessInstall.Count -gt 0) {

Write-Verbose ("[{0}] Adding {1} Successful installations to report" -f (Get-Date), $SuccessInstall.Count)

$html1 = @"

<html>

    <body>

        <h5>

            <font color='white'>

                Please view in html!

            </font>

        </h5>

        <h2>

            The following servers were found in Active Directory and had the SCOM Agent successfully installed:

        </h2>

    $($SuccessInstall.GetEnumerator() | Select Name, Value | Sort Name | ConvertTo-HTML -head $head)

    </body>

</html>

"@

} Else {

    $html1 = $Null

}

 

If ($FailedInstall.Count -gt 0) {

Write-Verbose ("[{0}] Adding {1} Failed installations to report" -f (Get-Date), $FailedInstall.Count,)

$html2 = @"

<html>

    <body>

        <h5>

            <font color='white'>

                Please view in html!

            </font>

        </h5>

        <h2>

            The following servers are in Active Directory and were discovered, but the SCOM Agent failed to install:

        </h2>

    $($FailedInstall.GetEnumerator() | Select Name,Value | Sort Name | ConvertTo-HTML -head $head)

    </body>

</html>

"@

} Else {

    $html2 = $Null

}

 

If ($FailedDiscover.Count -gt 0) {

Write-Verbose ("[{0}] Adding {1} Failed Discoveries to report" –f (Get-Date), $FailedDiscover.Count)

$html3 = @"

<html>

    <body>

        <h5>

            <font color='white'>

                Please view in html!

            </font>

        </h5>

        <h2>

            The following servers are in Active Directory but failed to be discovered by SCOM:

        </h2>

    $($FailedDiscover | Sort Server | ConvertTo-HTML -head $head)

    </body>

</html>

"@

} Else {

    $html3 = $Null

}

 

If ($html1 -OR $html2 -OR $html3) {

    $Emailparams['Body'] = "$($Html1,$Html2,$Html3)"

} Else {

    $Emailparams['Body'] = @"

<html>

    <body>

        <h5>

            <font color='white'>

                Please view in html!

            </font>

        </h5>

        <h2>

            All servers in Active Directory are currently being managed by SCOM Agents.

        </h2>

    </body>

</html>

"@

}

Write-Verbose ("[{0}] Sending Audit report to list of recipients." -f (Get-Date))

Send-MailMessage @Emailparams

We are now at the end of the script where the data we have collected is compiled into HTML and added to the body of the email. Take note of the @EmailParams that is supplied to the Send-MailMessage cmdlet. This is “splatting” being used to supply all of the parameters to the cmdlet. Although I am sure that my HTML code could be a little better, it does well enough to provide a nice readable report to review. If there is nothing to report, an email will still go out. This is a reminder that if a report didn’t go out, it should be investigated for possible issues.

Script in action

Typically, this script is better run as a scheduled job to ensure that any server being brought into the domain receives the SCOM agent. But for this example, I am going to run it to show the verbose output that is generated and the email notification with its HTML body.

Image of command output

The report that is emailed will look something like this, based on the data that was received during the duration of the script.

So there you go…

With a little research and testing against a platform that I was not all that familiar with at the time, I was able to put together a nice script. My script automated the installation of SCOM agents for new servers that were brought into the domain, and provided a report on the installations and failures.

~Boe

Thank you, Boe, for sharing. As always, it is an interesting and informative blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

The Scripting Wife Uses PowerShell to Get Hardware Information


Summary: The Scripting Wife needs to get ready for the 2012 Scripting Games, and she uses Windows PowerShell to get hardware info from her laptop.

Microsoft Scripting Guy, Ed Wilson, is here. The Scripting Wife and I were in Myrtle Beach, where I was working on the Scripting Games over the weekend. The trip to the Carolina Beaches is rapidly becoming an annual migration of the rare but not endangered Script Monkey. Anyway, I was on the lanai reading my email this morning, when I heard her—or more accurately—I felt her enter the room.

“So where are you taking me for Valentine’s Day?” she queried.

“I am taking you to the Charlotte IT Pro User Group meeting,” I said.

“You’re kidding. Right?”

“Nope, that is why I took you to Ruth’s Chris Steak House on Friday night. You knew that.”

“You took me to Ruth’s Chris because you had a gift card,” she countered.

“You knew it was your Valentine’s Day dinner. We had a lovely time. Now what’s the problem?” I asked.

“No problem. I was just trying to see if you were awake. Good morning, my little Script Monkey.”

“By the way, what are you doing up so early? It is not yet noon,” I queried.

“Ha ha. So funny,” she intoned. “Just so you will know, I have been up for hours. I am working on the 2012 Scripting Games Study Guide.”

“Uh huh. I see. Well, that is great. Good luck. Let me know how it is working for you,” I replied pleasantly enough.

“Well, my little Script Monkey, I do have a question.” She was dripping with sincerity.

“Yes,” I replied with more than a modicum of hesitancy. (I was never going to get through the 2,000 emails in my inbox at this rate.)

“I am a little rusty with Windows PowerShell. In fact, I have pretty much not touched it since last year,” she confessed.

“Huh? But all that time you spend at your computer. I thought you were at least writing a script.”

“Facebook. Twitter. PowerScripting PodCast. My blog. I get busy. Besides, if I really need a script, I figure you will write it.”

“You might just figure wrong too,” I said smiling. “So what is your problem?”

“Well, I forget how to find stuff out about hardware. I figure that will be the first event, because your Study Guide lists it first.”

I smiled, but did not commit myself. Paused for a second. Then replied, “OK, sit down.” She took a seat on the lanai swing.

“Now, to find out stuff about the hardware in Windows PowerShell 2.0, it is normally going to require using WMI,” I continued.

“I would use the Get-WMIObject cmdlet to find that out?” she asked.

“Absolutely. Now, write a command that will get information from the BIOS,” I said.

“Well, how do I know that? That is why I am here, silly,” she chastised.

“OK. One of the nice things about the Get-WmiObject cmdlet is that it can tell you what class you need. To do this, use the List parameter, and then use wild cards around the word BIOS.”

I slid the laptop over to her. She thought for a minute, and hesitantly typed the following:

Get-w<tab><tab><space>-l<tab><space>*bios*

Note: In the commands that follow, <tab> represents one press of the Tab key, and <space> represents one press of the Space bar. If you see two tab characters, such as <tab><tab>, it means press the Tab key twice. For most computers, this will probably work; but depending on what modules and other items are loaded in your environment, you may need to press the Tab key more than the indicated number of times. Always compare your completed command with the commands that are indicated as complete.

When the Scripting Wife finished typing, she turned the laptop back to me and smiled. “Is this the right command?” she asked.

I looked at it. The command she had composed is shown here.

Get-WmiObject -List *bios*

“That is exactly right. Now, which WMI class do you think actually reports only information about the BIOS on your computer?” I asked.

She looked at the output, which is shown in the image that follows.

Image of command output

“I think the Win32_BIOS is the right WMI class…because it sounds right, and the properties sound like BIOS things,” she postulated.

“And…”

“And it begins with a Win32 which sounds like Windows. You told me to always use WMI classes that start with Win32,” she said.

“Very good. Now, go ahead and query that class,” I said.

She started to ask how, then bent over the laptop and began to type. She did the following:

  1. She hit the Up arrow once and retrieved the previous command.
  2. She hit the backspace 12 times and erased everything but the Get-WmiObject command.
  3. She highlighted the Win32_Bios class name with her mouse from the list in her output window, and then pressed ENTER.
  4. She right-clicked just after the Get-WMIObject command.

The command she created is shown here.

Get-WmiObject Win32_BIOS

Following are the command and its associated output.

Image of command output

“It seems that my screen is getting cluttered up,” she complained.

“Type the command cls…that is a shortcut for the Clear-Host function,” I instructed.

The Scripting Wife typed the following:

Cls<enter>

“Now, just for fun, use the Get-History cmdlet to see what commands you have typed,” I instructed.

She typed h (a shortcut for the Get-History cmdlet name). Here is her command.

h<enter>

The output is shown in the image that follows.

Image of command output

“Press the Up arrow four times, erase the word bios*, and replace it with *processor*,” I suggested.

The Scripting Wife typed the following commands.

<up arrow><up arrow><up arrow><up arrow><backspace><backspace><backspace><backspace><backspace>*processor*<enter>

The command she created is shown here.

Get-WmiObject -List *processor*

The command and its associated output are shown in the image that follows.

Image of command output

“Now, can you pick out the WMI class that will tell you about the processor on your computer?”

“The Win32_Processor class will do that because it begins with Win32, and it has properties like AddressWidth, Architecture, and Availability,” she said.

“Absolutely right. Now go ahead and clear your screen, and then query the class.”

She did not even hesitate. Not for a second. She immediately did the following.

  1. Highlighted Win32_Processor with her left mouse button, and pressed ENTER.
  2. Typed Cls<enter>.
  3. Used the Up arrow twice to retrieve the previous Get-WmiObject –list *Processor* command.
  4. Used the backspace to erase everything but the Get-WMiObject command, and right-clicked the screen to paste Win32_Processor.

The command that she created is shown here.

get-wmiobject Win32_Processor

The output from the command is shown here.

Image of command output

The Scripting Wife handed the laptop back to me. She smiled and jumped off the swing.

“Well, I guess I will see ya,” she said cheerily.

“Huh,” I asked.

“Yep. If we have to go to the Charlotte IT Pro User Group for Valentine’s Day, then it is going to take me all day to get ready,” she explained.

“I see. And why is that?”

“Well, I have to get my hair done. Then I have to go to the spa and get a manicure and a pedicure. I think I will invite my friends out for lunch, and then we will go shopping so I can get a new outfit to wear to the meeting. Really. Don’t you know anything?”

With that, she bounded out the front door of the lanai, and was gone. Really, sometimes I wonder if I do know anything about the Scripting Wife.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

The Scripting Wife Uses PowerShell to Find Service Accounts


Summary: The Scripting Wife interrupts Brahms to learn how to use Windows PowerShell to find service accounts and service start modes.

Microsoft Scripting Guy, Ed Wilson, is here. One of life’s real pleasures is sitting around a fireplace, listening to a Brahms concerto, and sipping a cup of chamomile tea. I like to add a bit of local honey and drop in a cinnamon stick. So here I am…mellow and as relaxed as a cat lying in a bay window on a warm summer afternoon. The Charlotte SQL User Group meeting tonight will be awesome. We have not seen Chris Skorlinski (the speaker) since the Raleigh SQL Saturday, so we are excited to go. The Scripting Wife and I will have a great time, and it is a nice chance to see some friends we have not seen for a while.

Anyway, now it is time for a warm fire, a little Brahms, and a cup of warm (but not boiling) tea. About to nod off, I was suddenly startled back into reality as the overhead light suddenly switched on.

“How can you see in here in the dark?” the Scripting Wife exclaimed.

“There was nothing to see—I was listening to Brahms,” I began.

“You need to turn that racket down. The neighbor’s dog is beginning to howl. I think he prefers Trace Adkins to that classical stuff anyway,” she continued, “As long as you are awake, I have a problem with a Windows PowerShell command.”

“I see. I think it is you who likes Trace Adkins.”

“Yep, but don’t sidetrack me with talk about Trace Adkins; I need to be prepared for the 2012 Scripting Games so I do not embarrass you or me. Now back to what I came to ask you. I am trying to figure out what account a service uses to start, and I don’t see it.”

“And…”

“And nothing. I type Get-Service, and I do not see anything about service user accounts.”

“Show me your command,” I wearily asked.

“It is right here. Nothing hard…see?”

She plopped down beside me on the sofa and showed me her laptop. She had typed the single command shown here.

Get-Service

The command and the output from the command are shown in the image that follows.

Image of command output

“You know that there is more information don’t you?” I asked.

“Well, duh,” she said. “OK, I will clear the screen and send the output to the Format-List cmdlet.”

Here is what the Scripting Wife did to clear the screen and to obtain all the information available from the Get-Service cmdlet.

  1. She cleared the screen by using the Clear-Host command. But instead of typing Clear-Host, she used the cls shortcut command instead.
  2. Next, she pressed the Up arrow one time to retrieve the previous Get-Service command.
  3. She then typed a space <space> by tapping the Space bar one time, and then she typed a pipe character (the pipe character | is located above the Enter key on my keyboard).
  4. She then typed a space and Format-List * after the pipe character.

The complete command is shown here.

Get-Service | Format-List *

The command and the associated output from the command are shown in the image that follows.

Image of command output

“OK. I am looking at this output, and I still do not see anything about the service account that a service uses to start up,” she complained.

“Well, I did not say it was there, did I? I just asked you if you had looked at all of the information that the Get-Service cmdlet provides,” I stated. “To find the service account start-up information, you need to use WMI. Remember yesterday when we talked about Using PowerShell to Get Hardware Information? You can use the same technique today as you used yesterday.”

The Scripting Wife thought for a few seconds, and then she typed the following command.

Get-WmiObject –list *service*

“Wow, that is a lot of information,” she exclaimed. She turned the laptop monitor so I could look at the display. Indeed, as is shown here, it is a lot of information.

Image of command output

“Use the same technique that you used yesterday to find the WMI class you need to work with services,” I prompted.

Within a few minutes, the Scripting Wife was pointing at Win32_Service.

“Now use the Get-WmiObject cmdlet to query that WMI class,” I said.

It did not take her long to modify her command line to query the Win32_Service WMI class. Here is the command she composed.

Get-WmiObject Win32_Service

The command and the associated results are shown in the image that follows.

Image of command output

“OK, so where are the service accounts?” she asked.

“Remember, you need to use the same technique that you used with the Get-Service cmdlet to retrieve all the information,” I said.

She thought for a bit, then pressed the Up arrow to retrieve the previous command. Then she added a pipeline character and the Format-List cmdlet. The revised command is shown here.

Get-WmiObject win32_service | format-list *

The command and its associated output are shown in the image that follows.

Image of command output

“So where is the service account name?” she asked.

“Look closely at the output. See where it says StartName? That is the service account. See where it says StartMode? That is the way the service starts,” I said, “Why don’t you create a table with just the Name, StartName, and StartMode.”

This time the Scripting Wife did not hesitate. She first cleared the screen, then used the Up arrow to retrieve the previous command. She then edited it by changing it to a Format-Table command. The command that she arrived at is shown here with its associated output.

Image of command output
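Based on that description, the command she arrived at would have looked something like this (a reconstruction of the command in the screen shot, not her exact keystrokes):

```powershell
# Show each service with the account it runs under and how it starts
Get-WmiObject Win32_Service | Format-Table Name, StartName, StartMode
```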

“That’s cool,” she said.

And with that, she was gone. Just in time for the Andante movement in D-major. Brahms may not have had Windows PowerShell in mind when he wrote, but somehow it seems to fit.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Easy Commands to Teach Your Friends about PowerShell


Summary: The Scripting Wife learns about easy Windows PowerShell commands so she can teach her friends about Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. This morning is rather cool, but I am sitting on the front porch sipping a cup of pomegranate tea that I brought back with me from Montreal. It is a nice caffeine-free tea infused with hibiscus and cinnamon. Because it is naturally caffeine free, I do not need to worry about getting jittery. I can sip it all day long if I wish. It goes well with Anzac biscuits, and it makes a nice mid-morning break. I have completed answering scripter@microsoft.com email, checked comments posted to the Hey, Scripting Guy! Blog, checked out things on Twitter and Facebook, and I am now ready to start working on events for the 2012 Scripting Games. I have had a great response from Microsoft MVPs and others in the Windows PowerShell community to my query about judging the 2012 Games.

Anyway, let me think about the games while I sip this fine cup of tea…

“There you are,” the voice cracked the tranquil morning stillness like someone dropping a tray of dishes in the dining room of a fine restaurant. “I have been looking all over for you.”

“I have been here,” I replied. “This must have been the first time you came by this location.”

“Don’t be cute,” she bristled. “Obviously this is the first time I have seen you since I began looking.”

I spied her laptop. It was nestled under her arm like a football in the arms of a running back heading towards the end zone.

“Is there something I can help you with?” I politely asked.

“You know you should not end a sentence with a preposition,” she stated sarcastically.

“Is there something I can help you with, my dear,” I revised.

“Much better. Yes, there is something with which you can assist,” she rejoined. “I am curious, what are the easiest Windows PowerShell cmdlets to use?”

“In my mind, the easiest cmdlets to use are the ones that just seem to work when you type the command and press ENTER. For example, open the Windows PowerShell console, and type Get-Process,” I instructed.

The Scripting Wife opened the Windows PowerShell console, and typed the following:

Get-Pr<tab><enter>

The command and its associated output are shown in the image that follows.

Image of command output

“Another easy Windows PowerShell command is the Get-Service cmdlet, which returns information about services on the computer. Go ahead and try it,” I suggested.

The Scripting Wife typed the following command:

Get-s<tab><enter>

The resulting command and its output are shown in the following image.

Image of command output

“Another easy cmdlet is the Get-Date cmdlet. It retrieves the current date and time from the computer. Why don’t you try it as well,” I said.

The Scripting Wife quickly typed the following:

Get-D<tab><enter>

The command and its associated output are shown in the image that follows.

Image of command output

“One other cmdlet that is very useful, and is also extremely easy to use is the Get-Hotfix cmdlet. It displays a listing of all the hotfixes that are installed on the computer. Go ahead and give it a try,” I suggested.

She typed the following keystrokes.

Get-hot<tab><enter>

When she pressed ENTER, the computer paused for a second, and then the output that is shown in the following image appeared.

Image of command output

“That is pretty cool,” the Scripting Wife said.

“There are two other commands that are really useful, and really easy to use: Get-History, which shows you all of your previously typed commands, and Start-Transcript, which records the commands and the associated output. Why don’t you type Get-History and see what it displays,” I suggested.

She typed the following characters:

Get-Hist<tab><enter>

She then used the Up arrow to recall the previous command, and then entered it again. The output is shown in the following image.

Image of command output

“Let me show the output from a transcript,” I said as I turned my laptop screen towards her. The transcript is shown in the image that follows.

Image of command output
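To produce a transcript like that, you start recording before you work and stop when you are done; for example (the path here is illustrative):

```powershell
# Begin recording all commands and output in this session to a file
Start-Transcript -Path C:\fso\mySession.txt

Get-Date
Get-Process

# Stop recording and close the transcript file
Stop-Transcript
```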

“Yes, there are other really easy cmdlets to use, such as Get-Culture, Get-Acl, Get-ChildItem, and Get-Random. They all return information, but they are not quite as immediately useful as the previous cmdlets,” I said, “By the way, why do you ask?”

“Well, I was talking to one of my friends about the Scripting Games, and she asked me about Windows PowerShell. I thought the best way to tell her about Windows PowerShell was to show her, and so I wanted to know what cmdlets would be best to use,” she replied.

“Cool,” I said. There was really nothing else for me to say, except perhaps, “Real cool.”

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Scripting Wife Learns to Work with Text Files


Summary: The Scripting Wife learns to use Windows PowerShell to write to text files and to include the date in a log.

Microsoft Scripting Guy, Ed Wilson, is here. The other night the Scripting Wife and I went to the Charlotte IT Pro User Group meeting where I participated in a panel discussion about various Microsoft approaches to various problems. It was a very fun and educational experience. Anyway, before we left the meeting, I agreed to participate in their IT Pro appreciation day. It will be an intense all day training. I will be talking about why IT Pros need to learn Windows PowerShell. I believe the Scripting Wife even volunteered to assist with registration (or some other myriad of details that these types of all-day events generate).

Speaking of the Scripting Wife, I came downstairs to look for her, but I cannot find her. I can see where she has been—a book turned upside down to mark the page, a cup of hot chocolate still steaming, a plate full of Tim Tams, and her Windows 7 phone. These clues all seem to indicate that she has not been gone for long, nor will it be long before she returns. The bad thing is that because she abandoned her Windows 7 phone, I cannot call her to ask her whereabouts—that may be the idea.

I had just picked up one of her Tim Tams and was getting ready to consume it when she suddenly materialized in the doorway.

“Don’t you dare eat all my Tim Tams,” she chastised.

“I did not know there were any Tim Tams left,” I remarked.

“That is why they still exist,” she countered, “You have the Anzac Biscuits, and I have the Tim Tams. That was the deal.” She is not fooled; she knows that I will share my Anzac Biscuits with her.

“Yes you are right. Perhaps I only wanted to see if it was still good,” I lamely offered.

“Perhaps you only wanted to be a sneak and eat up all my Tim Tams,” she corrected. “As long as you are here, why don’t you make yourself useful? I need to know about creating files.”

Write process information to a text file

“OK. Why don’t you sit down at your computer and open the Windows PowerShell console,” I suggested. “To begin with, write the results from the Get-Process cmdlet to a text file by using two right redirection arrows. The right redirection arrow points to the right, and on your particular keyboard it appears on the bottom row of the keyboard above the period. Just type Get-Process and redirect the output to a file called myprocesses.txt. Use your scratch directory, the one called FSO which is off of drive C.”

The Scripting Wife thought for a second, and appeared to study her keyboard.

“So all I need to do is this?” and she began to type. The following are the exact keyboard sequences that she typed.

Get-pr<tab><space>>><space>C:\fso\myprocesses.txt<enter>

“Nothing happened,” she said.

“Exactly. Nothing returns when you use the redirection arrows to create a file. Now open it in Notepad.”

The Scripting Wife moved her hand to the mouse, and began navigating through the menus.

“Stop,” I said. “Put down that mouse. It is easier and faster to open Notepad and the file all at once. Do this: type the word notepad, and follow it with the path to the file you just created.”

The Scripting Wife typed notepad and then began to type the path to the file.

“One thing to keep in mind is you can use Tab Expansion to fill in the path. It is much faster when you do that.”

“OK,” she said.

Open a text file the easy way

The following is exactly the command the Scripting Wife typed.

Notepad c:\f<tab>\my<tab><enter>

The image that follows illustrates the first command (which creates the file) and the second command (which opens the file).

Image of command

The newly created text file is shown in the image that follows.

Image of command output

“By the way, it is not necessary to type the word notepad in front of the path to the text file because the default file association of a .txt file is with notepad.exe. This means that the file will automatically open if all you do is supply the path to it,” I said. “Why don’t you go ahead and try it?”

Open a text file with an even easier way

The Scripting Wife used the Up arrow to recall the previous command, and she erased the word notepad by using the Backspace key. The revised command is shown here.

C:\fso\myprocesses.txt

“The double redirection arrow appends to a text file. This means if you run your command to write process information to a text file, it will add the new content to the bottom of the file. This is a handy feature when you want to add information to a text log file,” I said. “So why don’t you recall your previous command that writes process information to the text file, and run it again?”

Append to a text file

The Scripting Wife used the Up arrow a couple of times until she recalled the line that is shown here. Then she pressed the ENTER key to run the command a second time.

Get-Process >> c:\fso\myprocesses.txt

Next she recalled the command to open the text file in Notepad. Her Windows PowerShell console is shown in the image that follows.

Image of command

The revised text file is shown here.

Image of command output

“If you want to keep only one copy of information, instead of creating a continuous log, for example, you can use a single redirection arrow, instead of using two redirection arrows,” I said. “To make it easy to see the difference, change your Get-Process command that writes to a text file to Get-Service instead. Also, change the double redirection arrow to a single one.”

Overwrite a text file

The Scripting Wife used the Up arrow a couple of times until she had the Get-Process command on the command line. Then she edited the line so that it appears like the one here.

Get-service > c:\fso\myprocesses.txt

She then opened it in Notepad by recalling the notepad command that is shown here.

notepad C:\fso\myprocesses.txt

The modified text file is shown in the following image.

Image of command output

“Well, what do I do if I want to write the date and time that I get this information at the top of the file?” the Scripting Wife asked.

Add the date to log files

“That is a great question! I think the easiest way to do that is to make two commands. Remember that the semicolon is a command separator? All you need to do is to write the date to the file, and then append the process information to the same file. The key, here, is to first use the “overwrite” (the single redirection arrow) so that your date will appear on the top of the file. Then it is important to use the “append” (the double redirection arrow) to add the process information to the bottom of the file,” I instructed. “Go ahead and give it a try.”

The Scripting Wife thought for a little bit, and then she arrived at the command that is shown here.

get-date > c:\fso\myprocesses.txt; Get-Process >> c:\fso\myprocesses.txt

She then opened the file in Notepad. The text file is shown in the following image.

Image of command output

“What if I want to keep adding stuff to the log, and I do not want it to overwrite everything?” she asked.

“In that case, you change the first single redirection arrow to a double redirection arrow,” I said.

The Scripting Wife thought for a second, recalled the previous command by using the Up arrow, and modified it such that it is shown here.

get-date >> c:\fso\myprocesses.txt; Get-Process >> c:\fso\myprocesses.txt

The commands and associated output are shown in the image that follows.

Image of command output
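The pattern the Scripting Wife built—write the date, then append the data—can be wrapped in a small reusable function. Add-ProcessLog is a hypothetical helper name for this sketch, not something from the post:

```powershell
# Hypothetical helper: append a timestamped process snapshot to a log file.
# The default path matches the scratch file used throughout this post.
function Add-ProcessLog {
    param([string]$Path = 'C:\fso\myprocesses.txt')
    Get-Date >> $Path      # timestamp for this entry
    Get-Process >> $Path   # process snapshot appended below it
}

Add-ProcessLog
```

Each call adds one dated block to the bottom of the log, so the file becomes a running history.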

“Thanks. Well, I am out of here. Stay out of my cookies,” she said.

“Huh? What,” I asked.

“Well you are going to be in an all-day conference call today, so there is no reason for me to hang around and be in your way. I have been planning a girl’s day out today for some time,” she said.

“Sounds expensive,” I hesitantly suggested.

“Not too bad, but you better hope the stock market goes up today just in case,” she smiled and evaporated before my very eyes. Sometimes I wonder …

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Save Time with Win32_TimeZone and DST Time Changes


Summary: Write a Windows PowerShell function to determine the status of time changes for daylight saving time.

Microsoft Scripting Guy, Ed Wilson, is here. At the Windows PowerShell User Group meeting in Charlotte, North Carolina, Brian Wilhite talked about a few of his scripts during the script club. I said, “Dude, that is cool,” and I immediately challenged Brian to share his scripts with us. The result is two excellent guest postings. Here is the first one.

Brian Wilhite works as a Windows System Administrator for a large health-care provider in North Carolina. He has over 15 years of experience in IT. In his current capacity as a Windows SysAdmin, he leads a team of individuals that have responsibilities for Microsoft Exchange Server, Windows Server builds, and management and system performance. Brian also supports and participates in the Charlotte PowerShell Users Group.
Twitter: Brian Wilhite

Here’s Brian…

Just prior to the latest time change in the fall, I was asked to run a time verification script on all of our computers running Windows Server. Typically, this was done after the time change, between 1:00 AM and 2:00 AM. Being an IT Guy, I’m used to long days and/or long, late nights, but I thought to myself, “I wonder if I can be proactive in knowing which servers may have problems making the change this year.” So I researched, and found the Win32_TimeZone WMI Class and all the goodness therein. Here is some of what I found:

Get-WmiObject -Class Win32_TimeZone

DaylightDay         : 2

DaylightDayOfWeek   : 0

DaylightMonth       : 3

DaylightName        : Eastern Daylight Time

Description         : (UTC-05:00) Eastern Time (US & Canada)

StandardDay         : 1

StandardDayOfWeek   : 0

StandardMonth       : 11

StandardName        : Eastern Standard Time

So…

Like I always do when I evaluate WMI classes, I performed a Bing search for MSDN Win32_TimeZone.

After I reviewed all of the information there, specifically DaylightDay, DaylightDayOfWeek, etc., I determined that I could, in fact, proactively identify servers that could have time change issues on the day of the time change. 

This excerpt from the MSDN website sums it all up:

If the transition day (DaylightDayOfWeek) occurs on a Sunday, then the value "1" indicates the first Sunday of the DaylightMonth, "2" indicates the second Sunday, and so on. The value "5" indicates the last DaylightDayOfWeek in the month.

I took this information and started thinking, “I can write a function that will gather this information and report what it finds.” So here’s what I did…

First, I created the framework for an advanced function. It contains the usual items, such as: CmdletBinding, defining parameters, setting up the Begin, Process, Try, Catch, and End Script blocks. This is shown in the following image.

Image of function

Next, I started thinking of all the data that I wanted to capture. I know I will need the Win32_TimeZone WMI class. Because in the past we always checked the current time, I will also query Win32_LocalTime. Therefore, here is the start of the processing part of the Get-DSTInfo function:

 Image of function

As you see in the previous image, I cast the $LocalTime variable as a DateTime object by passing it a string composed of values from the Win32_LocalTime WMI class. I will use this later to create a custom PSObject, which will allow you to treat this property as a DateTime object, if needed.

I also captured the WMI class that’s going to gather all the data to make the magic happen, the Win32_TimeZone class. You may have noticed the switch statement for the $TimeZone.DaylightDay property. Because the value is numeric and not “display friendly,” I set up several switch statements. They are shortened for brevity, but you get the idea:

Switch ($TimeZone.DaylightDay)

{

1 {$DSTDay = "First"}

2 {$DSTDay = "Second"}

#From 1 to 5 signifying the First through Last week in the month.

}

Switch ($TimeZone.DaylightDayOfWeek)

{

0 {$DSTWeek = "Sunday"}

1 {$DSTWeek = "Monday"}

#From 0 to 6 to signify the days of the week.

}

Switch ($TimeZone.DaylightMonth)

{

1 {$DSTMonth = "January"}

2 {$DSTMonth = "February"}

3 {$DSTMonth = "March"}

#From 1 to 12 to signify months of the year.

}

OK. So consider the following properties of the $TimeZone object, which is an instance of Win32_TimeZone.

$TimeZone.DaylightDay = 2

$TimeZone.DaylightDayOfWeek = 0

$TimeZone.DaylightMonth = 3

These property values will mean the “spring ahead.” The time change will occur on the “Second (2) Sunday (0) of March (3)”.  The same thing holds true with the $TimeZone.StandardDay, $TimeZone.StandardDayOfWeek, and $TimeZone.StandardMonth—but obviously, they will have different values.
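To make the mapping concrete, the numeric triplet can be resolved to an actual calendar date for a given year. Get-TransitionDate below is a hypothetical helper, not part of Brian's function; it assumes a Day value of 5 means the last occurrence of that weekday in the month:

```powershell
# Hypothetical helper: resolve (Day, DayOfWeek, Month) from Win32_TimeZone
# to a concrete date. Day is 1-5 ("first" through "last" occurrence);
# DayOfWeek is 0 (Sunday) through 6 (Saturday).
function Get-TransitionDate {
    param([int]$Year, [int]$Day, [int]$DayOfWeek, [int]$Month)
    $date = Get-Date -Year $Year -Month $Month -Day 1 -Hour 0 -Minute 0 -Second 0
    $count = 0
    $last = $null
    while ($date.Month -eq $Month) {
        if ([int]$date.DayOfWeek -eq $DayOfWeek) {
            $count++
            $last = $date
            if ($count -eq $Day) { return $date }
        }
        $date = $date.AddDays(1)
    }
    return $last   # Day = 5: fall through to the last occurrence in the month
}

# Second (2) Sunday (0) of March (3) in 2012 resolves to March 11, 2012
Get-TransitionDate -Year 2012 -Day 2 -DayOfWeek 0 -Month 3
```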

When all of my switch statements were set up (six in total), I created several objects depending on what parameters were passed when running the function. The two parameters that I defined were -Standard and -Daylight. Neither parameter is mandatory, because running the function without parameters will return both Standard and Daylight time change information in the form of a PSObject for the local host. Here is the If statement that handles the case where neither parameter is passed:

If ((-not $Standard) -and (-not $Daylight))

{

$STND_DL = New-Object PSObject -Property @{

Computer=$Computer

StandardName=$STDTime

StandardDay=$STDDay

StandardDayOfWeek=$STDWeek

StandardMonth=$STDMonth

DaylightName=$DayTime

DaylightDay=$DSTDay

DaylightDayOfWeek=$DSTWeek

DaylightMonth=$DSTMonth

CurrentTime=$LocalTime

}#End $STND_DL New-Object

$STND_DL = $STND_DL | Select-Object -Property Computer, StandardName, StandardDay, StandardDayOfWeek, StandardMonth, DaylightName, DaylightDay, DaylightDayOfWeek, DaylightMonth, CurrentTime

$STND_DL

As you may have noticed, I did something pretty cool. If you have played with custom objects in Windows PowerShell, you know that they don’t always display the properties the way you would like them to. So, I piped my custom PSObject ($STND_DL) to the Select-Object cmdlet to display the properties in a logical reading order.

This is what you get when you run the function; the output is an object that you can have your way with:

 Image of command output

Yes, you read that right, 3:49AM. Like I said, I’m not a stranger to long late nights—after all, I am an IT Guy.

Getting back to the topic…

You could also pipe computer names to Get-DSTInfo, something like the following:

Get-ADComputer -Filter * | Select-Object -ExpandProperty Name | Get-DSTInfo | Export-Csv -Path C:\TimeZoneInfo.csv

This would gather all the computers in your domain and run the Get-DSTInfo function against them, then export the findings to a .csv file. The way you would read the object output is that the time change would occur the First Sunday of November and Second Sunday of March.

This function allowed me to proactively review the time change information of all 1500+ servers in our domain. The great thing is that I did find a few that did not have the correct time change patch installed, and I was able to remediate that before it became an issue when the clocks turned back last fall. 

Thanks for listening to me ramble on about my simple but useful function. Happy PowerShelling!

~Brian

Thank you, Brian. The full script can be found on the Script Center Repository.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Find Last Logon Times for Virtual Workstations


Summary: Learn how to Use Windows PowerShell to find the last logon times for virtual workstations.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome back guest blogger, Brian Wilhite. Brian was our guest blogger yesterday when he wrote about detecting servers that will have a problem with an upcoming time change due to daylight saving time. Here is a little bit about Brian.

Brian Wilhite works as a Windows System Administrator for a large health-care provider in North Carolina. He has over 15 years of experience in IT. In his current capacity as a Windows SysAdmin, he leads a team of individuals that have responsibilities for Microsoft Exchange Server, Windows Server builds, and management and system performance. Brian also supports and participates in the Charlotte PowerShell Users Group.
Twitter: Brian Wilhite

Take it away, Brian…

Several weeks ago our virtual guy asked me if there was a way to determine which virtual workstations have been recently used. I started thinking, and of course, the first place I turned was to Windows PowerShell. I did some research and found the Win32_UserProfile WMI class. However, the “minimum supported client” is Windows Vista with SP1, and the majority of our virtual workstations are running Windows XP. So the dilemma was to create a function that would provide the same type of information for computers running Windows XP and later. I evaluated the information that was returned from the Win32_UserProfile class. As you see in the following image, I indexed into the third object of the Win32_UserProfile array for brevity, and this is the information that’s available.

Image of command output

I wanted to provide the following information:

  • The computer that the function was run against
  • The user account that was logged on last (security identifier, or SID)
  • The last use time (LastUseTime)
  • Whether the user is currently logged on (Loaded)

I’m going to use two methods to gather these four pieces of information. First, I’m going to use WMI to collect the information on computers running Windows Vista with SP1 and later. For computers running Windows Vista without SP1 and earlier, I’m going to use user profile file properties and registry information to collect the needed data. We will discuss the WMI method first.

I am using the Win32_OperatingSystem WMI class to collect the build number to determine which method to use.

$Win32OS = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $Computer

$Build = $Win32OS.BuildNumber

The “If ($Build -ge 6001)” is the first decision point. If the build number is 6001 and above, the script block will run.

If ($Build -ge 6001)

{

$Win32User = Get-WmiObject -Class Win32_UserProfile -ComputerName $Computer

I am using RegEx to filter the LocalService, NetworkService, and System profiles because they aren’t needed, and I am sorting by LastUseTime to pick the one most recently used.

$Win32User = $Win32User | Where-Object {($_.SID -notmatch "^S-1-5-(18|19|20)$")}

$Win32User = $Win32User | Sort-Object -Property LastUseTime -Descending

$LastUser = $Win32User | Select-Object -First 1

The Win32_UserProfile Loaded property indicates whether the user was logged on at the time the query was run. I’m capturing that value in a new variable ($Loaded). I will create a New-Object with that property and value later.

$Loaded = $LastUser.Loaded

So now we’re looking at the LastUseTime property—the value is a “System.String” (20120209035107.508000+000), but I need to convert it to a “System.DateTime” object so that it’s readable. I will use the WMI ConvertToDateTime method to accomplish this.

$Time = ([WMI] '').ConvertToDateTime($LastUser.LastUseTime)
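As an aside, the same conversion can be done without instantiating an empty WMI object by calling the System.Management.ManagementDateTimeConverter .NET class directly (shown here with the sample string from above):

```powershell
# Convert a DMTF date string (yyyymmddHHMMSS.mmmmmm+UUU) to a DateTime
[System.Management.ManagementDateTimeConverter]::ToDateTime('20120209035107.508000+000')
```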

One of the things I need to do is take the SID that is collected via Win32_UserProfile and convert it to Domain\samAccountName format.

So I created a New-Object with the .NET System.Security.Principal.SecurityIdentifier class, and I specified the $LastUser.SID variable.

$UserSID = New-Object System.Security.Principal.SecurityIdentifier($LastUser.SID)

When the object is created with the SID value, there is a Translate method that can be used to convert the SID to the Domain\samAccountName format.

$User = $UserSID.Translate([System.Security.Principal.NTAccount])

Instead of using Write-Host or some string-type output, I prefer to use object-based output. The following code snippet shows the four pieces of information that I wanted to gather and return.

$UserProf = New-Object PSObject -Property @{

Computer=$Computer

User=$User

Time=$Time

CurrentlyLoggedOn=$Loaded

}

If you’ve ever created custom objects in Windows PowerShell, you know that without any special XML formatting, when you return the object, it will place the properties in an order that you may not like. To quickly remedy this, what I usually do is pipe my variable that contains the custom object to Select-Object and type the names of the properties in the order in which I want them returned.

$UserProf = $UserProf | Select-Object Computer, User, Time, CurrentlyLoggedOn

$UserProf

Now when $UserProf is returned, the following is displayed:

Image of command output

Now that we’ve taken care of any computer that has the Win32_UserProfile WMI class, beginning with Windows Vista with SP1, let’s take a look at those computers that do not have that WMI class. I started thinking about how to figure out the last person to log on, what time they logged on, and if they were currently logged on. I observed my profile as I logged on, and I noticed that the NTUSER.DAT.LOG file was immediately modified. This file is intermittently updated throughout the user’s session. The NTUSER.DAT.LOG is used for fault tolerance purposes if Windows can’t update the NTUSER.DAT file. Obviously, as soon as the user logs off, the file is no longer updated.

The If statement checks for the build number 6000 and below, meaning Windows Vista without SP1 and earlier.

If ($Build -le 6000)

{

To scan the user profile directories for the NTUSER.DAT.LOG files, I am making the assumption that the Documents and Settings folder resides on the system drive. I’m querying the system drive information from the Win32_OperatingSystem WMI class and isolating the drive letter by using the Replace method. When we have that information, we can put the $Computer name and system drive letter together to make a UNC path for scanning Documents and Settings.

$SysDrv = $Win32OS.SystemDrive

$SysDrv = $SysDrv.Replace(":","$")

$ProfDrv = "\\" + $Computer + "\" + $SysDrv

$ProfLoc = Join-Path -Path $ProfDrv -ChildPath "Documents and Settings"

$Profiles = Get-ChildItem -Path $ProfLoc

When we have all of the user profiles, we want to search for the NTUSER.DAT.LOG files. After we capture all the NTUSER.DAT.LOG files in the $LastProf variable, we need to sort by the LastWriteTime property in descending order, and select the first one.

$LastProf = $Profiles | ForEach-Object -Process {$_.GetFiles("ntuser.dat.LOG")}

$LastProf = $LastProf | Sort-Object -Property LastWriteTime -Descending | Select-Object -First 1

We’ve isolated the most recent NTUSER.DAT.LOG, so I’m now making another assumption that the profile folder name will equal the UserName. By using the Replace method, I’m going to strip the “\\$Computer\<System Drive>$\Documents and Settings” off of the DirectoryName, which represents the full path of the user’s profile. I’m also going to grab the LastAccessTime and cast it to the $Time variable.

$UserName = $LastProf.DirectoryName.Replace("$ProfLoc","").Trim("\").ToUpper()

$Time = $LastProf.LastAccessTime

We are going to use the following code to extract the user’s SID from the access control entry of the NTUSER.DAT.LOG file.

$Sddl = $LastProf.GetAccessControl().Sddl

$Sddl = $Sddl.split("(") | Select-String -Pattern "[0-9]\)$" | Select-Object -First 1

Here we are formatting the SID, and assuming the sixth entry will be the user’s SID.

$Sddl = $Sddl.ToString().Split(";")[5].Trim(")")
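To see what that parsing does, here is an illustration with a made-up SDDL string (the SID is invented for the example): each access control entry is wrapped in parentheses, and the trustee SID is the sixth semicolon-delimited field of the matching entry.

```powershell
# Made-up SDDL string for illustration only
$Sddl = 'O:BAG:SYD:(A;;FA;;;SY)(A;;FA;;;BA)(A;;FA;;;S-1-5-21-1004336348-1177238915-682003330-1013)'

# Keep only the ACE that ends in a digit followed by ")" - i.e., the SID entry
$Sddl = $Sddl.Split("(") | Select-String -Pattern "[0-9]\)$" | Select-Object -First 1

# Sixth semicolon-delimited field, with the trailing ")" trimmed off
$Sddl.ToString().Split(";")[5].Trim(")")   # S-1-5-21-1004336348-1177238915-682003330-1013
```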

The following code is used to convert the $UserName variable to the SID to detect if the profile is loaded via the remote registry and to compare the SID queried from the NTUSER.DAT.LOG file.

$TranSID = New-Object System.Security.Principal.NTAccount($UserName)

$UserSID = $TranSID.Translate([System.Security.Principal.SecurityIdentifier])

I felt it was necessary to compare the SID queried from the NTUSER.DAT.LOG file and the UserName extracted from the profile path, to ensure that the correct information is being returned.

If ($Sddl -eq $UserSID)

{

If the SIDs are equal, I’m going to open the HK_USERS hive and set the $Loaded variable to True if SubKeys contains the SID and to False if it isn’t present. If the user’s SID is present in the HK_USERS hive, the user is currently logged on.

$Reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey([Microsoft.Win32.RegistryHive]"Users",$Computer)

$Loaded = $Reg.GetSubKeyNames() -contains $UserSID.Value

Because I have the UserName and no DomainName, I’m going to convert the SID to Account so that it will return in the DOMAIN\USER format.

$UserSID = New-Object System.Security.Principal.SecurityIdentifier($UserSID)

$User = $UserSID.Translate([System.Security.Principal.NTAccount])

}#End If ($Sddl -eq $UserSID)

If the SIDs are not equal, I will set $User to the profile folder name and set $Loaded to “Unknown” because I could not determine if the SID was 100% accurate.

Else

{

$User = $UserName

$Loaded = "Unknown"

}#End Else

Here I am creating and formatting the custom object, like we discussed earlier for the Windows Vista with SP1 and later script block.

#Creating the PSObject UserProf

$UserProf = New-Object PSObject -Property @{

Computer=$Computer

User=$User

Time=$Time

CurrentlyLoggedOn=$Loaded

}

$UserProf = $UserProf | Select-Object Computer, User, Time, CurrentlyLoggedOn

$UserProf

I set up this function to accept piped input for the ComputerName parameter. It will also accept an array of computer names. So when we run Get-LastLogon, we’ll be able to determine which workstations haven’t been used in a while, as shown in the following image.

Image of command output

~Brian

Thank you Brian, this is a most useful and interesting script. The complete script can be found at the Script Center Repository.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

The Easy Way to Use PowerShell to Work with Special Folders


Summary: Microsoft Scripting Guy Ed Wilson shows the easy way to use Windows PowerShell to work with the paths to special folders.

Hey, Scripting Guy! Question Hey, Scripting Guy! I notice that in lots of your sample scripts, you often use a folder named FSO, and it appears off of your C: drive. Why do you do this? Is creating an FSO folder really a best practice?

—EV

Hey, Scripting Guy! Answer Hello EV,

Microsoft Scripting Guy, Ed Wilson, is here. Last week was an awesome week. The Scripting Wife and I went to two different User Group meetings, and I had several great conference calls. Yep, things were pretty exciting down here in Charlotte, North Carolina in the United States.

One of the cool things about Windows PowerShell 2.0 is that it is nearly 100 percent backward compatible with Windows PowerShell 1.0. This means that things you learned in the first version continue to be useful. In addition, this means that blogs written for earlier versions and scripts written for earlier versions continue to be useful. The one major consideration is that the older materials might not take advantage of newer capabilities. In some cases, this is a non-issue, but for other things, this could be a problem.

EV, the reason I use a folder named FSO goes back a long time—a really long time. The C:\FSO folder is my scratch folder. It is a folder where I put temporary information, but it is more than simply the TEMP folder because it is a temporary folder I use only for scripts. Therefore, the C:\fso folder is more like my temporary script folder. I put both scripts and output in that folder. The reason it is off the root directory is so that it is easily accessible, and so the path to the folder does not consume a lot of space. This is also the reason for the name FSO, which you can tell is short. I have used a folder named FSO as my scripting temp folder for years. In fact, the name FSO itself is a shortcut name that stands for FileSystemObject, which is an object that was often used in VBScript scripts (but it can be used in other languages, such as Windows PowerShell) to read and to write to files.

Now, if I did not use a C:\FSO folder to hold files that I use for temporary script input and output, what else could I use? Well, for one thing, I could use a folder that I know will always be available. Luckily, in Windows, there are lots of these types of folders. In fact, they are called special folders. One of the problems with using these special folders is that they have really long paths.

A couple of years ago, during the wrap up for the 2010 Scripting Games, I wrote a blog titled How Can I Query the Contents of a Special Folder on a Windows 7 Computer in which I talked about using the Shell.Application object to list the enumerations and the values of the special folders. It is an excellent blog and you should read it after you read today’s blog.

One easy way to work with special folders in Windows PowerShell is to use the System.Environment.SpecialFolder enumeration. In Windows PowerShell you can easily obtain the path to any special folder by using the GetFolderPath static method from the System.Environment .NET Framework class. This is easier to do than it sounds. Here is the code to obtain the path to the mydocuments special folder.

[environment]::getfolderpath("mydocuments")

You can use this value directly. For example, to get a listing of all the files in the mydocuments special folder, use the code that is shown here.

Get-ChildItem ([environment]::getfolderpath("mydocuments"))
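The GetFolderPath method accepts any name from the System.Environment.SpecialFolder enumeration. If you are not sure which names are available, a short sketch like the following lists each value together with the path it resolves to:

```powershell
# List every SpecialFolder enumeration value and the path it resolves to
[Enum]::GetValues([System.Environment+SpecialFolder]) |
    ForEach-Object {
        New-Object PSObject -Property @{
            Name = $_
            Path = [Environment]::GetFolderPath($_)
        }
    } | Select-Object Name, Path
```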

If you decide you want to use the mydocuments folder for your scratch directory, you should consider adding it to your Windows PowerShell profile—both your Windows PowerShell console profile and your Windows PowerShell ISE profile. You should create a short variable name to reference this location. Here is an example of a command that you could add to your profile.

$mytemp = [environment]::getfolderpath("mydocuments")

You could then use it directly as shown here.

dir $mytemp

If this is something that you do decide to do, you should, perhaps, create a subfolder inside the mydocuments special folder, rather than cluttering up the folder with a bunch of scripts and temporary files. The following image shows my current mydocuments special folder. In this image, note that there are lots of folders, and lots of files (well, you cannot see all the files, but trust me, they are there).

Image of folder

There is a Windows PowerShell folder inside the mydocuments special folder. It contains a number of folders and a few profiles. This folder also holds modules. This would actually be a great place to add a new folder to hold scratch script stuff. The command that follows uses the $mytemp variable created earlier.

New-Item -ItemType directory -Path $mytemp\windowspowershell\ScriptScratch

Now, of course, there is a sort of a problem. I want the new ScriptScratch folder to be my temporary folder and to be called $mytemp. Also, a better name for the variable that holds the current mydocuments location would be $mydocuments. Here are the two commands that I use to accomplish this task.

Rename-Item -Path variable:mytemp -NewName mydocuments

$mytemp = "$mydocuments\windowsPowerShell\ScriptScratch"

Now, I check the values of these two variables.

PS C:\> $mytemp

C:\Users\edwils\Documents\windowsPowerShell\ScriptScratch

PS C:\> $mydocuments

C:\Users\edwils\Documents

PS C:\>

Cool, it works. I can now add these two commands to my two Windows PowerShell profiles.

$mydocuments = [environment]::getfolderpath("mydocuments")

$mytemp = "$mydocuments\windowsPowerShell\ScriptScratch"

EV, if you work on multiple computers, and if you are not certain that the ScriptScratch folder exists, you might want to add a test to check for the existence of the folder. For that matter, the WindowsPowerShell folder does not exist in the mydocuments special folder unless specifically created. That is what my Copy-Modules script does. (The Copy-Modules script is shown in my Windows PowerShell ISE profile).
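Such a check takes only a few lines. Here is one minimal way to sketch it for a profile, assuming the same $mydocuments and $mytemp variable names used above:

```powershell
$mydocuments = [environment]::getfolderpath("mydocuments")
$mytemp = "$mydocuments\WindowsPowerShell\ScriptScratch"

# Create the scratch folder only if it does not already exist
if (-not (Test-Path -Path $mytemp)) {
    New-Item -ItemType Directory -Path $mytemp | Out-Null
}
```

Because New-Item creates intermediate folders as needed, this also takes care of a missing WindowsPowerShell folder.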

Well, that is all there is to working with scratch directories—at least for now. Join me tomorrow for Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Learn Four Ways to Use PowerShell to Create Folders


Summary: Microsoft Scripting Guy, Ed Wilson, shows four ways to create folders with Windows PowerShell, and he discusses the merits of each approach.

Hey, Scripting Guy! Question Hey, Scripting Guy! I am trying to find the best way to create a new folder while using Windows PowerShell. I have seen many ways of creating folders in scripts that I have run across on the Internet. They all seem to use something different. Is there a best way to create a new folder?

—GW

Hey, Scripting Guy! AnswerHello GW,

Microsoft Scripting Guy, Ed Wilson, is here. This morning on Twitter, I saw a tweet that said “New-TimeSpan tells me that there are 41 days until the 2012 Scripting Games.” Indeed. Even though I have been busily working away on the Scripting Games, it seems awfully soon. I fired up Windows PowerShell and typed the following code:

New-TimeSpan -Start 2/21/12 -End 4/2/12

Sure enough, the returned TimeSpan object tells me that there are indeed 41 days until the 2012 Scripting Games. I decided I would like a cleaner output, so I used one of my Top Ten Favorite Windows PowerShell Tricks: group and dot. The revised code is shown here.

(New-TimeSpan -Start 2/21/12 -End 4/2/12).days

Both of these commands and the associated output are shown in the image that follows.

Image of command output

GW, you are correct, there are lots of ways to create directories, and I will show you four of them…

Method 1

It is possible to use the .NET Framework Directory class from the System.IO namespace. To use the Directory class to create a new folder, use the CreateDirectory static method and supply a path that points to the location where the new folder is to reside. This technique is shown here.

[system.io.directory]::CreateDirectory("C:\test")

When the command runs, it returns a DirectoryInfo object. The command and its associated output are shown in the image that follows.

Image of command output

I do not necessarily recommend this approach, but it is available. See the Why Use .NET Framework Classes from Within PowerShell Hey, Scripting Guy! blog for more information about when to use and not to use .NET Framework classes from within Windows PowerShell.
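Because CreateDirectory returns a DirectoryInfo object, the result can be captured in a variable and its properties examined. A quick sketch (the path is just an example):

```powershell
# CreateDirectory returns a DirectoryInfo object for the new folder.
$dir = [System.IO.Directory]::CreateDirectory("C:\test")
$dir.FullName       # full path of the folder
$dir.CreationTime   # when the folder was created
```

One handy detail: CreateDirectory does not complain if the folder already exists; it simply returns the DirectoryInfo object for the existing folder.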

Method 2

Another way to create a folder is to use the Scripting.FileSystemObject object from within Windows PowerShell. This is the same object that VBScript and other scripting languages use to work with the file system. It is extremely fast, and relatively easy to use. After it is created, Scripting.FileSystemObject exposes a CreateFolder method. The CreateFolder method accepts a string that represents the path to create the folder. An object is returned that contains the path and other information about the newly created folder. An example of using this object is shown here.

$fso = new-object -ComObject scripting.filesystemobject

$fso.CreateFolder("C:\test1")

This command and its associated output are shown in the following image.

Image of command output

Method 3

GW, it is also possible to use native Windows PowerShell commands to create directories. There are actually two ways to do this in Windows PowerShell. The first way is to use the New-Item cmdlet. This technique is shown here.

New-Item -Path c:\test3 -ItemType directory

The command and the output from the command are shown here.

Image of command output

Compare the output from this command with the output from the previous .NET command. The output is identical because the New-Item cmdlet and the [system.io.directory]::CreateDirectory command return a DirectoryInfo object. It is possible to shorten the New-Item command a bit by leaving out the Path parameter name, and only supplying the path as a string with the ItemType. This revised command is shown here.

New-Item c:\test4 -ItemType directory

Some might complain that in the old-fashioned command interpreter, cmd, it was easier to create a directory because all they needed to type was md, and typing md is certainly easier than typing New-Item blah blah blah any day.

Method 4

The previous complaint leads to the fourth way to create a directory (folder) by using Windows PowerShell: the md function. The thing that is a bit confusing is that when you request Help for the md function, it returns Help from the New-Item cmdlet. That is not entirely accurate, because md uses the New-Item cmdlet, but it is not an alias for the New-Item cmdlet. The advantage of using the md function is that it already knows you are going to create a directory; therefore, you can leave off the ItemType parameter and its argument. Here is an example of using the md function.

md c:\test5

The command and its associated output are shown here.

Image of command output

You can see from the image above that the md function also returns a DirectoryInfo object. To me, the md function is absolutely the easiest way to create a new folder in Windows PowerShell. Is it the best way to create a new folder? Well, it all depends on your criteria for best. For a discussion of THAT topic, refer to my Reusing PowerShell Code—What is Best blog.
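If you are curious how md is wired up on your system, you can ask Windows PowerShell directly. The exact CommandType and Definition you see may vary between Windows PowerShell versions, so treat this as an exploratory sketch rather than a guaranteed result.

```powershell
# Show what kind of command md resolves to and how it is defined.
Get-Command md | Format-List CommandType, Definition
```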

GW, that is all there is to creating folders. Join me tomorrow for more Windows PowerShell coolness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy
