Channel: Hey, Scripting Guy! Blog

The Scripting Guys Midway Through TechEd 2012


Summary: Microsoft Scripting Guy, Ed Wilson, discusses day three at the Scripting Guys booth.

Microsoft Scripting Guy, Ed Wilson, is here. This will need to be a quick post, but I want to catch you up on the goings-on here at Microsoft TechEd 2012 in Orlando, Florida. The morning started off with our special guest, Microsoft PowerShell MVP, Jeffery Hicks. Here is Jeffery relaxing before the doors to the exhibit hall officially opened.

Photo of Jeffery Hicks

Jeffery, aka “Professor Script,” had tons of visitors who brought their toughest questions. People lined up and gathered around to hear his pearls of wisdom. As seen here, Jeffery quickly opened a Windows PowerShell console to illustrate his points. In fact, it seems as if he soon migrated into teaching mode.

Photo at TechEd

Following Jeffery’s amazing appearance at the booth, I had another autograph session at the O’Reilly booth for my Windows PowerShell 2.0 Best Practices book. People were lined up before I even got there. Here is a picture of the line as I was approaching the booth.

Photo at TechEd

Visitors to the Scripting Guys booth continued in their efforts to draw the perfect Dr. Scripto. One guy followed his drawing with an attempt at doing the “living Dr. Scripto” imitation. I am not sure how well he succeeded. Perhaps you can judge for yourself. Does he really look like Dr. Scripto?

Photo at TechEd

Windows PowerShell PM, Travis Jones, stopped by the Scripting Guys booth to talk to some of our visitors and to have his picture taken with the Scripting Guy and the Scripting Wife. Here is Travis.

Photo at TechEd

It is about time for me to head to the Windows PowerShell Best Practices Birds-of-a-Feather session. I am doing this session with Don Jones, and it will be awesome. So many people signed up for this session that a second session had to be added; that session will be held tomorrow with Jeffery Hicks.

This just in! Windows guru, Mark Minasi, will be at the Scripting Guys booth tomorrow at noon to sign autographs and to answer questions. If you are in the area, make sure you stop by to say, “Hi”. Mark is a super guy, and he is a lot of fun to talk to.

This also just in! Matthew Reynolds, a senior premier field engineer for Microsoft Services, will be at the Scripting Guys booth tomorrow morning at 10:30. He will be here for about an hour to meet, greet, and answer questions. His TechEd session, “How Many Coffees Can You Drink While Your PC Boots,” is today, Wednesday, June 13, at 5:00 PM. Matthew writes diagnostic scripts that Microsoft Services uses with customers, he trains Microsoft Premier Support customers in Windows PowerShell usage, and he was a technical reviewer for the book PowerShell in Action.

I invite you to follow me on Twitter or Facebook. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Use PowerShell to Add AutoCorrect Entries to Word


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to add AutoCorrect entries to Word.

Microsoft Scripting Guy, Ed Wilson, is here. Cool. Tonight is the Microsoft TechEd 2012 closing party in Orlando, Florida. This year the party is at Islands of Adventure, and the Scripting Wife has been “chomping at the bit” all week. The party at TechEd 2011 in Atlanta, Georgia was one of the most fun events I have ever attended. As a trained underwater photographer, I love watching fish anyway, and the aquarium in Atlanta is one of the best in the world. Add to that a Coca-Cola museum (I had never heard of such a thing), and you have a formula for success. The best thing about the party was not the venue (which was great), but the chance to meet and talk to so many scripters. We have every expectation that the party tonight will be at least as great.

Adding entries to the AutoCorrect feature in Microsoft Word

One of my favorite features in Microsoft Word is AutoCorrect. Although the AutoCorrect feature is seemingly intended to correct common spelling errors such as recieve (instead of receive), I routinely add new letter combinations to it, such as WPS for Windows PowerShell. This saves me a good deal of typing. Adding new entries is not too much of a hassle: I go to Options on the File menu, choose Proofing, and click the AutoCorrect Options button (as shown in the menu that follows).

Image of menu

The problem comes when I use multiple computers. I know about using Microsoft tools to migrate my Office profile, or to export my user customization settings, but this seems like using an eight-pound sledgehammer to drive finishing brads into crown molding: it will work, but it is a bit excessive. In the past, I ended up adding the custom AutoCorrect entries manually as I came across them in my typing. Now I use a Windows PowerShell script to add them for me.

The AddAutoCorrectEntries.ps1 script illustrates the technique of using Windows PowerShell to add a custom AutoCorrect entry to Microsoft Word. As with all scripts that use Microsoft Word automation, the script begins with creating an instance of the Word.Application COM object. To do this, use the New-Object cmdlet and the ComObject parameter. I store the returning Word.Application COM object in a variable named $word. This command is shown here.

$word = New-Object -ComObject word.application

Next, I set the Visible property of the Word.Application object to $false. This will prevent the Word application from appearing when Windows PowerShell creates the object. There is no reason to make Word visible, and from an automation standpoint, it is actually desirable to keep it from becoming visible. This line of code is shown here.

$word.visible = $false

Next I need to retrieve the entries collection from the AutoCorrect object. I store the returning collection in the $entries variable as shown here.

$entries = $word.AutoCorrect.entries

The entries collection contains a method named Add. When adding an entry to the AutoCorrect entries collection, I first supply the item to correct, and then the replacement text. Therefore, the command shown here detects gps and automatically corrects it to Get-Process. I pipe the entry object that returns from the command to Out-Null because I have no further need for it.

$entries.add("gps","Get-Process") | out-null

That is basically all there is to using Windows PowerShell to add an entry to the AutoCorrect entries collection. Because I do not open the Microsoft Word application, I need to specifically exit the Word process. I do this by using the Quit method from the Word.Application object that is stored in the $word variable. I then assign $null to the $word variable and call garbage collection. This portion of the script is shown here.

$word.Quit()

$word = $null

[gc]::collect()

[gc]::WaitForPendingFinalizers()

One thing to keep in mind is that when you close Microsoft Word and open it later, the AutoCorrect entries remain. In other words, these are permanent entries—that is, permanent until you delete them. The complete AddAutoCorrectEntries.ps1 script is shown here.

AddAutoCorrectEntries.ps1

$word = New-Object -ComObject word.application

$word.visible = $false

$entries = $word.AutoCorrect.entries

$entries.add("gps","Get-Process") | out-null

$word.Quit()

$word = $null

[gc]::collect()

[gc]::WaitForPendingFinalizers()
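If you ever need to back out an entry, each item in the entries collection also exposes a Delete method (the script later in this series uses it the same way). A minimal sketch, assuming Word is installed and a gps entry was previously added:

```powershell
# Sketch: remove a single AutoCorrect entry (assumes Word is installed
# and that a "gps" entry exists)
$word = New-Object -ComObject word.application
$word.visible = $false
$word.AutoCorrect.entries |
  Where-Object { $_.name -eq "gps" } |
  ForEach-Object { $_.delete() }
$word.Quit()
$word = $null
[gc]::collect()
[gc]::WaitForPendingFinalizers()
```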

Microsoft Word Automation Week will continue tomorrow when I will talk about reading a text file to add several AutoCorrect entries at one time. In addition, I will show how to read that same text file to remove the AutoCorrect entries. By using this technique, it becomes possible to customize Microsoft Word AutoCorrect entries for a single session—but hey, that is tomorrow's blog. See ya then.  

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

And Then There Were No Days Left at TechEd 2012


Summary: It is Thursday at Microsoft TechEd 2012 in Orlando, and Microsoft Scripting Guy, Ed Wilson, lets you in on all the happenings.

Microsoft Scripting Guy, Ed Wilson, is here. Today is the last day of Microsoft TechEd 2012 in Orlando, Florida. The morning began with the Scripting Wife’s invitation to breakfast with the very first meeting of the PowerShell Chicks virtual user group. Their slogan is “Chicks were born in shells.” In their first meeting, they all told stories about how, from their disparate backgrounds, they got into Windows PowerShell and how it has changed the way they work. The initial members live all over the country, and they are planning monthly online meetings. If you would like to become involved with PowerShell Chicks, contact JuneB@microsoft.com. Their Twitter hashtag is #PowerShellChicks. You are also welcome to follow @JuneB_Get_Help because she will tweet meeting information. Here, June B. and Gaby K. discuss new features in Server Manager.

Photo at TechEd

One thing that is really cool is that with June hanging around the Scripting Guys booth, she got to meet the two winners of the 2012 Scripting Games. As a matter of fact, she asked them each to write an about_* Help topic for the official Windows PowerShell documentation! WOOHOO! How cool is that?

Well, following the PowerShell Chicks breakfast, everyone gathered around the Scripting Guys and the Server & Cloud Division Information Experience booth for a group photo. Here the away team gets their PowerShell on.

Photo at TechEd

Following our group picture, Microsoft MVP, Sean Kearney (aka BatchMan), showed up at the booth. He was looking for evil productivity-sucking tasks to eliminate with simple Windows PowerShell one-liners.

Photo of Sean Kearney

Today is going to be a really busy day at the Scripting Guys booth. The booth area opens at 10:30 AM, and we will be joined by Microsoft PFE, Matthew Reynolds, in addition to Microsoft MVP, Clint Huffman. At noon, Mark Minasi will be joining us to sign autographs and to answer questions. Then I have my second Birds-of-a-Feather session. This time, I am cohosting with Microsoft MVP, Jeffery Hicks. If you are at TechEd, we will be in room S-319. Yesterday’s session was sold out, with people standing in the hallway to listen in, so you will want to get there early to get a good seat.

Finally, TechEd 2012 concludes today with the conference party. The Scripting Wife and I will both be there, so make sure you come and hang out. It will be awesome.

Photo at TechEd

I invite you to follow me on Twitter or Facebook. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Add Bulk AutoCorrect Entries to Word


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell and a CSV file to add bulk AutoCorrect entries to Microsoft Word.

Microsoft Scripting Guy, Ed Wilson, is here. Microsoft TechEd 2012 in Orlando, Florida is over—well, basically over. Luckily, the Scripting Wife and I were invited by one of the Windows PowerShell program managers to attend a post-event training session about Windows PowerShell 3.0. “Cool,” the Scripting Wife said, “For that, we will skip going to see a six-foot mouse (after all, it is still a rodent).” After the Windows PowerShell 3.0 session, we will hop in the car and head to Jacksonville, Florida for the day-long IT Pro Camp event on Saturday. This is one of those “you don’t want to miss it” events. There are still a few tickets available, so if you are anywhere near the northeastern portion of Florida, you should check it out.

Note   Today’s blog is basically Part 2 about adding AutoCorrect entries to Microsoft Word. For Part 1, see yesterday’s Hey, Scripting Guy! blog, Use PowerShell to Add AutoCorrect Entries to Word.

Adding bulk AutoCorrect entries to Word

There is not much difference between adding bulk AutoCorrect entries and adding a single entry to the AutoCorrect feature. The same “overhead” associated with creating and releasing the Word.Application object applies. The big difference is that entering a single entry from the command line, or hardcoding it into the script, is a workable solution; with more than two or three entries, such a technique is no longer viable. Therefore, storage of the bulk entries is a paramount design consideration for a script of this type. Because I might want to remove my bulk entries, I decided to write a single function to add or to delete bulk AutoCorrect entries.

Use a CSV file for storage

Perhaps the easiest way to add bulk AutoCorrect entries to Microsoft Word is to use a comma-separated value (CSV) file to store the entries. Of course, XML is also a possibility, as would be an Access database or a Microsoft Excel spreadsheet. But like I said, I am looking for the easiest storage option. There is nothing wrong with a CSV file, and Windows PowerShell makes working with CSV files very easy. To read a CSV file, use the Import-CSV cmdlet and specify the path to the CSV file. For my CSV file, I used the same column headings that Microsoft Word uses in the graphical interface: replace is the text to replace, and with is the replacement text. Part of the real power of this methodology is that I can substitute a few letters, such as lol, with a phrase such as laugh-out-loud. This can greatly reduce your typing requirements. My sample CSV file is shown here.

Image of file
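Because the image may not be available here, a CSV file of the shape the script expects would look like this (the column headings replace and with come from the function's code; the entries themselves are illustrative):

```
replace,with
WPS,Windows PowerShell
lol,laugh-out-loud
gps,Get-Process
```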

The beginning portion of my Set-AutoCorrectEntries function defines parameters for the path to the CSV file, and switched parameters that determine whether to add or to remove the entries from Microsoft Word. This portion of the function is shown here.

Function Set-AutoCorrectEntries

{

 Param(

  [string]$path,

  [switch]$add,

  [switch]$remove)

 $entry = Import-Csv -Path $path

Use the Add method to add the entries

Next, I create the Word.Application object, do not make it visible, and obtain the AutoCorrect.entries object. This portion of the script is shown here.

$word = New-Object -ComObject word.application

 $word.visible = $false

 $entries = $word.AutoCorrect.entries

Now I determine whether the function is to add or to delete the entries. If it is to add them, I use the ForEach language statement to walk through the collection of entries in the $entry variable (the result of importing the CSV file). I use Try / Catch to provide basic error handling. In the Try scriptblock, I attempt to add the item to the entries collection from the AutoCorrect object. I use the Catch scriptblock to catch any errors that arise and display the name of the replacement item that fails to add. This portion of the script is shown here.

if($add)

  {

   Foreach($e in $entry)

    {

     Try

       { $entries.add($e.replace, $e.with) | out-null }

     Catch [system.exception]

       { "unable to add $($e.replace)" }

     } #end foreach

   }#end if add

Use the Delete method to remove entries from AutoCorrect

It takes a while to delete all the entries from the AutoCorrect entries object in Word. Therefore, I decide to use the Write-Progress cmdlet to produce a progress bar that indicates the status of the delete operation. I only add this progress bar to the Remove portion of the function because the Add portion completes quickly. First, I need to initialize the $j variable (used as a counter) to ensure that the correct percentage completion displays. Next, I use the ForEach language statement to walk through the entries stored in the $entry variable. I next increment the $j variable, and call the Write-Progress cmdlet. When this portion of the script runs, the dialog box shown here displays (if the script runs from within the Windows PowerShell ISE).

Image of dialog box

The portion of the script that begins the Remove operation and displays the progress bar is shown here.

if($remove)

   { $j = 0

    Foreach($e in $entry)

     { $j = $j+1

      Write-Progress -Activity "deleting entries" -Status "deleting $($e.replace)" `

      -percentcomplete ($j/$entry.count*100)

Deleting entries from the AutoCorrect entries collection requires the Delete method from the entries collection. To call this method, it is necessary to match an entry from the $entry collection (created by reading the CSV file) with an entry in the entries collection. This requires walking through the CSV file contents and the entries in the entries collection. When a match occurs, I call the Delete method. This portion of the script is shown here.

    foreach($i in $entries)

       {

        if($i.name -eq $e.replace)

         { $i.delete() } }

     } #end foreach entry

   } #end if remove

The last thing to do is to close Word and clean up. This code appears here.

$word.Quit()

 $word = $null

 [gc]::collect()

 [gc]::WaitForPendingFinalizers()

} #End function Set-AutoCorrectEntries

The complete AddRemoveAutoCorrectEntries.ps1 script appears here.

AddRemoveAutoCorrectEntries.ps1

Function Set-AutoCorrectEntries

{

 Param(

  [string]$path,

  [switch]$add,

  [switch]$remove)

 $entry = Import-Csv -Path $path

 $word = New-Object -ComObject word.application

 $word.visible = $false

 $entries = $word.AutoCorrect.entries

 if($add)

  {

   Foreach($e in $entry)

    {

     Try

       { $entries.add($e.replace, $e.with) | out-null }

     Catch [system.exception]

       { "unable to add $($e.replace)" }

     } #end foreach

   }#end if add

  if($remove)

   { $j = 0

    Foreach($e in $entry)

     { $j = $j+1

      Write-Progress -Activity "deleting entries" -Status "deleting $($e.replace)" `

      -percentcomplete ($j/$entry.count*100)

      foreach($i in $entries)

       {

        if($i.name -eq $e.replace)

         { $i.delete() } }

     } #end foreach entry

   } #end if remove

 $word.Quit()

 $word = $null

 [gc]::collect()

 [gc]::WaitForPendingFinalizers()

} #End function Set-AutoCorrectEntries
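To use the function, dot-source the script so the function loads, and then call it with the path to your CSV file plus the appropriate switch. A sketch (the file paths here are hypothetical):

```powershell
# Load the function into the current session (hypothetical paths)
. C:\scripts\AddRemoveAutoCorrectEntries.ps1

# Add all entries from the CSV file
Set-AutoCorrectEntries -path C:\data\autocorrect.csv -add

# Later, remove the same entries
Set-AutoCorrectEntries -path C:\data\autocorrect.csv -remove
```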

One thing to keep in mind: entries added to the AutoCorrect entries collection remain after Microsoft Word closes. But if Microsoft Word is already open while entries are being added or removed, the AutoCorrect entries do not update until Word is closed and reopened.

Well, that is about all there is to using a CSV file to add or to delete entries for the AutoCorrect feature in Microsoft Word. This also concludes Microsoft Word Automation Week. Join me tomorrow for more cool Windows PowerShell stuff as I create a function to copy stuff from one Windows PowerShell ISE tab to a new one. This function is so cool that I added it to my Windows PowerShell ISE profile.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Weekend Scripter: Copy Text from One Tab in the PowerShell ISE to the Next


Summary: Microsoft Scripting Guy, Ed Wilson, creates a function to copy script text from one Windows PowerShell ISE tab to a new one.

Microsoft Scripting Guy, Ed Wilson, is here. Well, the day is finally here. It is time for the Jacksonville, Florida IT Pro Camp. The speaker dinner last night was great, and the Scripting Wife and I made new friends. Jacksonville, Florida is a great town, although it has been more than 20 years since I lived here. Interestingly enough, my favorite independent bookstore still exists, and in fact, it is thriving. The Scripting Wife and I spent several hours there yesterday just before the speaker dinner.

Creating a function to copy a script from one tab to another

One of the cool things about the Windows PowerShell ISE is the object model that permits easy modification and extension of functionality. When I am working on a script, I often copy a portion of code to a new tab in the Windows PowerShell ISE so I can isolate and fix a particular issue. Another reason I find myself copying code from one tab to a new tab in the Windows PowerShell ISE is because I am extending the functionality of a script and I do not want to mess up my original script. (Of course, a real source control program solves this particular problem.)

Now, to be honest, it is not very difficult to copy code from one tab to another tab. I just use Ctrl+A to select all the code, Ctrl+C to copy it, and Ctrl+V to paste it into the new script tab after I have used Ctrl+N to create one. But let’s see…that is at least eight keystrokes, and I have to think about it and remember four different keystroke combinations. If I want to copy a selection of script instead of the entire script, it is even more work. The Copy-ScriptToNewTab function makes this a bit easier by copying an entire script from the current script pane to a new script pane (tab).

A switched parameter also permits copying only the selected text to the new script pane (tab). I add an alias (cs) to make it easier to use this function. To ensure that the function is always available, I add it to my Windows PowerShell ISE profile. I also add the cs alias into my Windows PowerShell ISE profile. I used the Add-HeaderToScript function from my Windows PowerShell ISE profile to add a header to the script file, and also the Add-Help function to add comment-based Help to the Copy-ScriptToNewTab function. The parameter portion of the script is simple, and it is shown here.

Param([switch]$selection)

Next I check to see if the Selection switched parameter exists. If it does, I first create a new tab in the Windows PowerShell ISE. Then I set the text for the new tab to equal the selected text. Here is the code that accomplishes these two tasks.

if($selection)

   { $newtab = $psISE.CurrentPowerShellTab.Files.Add()

     $newtab.Editor.Text = $psise.CurrentFile.Editor.SelectedText }

If the Selection switched parameter does not exist, I copy the entire script text. Note that there is only a single word of difference between the two commands. The command to add a new tab to the Windows PowerShell ISE calls the Add method from the Files object on the CurrentPowerShellTab. If I want only selected text, I use the SelectedText property; if I want the entire script text from the CurrentFile.Editor object, I use the Text property. This portion of the function is shown here.

ELSE

   { $newtab = $psISE.CurrentPowerShellTab.Files.Add()

     $newtab.Editor.Text = $psISE.CurrentFile.Editor.text }

That is it really. There are only two basic lines. The remainder of the function is a little bit of structure, and of course the comment-based Help. I do not really need the comment-based Help for myself—it is a pretty basic function. But I thought I would add it because I am sharing the function. Besides, with the Add-Help function, it takes me less than a minute to add comment-based Help. Use of the function is shown in the following image.

Image of function

The complete function is shown here.

Copy-ScriptToNewTab

Function Copy-ScriptToNewTab

{

  <#

   .Synopsis

    This does that

   .Description

    This function does

   .Example

    Example-

    Example- accomplishes

   .Parameter

    The parameter

   .Notes

    NAME:  Example-

    AUTHOR: ed wilson, msft

    LASTEDIT: 06/10/2012 10:00:40

    KEYWORDS:

    HSG:

   .Link

     Http://www.ScriptingGuys.com

 #Requires -Version 2.0

 #>

 Param([switch]$selection)

  if($selection)

   { $newtab = $psISE.CurrentPowerShellTab.Files.Add()

     $newtab.Editor.Text = $psise.CurrentFile.Editor.SelectedText }

  ELSE

   { $newtab = $psISE.CurrentPowerShellTab.Files.Add()

     $newtab.Editor.Text = $psISE.CurrentFile.Editor.text }

} #end function Copy-ScriptToNewTab

Like I said, to make using the function easier, I added it to my Windows PowerShell ISE profile, and I created an alias to it. That’s about it. Join me tomorrow for more Windows PowerShell cool stuff. And if you happen to be in Jacksonville at the IT Pro Camp, come up and say, “Hi.” We hope to see you.
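Wiring this up might look like the following sketch. In the Windows PowerShell ISE, the $profile variable points at the ISE-specific profile script; paste the function definition there and then create the cs alias mentioned above:

```powershell
# In the ISE profile (the file that $profile points to in the ISE host),
# after the Copy-ScriptToNewTab function definition:
Set-Alias -Name cs -Value Copy-ScriptToNewTab

# Usage: copy the whole current script to a new tab
cs

# Usage: copy only the selected text to a new tab
cs -selection
```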

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Automatically Indent Your PowerShell Code in the ISE


Summary: Microsoft Scripting Guy, Ed Wilson, shows how to create a function to indent your code in the Windows PowerShell ISE without tabbing.

Microsoft Scripting Guy, Ed Wilson, is here. One of the cool things about a road trip is that it gives me a chance to catch up on stuff I want to do. Luckily, the Scripting Wife enjoys driving, and I enjoy writing Windows PowerShell code while she is driving, so it works great (way better than me trying to drive from the passenger’s seat).

One of the things I have wanted to do for a long time is write a function that will automatically indent selected code inside the Windows PowerShell ISE. Of course, I could highlight the code and press Tab. The problem with that is that I get an invisible tab character (“`t”) in my code. For most people, this is a non-issue, but some of the applications that I use (for writing books, for example) do not like tabs in code blocks. Some script editors have an option to use spaces instead of tabs, but the Windows PowerShell ISE does not have this feature, until now. And while I was at it, I decided to do something that even other script editors don’t do: I made the indent variable. I can indent one section of code 3 spaces, indent another section 6 spaces later, and indent yet another section 9 spaces, or whatever I want to do.

The Move-Text function

The Move-Text function accepts a single input parameter; that is, an integer for how many spaces to indent the selected code. I set a default value of 1, mostly to avoid errors, but also because most of the time, I only indent my code a single space. So this saves me time. If you would like to make the default 2 or 3, you can edit this line:

Param([int]$space = 1)

The next thing I need to do is to create the space that I will use for my “tab” stop. To do this, I multiply a blank space by the number defined in the $space variable. This line of code is shown here.

$tab = " " * $space

Next, I pick up the selected text and assign it to the $text variable as shown here.

$text = $psISE.CurrentFile.editor.selectedText

Now I use the ForEach statement to walk through the selected text. I need to split the selected text on the Newline character so that I have lines instead of characters of selected text. This is shown here.

foreach ($l in $text -split [environment]::newline)

Inside the script block, I create new text and assign it to the $newtext variable. I take the spaces stored in the $tab variable and add the line of code to it. I then append a Newline character to the selected code. This is shown here.

{

   $newText += "{0}{1}" -f ($tab + $l),[environment]::newline

  }

Finally, I use the InsertText method to insert the newly created text with the indented code into the script editor. This code accomplishes that task.

$psISE.CurrentFile.Editor.InsertText($newText)

The complete Move-Text function is shown here.

Move-Text Function

function move-text

{

  <#

   .Synopsis

    This function will indent text in the ISE a specific number

   .Description

    This function will indent selected text in the PowerShell ISE. These are

    real spaces, not tabs. Therefore this is appropriate for situations where

    an actual tab "`t" will not work.

   .Example

    move-text -space 5

    moves selected text five spaces

   .Parameter spaces

    The number of spaces to indent the selected text. Note this number cannot

    be a negative number, and this function does not "unindent" the selected text.

   .Notes

    NAME:  Move-text

    AUTHOR: ed wilson, msft

    LASTEDIT: 06/11/2012 17:16:29

    KEYWORDS: Windows PowerShell ISE, Scripting Techniques

    HSG:

   .Link

     Http://www.ScriptingGuys.com

 #Requires -Version 2.0

 #>

 Param([int]$space=1)

 $tab = " " * $space

 $text = $psISE.CurrentFile.editor.selectedText

foreach ($l in $text -split [environment]::newline)

  {

   $newText += "{0}{1}" -f ($tab + $l),[environment]::newline

  }

   $psISE.CurrentFile.Editor.InsertText($newText)

} #end function move-text

The following image illustrates calling the Move-Text function, and it shows indenting the selected text.

Image of command output

Well, that is about it. I added the function to my Windows PowerShell ISE profile module. Now I can easily indent my script without the requirement of incorporating tabs into my code.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

The Top Ten PowerShell Best Practices for IT Pros


Summary: Microsoft Scripting Guy, Ed Wilson, summarizes the Windows PowerShell Best Practices Talk from Microsoft TechEd 2012.

Microsoft Scripting Guy, Ed Wilson, is here. Wow, what an exciting week the Scripting Wife and I had last week. We began with Microsoft TechEd 2012 in Orlando, and we concluded the week with the IT Pro Camp in Jacksonville, Florida. During that 7-day period, we talked to literally thousands of people who are actively using Windows PowerShell, or are in the process of learning Windows PowerShell. I presented two talks at TechEd about Windows PowerShell Best Practices; one with Microsoft Windows PowerShell MVP, Don Jones, and one with Microsoft Windows PowerShell MVP, Jeffery Hicks. I thought that today, I would provide a summary of those two talks.

1. Read all of the Help. Windows PowerShell has very sophisticated Help for cmdlets and for concepts. By default, the Get-Help cmdlet does not return all of the available information. Although this works well for many situations, you must use the -Full switched parameter to see information about which parameters accept pipelined input or wildcards, or to find information about default parameters. The following command illustrates this technique.

Get-Help Get-Process -Full | more

2. In a script, always use full parameter names. Whether at the Windows PowerShell console or in the Windows PowerShell ISE, you only need to type enough of a parameter name to disambiguate it from the other available parameters. Even so, it is a best practice to use the complete parameter name. This future-proofs your script against conflicting parameters that might be introduced in new versions of Windows PowerShell.
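For example, partial parameter names work at the console, but a script should spell everything out (a sketch; the process name is illustrative):

```powershell
# Works at the console, but relies on a partial parameter name:
Get-Process -n explorer

# Best practice in a script: the full parameter name
Get-Process -Name explorer
```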

3. In a script, never rely on positional parameters. Windows PowerShell cmdlets often define position numbers for parameters. An example of this is Copy-Item, which uses -Path in position 1 and -Destination in position 2. Relying on position makes commands very difficult to read, and worse, it makes it difficult to understand what the command actually accomplishes. For example, with Copy-Item, both parameters accept a path string, and therefore the syntax is something that must be memorized.
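Compare the two forms of the Copy-Item case just described (the file paths here are hypothetical):

```powershell
# Positional: which path is the source and which is the destination?
Copy-Item C:\data\report.txt C:\backup\report.txt

# Named parameters: self-documenting
Copy-Item -Path C:\data\report.txt -Destination C:\backup\report.txt
```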

4. Do not use Write-Host. One of the great features of Windows PowerShell is that it is object oriented. This means that cmdlets return objects; Get-Process, for example, returns instances of the System.Diagnostics.Process object. The great thing about objects is that they have lots of methods and properties, and in Windows PowerShell, these objects flow along the pipeline. Using Write-Host interrupts the pipeline and destroys the object. There are times to use Write-Host, such as producing status messages in different colors, but do not use Write-Host simply to write textual output. Instead, display the contents of variables directly, and write your strings directly.
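A quick sketch of the difference:

```powershell
# This paints text on the console host; the process objects are gone,
# and nothing useful travels further down the pipeline:
Get-Process | ForEach-Object { Write-Host $_.Name }

# This emits objects that remain usable by the next pipeline stage:
Get-Process | Select-Object Name, Id
```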

5. Save Format* cmdlets until the end of the command line. Similar to the previous best practice, the Format* cmdlets (such as Format-Table, Format-List, and Format-Wide) change the object (for example, a System.Diagnostics.Process object) into a series of formatting objects. At that point, you are done: you can do nothing else with your pipeline.

6. Do not use Return. Functions automatically return their output to the calling process. In fact, it is best to write your functions so that they emit objects; in this way, users can use your functions just like Windows PowerShell cmdlets.
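A sketch of the idea (the function name here is hypothetical): anything a function emits and does not capture flows to the caller automatically, so no Return statement is needed.

```powershell
function Get-BusyProcess {
    param([int]$MinHandles = 500)
    # No Return statement: the objects that Where-Object passes through
    # are emitted to the caller automatically
    Get-Process | Where-Object { $_.Handles -ge $MinHandles }
}

# Because the function returns objects, it composes like any cmdlet
Get-BusyProcess | Sort-Object Handles -Descending | Select-Object -First 3
```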

7. Filter on the left. It is more efficient to filter data as close to its source as possible. For example, you do not want to return the entire contents of the system event log across the network to your workstation, and then filter for a specific event ID. Instead, you want to filter the system event log on the server, and then return only the matching data.

The following command filters the system event log on a computer named RemoteServer for events with the event ID of 1000. This illustrates filtering on the left.

Get-EventLog -LogName system -InstanceId 1000 –computername RemoteServer | sort timewritten

The following command illustrates filtering on the right by using the Where-Object cmdlet. This command returns all of the events from the system event log across the wire, and only then filters them. This is much less efficient.

Get-EventLog -LogName system –computername remoteserver | where { $_.instanceID -eq 1000 } | sort timewritten

8. Pipe to the right. When writing code in the Windows PowerShell ISE, you want to format the code so that it is easy to read. This means avoiding really long lines of code. The best way to break your code into readable chunks is to end each line with the pipe character on the right-hand side. The following code illustrates this.

Get-EventLog -LogName system -InstanceId 1000 –computername RemoteServer |
    Sort-Object timewritten

9. Use –WhatIf. The –WhatIf switch parameter is a great way to see what a command will do before you actually run it. You should always use this switch when a command will change the system state. For example, the following command informs me that it would stop every process on my system.

Get-Process | Stop-Process -WhatIf

10. Steal from the best, write the rest. Many scripts have already been written for Windows PowerShell. The Scripting Guys Script Repository has thousands of scripts for many different topics. There is absolutely no reason to rewrite a perfectly good script that is already written. In addition, you might find a script that does nearly what you want, and all you need to do is make a few minor changes.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Find and Change Read-Only Files


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to find read-only Microsoft Excel files and to change them to read-write.

Hey, Scripting Guy! Question Hey, Scripting Guy! I have a problem that hopefully you can resolve. We recently restored a bunch of files from tape, and for some reason they all became marked as read-only. Our Help Desk has been inundated with calls from people who open a Microsoft Excel spreadsheet, make a bunch of changes to it, and then are prompted to save it with a different name. This is a HUGE problem because we have hundreds of spreadsheets that are saved in different places and used on a daily basis by dozens of different departments. For example, we have one spreadsheet with lots of special formulas that the production team updates at night, and the plant manager uses each morning to calculate our cost-per-unit for our plant report to corporate headquarters. His spreadsheet uses input from all the departments to make the calculations. If one spreadsheet changes its name or its path, the whole thing breaks. I simply must figure out a way to find all the Microsoft Excel spreadsheets and change them from being read-only. Otherwise, it might cost me my job.

—RS

Hey, Scripting Guy! Answer Hello RS,

Microsoft Scripting Guy, Ed Wilson, is here. Well, it always seems to take longer to get back into the swing of things after being out of the office. If I spend three days away, it seems to take me five days to get back on track. I did not even look at any scripter@microsoft.com email this past week while I was at Microsoft TechEd in Orlando. So now, I am trying to catch up on those messages. RS, I am sorry you are having such a terrible time. I can certainly help you get things back online, but for a long-term solution, you should really look at SharePoint because it is designed to do what you have sort of cobbled together. As you have seen, linking spreadsheets like that can be a bit fragile.

The first thing you need to do is to find all of the Microsoft Excel files on your system. To do this, you will use the Get-ChildItem cmdlet with the Include parameter, which allows you to search for all XLS and XLSX file types. You will also need the Recurse switch parameter to search through subfolders. The following command uses the GCI alias for the Get-ChildItem cmdlet.

gci -Include *.xls, *.xlsx -Recurse

If you change your working directory to the location that contains the files, you will not need the Path parameter. This technique is shown here.

Image of command output

Because you more than likely have multiple directories to search, you can supply them to the cmdlet as an array. To do this, use the -Path parameter as follows.

gci -Include *.xls, *.xlsx -Recurse -Path c:\test, c:\fso

The command and the output associated with the command are shown here.

Image of command output

To determine if a file is read-only, you check the IsReadOnly property. The following command finds all of the Microsoft Excel documents in multiple folders, returns the complete path, and tells whether the file is read-only.

gci -Include *.xls, *.xlsx -Recurse -Path c:\test, c:\fso | select fullname,isreadonly

The command and the output associated with the command are shown here.

Image of command output

You now have two choices. First, you can determine if the file is read-only. If it is, you can set it to read-write. Or, the easier way, you can simply make all of the files in the folder read-write. The net result is the same. Obviously, the second option is the easier code to write. For the remaining examples, I will only use the C:\test directory. In the following example, I use the % alias for the Foreach-Object cmdlet. I use the If statement to determine if the file is read-only. If it is, I change it to read-write by setting the IsReadOnly property to $false. The command is shown here.

gci -Include *.xls, *.xlsx -Recurse | % { if($_.IsReadOnly){$_.IsReadOnly= $false} }

The command and the output associated with the command are shown here.

Image of command output

It is much easier to simply change all the Microsoft Excel documents from read-only to read-write. Because I am in my test folder, I first set everything to read-only by using the following command.

gci -Include *.xls, *.xlsx -Recurse | % { $_.isreadonly = $true }

I then change them back by setting the IsReadOnly property on each file to $false as shown here.

gci -Include *.xls, *.xlsx -Recurse | % { $_.isreadonly = $false }

The following image shows the commands and the associated output.

Image of command output

RS, that is all there is to changing read-only files with Windows PowerShell. Join me tomorrow when I will talk about more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


The Easy Way to Monitor for an IP Address by Using PowerShell


Summary: The Microsoft Scripting Guy, Ed Wilson, shows how to use Windows PowerShell to monitor for acquiring an IP address.

Hey, Scripting Guy! Question Hey, Scripting Guy! We have a problem. It seems that when people with laptops come into the office, it takes forever for them to obtain network access. On the taskbar, a blue circle of death spins in an infinite loop. When I hover over the circle, it says “identifying network.” There is nothing to identify—it is our corporate network. I recently figured out that the circle is lying to me. It is not really trying to identify the network; rather, it is waiting on an Internet Protocol (IP) address. I know you can use a ping command to ping forever, but I can never remember the syntax. In addition, it does not really tell me what I want to know. What I really want to know is whether the computer has obtained an IP address. Can I use Windows PowerShell to do this?

—MH

Hey, Scripting Guy! Answer Hello MH,

Microsoft Scripting Guy, Ed Wilson, is here. Often when one thinks about monitoring, one turns one's attention to Windows Management Instrumentation (WMI) events. In fact, I recently published Insider’s Guide to Using WMI Events and PowerShell, which lists Hey, Scripting Guy! resources for working with this powerful and cool technology. But at times, such an approach is a bit like using a steamroller to make hamburger patties—it might work, but it is not necessarily the easiest way to do things.

MH, one easy way to monitor for acquisition of an Internet Protocol (IP) address is to use the range operator and pair it with the ForEach-Object cmdlet. In fact, this technique is one of my top ten favorite Windows PowerShell tricks because it is so flexible and so powerful. My approach here is to do something really easy, really quick, and with minimal typing. My approach is not efficient, elegant, or even “correct” (as far as Windows PowerShell purists go). I would venture there are even easier ways to do this. But the advantage here is that the command is easy to understand, easy to remember, and easy to type.

The command that follows begins by using the range operator to create an array of numbers from 1 through 500. These numbers pass down the pipeline one at a time. The ForEach-Object cmdlet calls the ipconfig command once for each number. The results of ipconfig pipe to the Select-String cmdlet, which displays only the lines of output that contain the letters ipv4. By default, there is no alias for the Select-String cmdlet, but remember that by using tab expansion, you can greatly reduce your typing load. The following command illustrates how to use Select-String to retrieve only the line of text containing ipv4.

PS C:\> ipconfig | Select-String ipv4

   IPv4 Address. . . . . . . . . . . : 192.168.0.54

OK MH, so I can now find my Internet Protocol (IP) address from ipconfig by using the Select-String cmdlet. The next thing I need to do is wait a couple of seconds and clear the Windows PowerShell console host. To do this, I use the Start-Sleep cmdlet (sleep is an alias) and cls (an alias for the Clear-Host function). The complete command is shown here.

1..500 | % {ipconfig | Select-String ipv4 ; sleep 2; cls }

When I run the command, for the first two seconds it displays both the command and the output of the command as shown here.

Image of command output

After the first two seconds of run time, the Windows PowerShell console clears, and only the IPv4 Address displays as shown here.

Image of command output

MH, that is all there is to using Windows PowerShell to monitor for changes in the IP address. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Describe Windows PowerShell to Four Types of Users


Summary: Microsoft Scripting Guy, Ed Wilson, describes Windows PowerShell to four types of users—everything from IT Pros to their moms.

Hey, Scripting Guy! Question Hey, Scripting Guy! What is Windows PowerShell?

—CL

Hey, Scripting Guy! Answer Hello CL,

Microsoft Scripting Guy, Ed Wilson, is here. Last week, at Microsoft TechEd 2012 in Orlando, Florida, the Scripting Wife and I talked to literally thousands of people who swung by the Scripting Guys booth, met with us for our Scripting Breakfast, or hung out with us in the evenings at various events and functions. During all that time, everyone was super enthusiastic about Windows PowerShell—so it is easy to forget that not everyone has gotten the word. In fact, at the Jacksonville IT Pro Camp the Saturday following TechEd 2012, when I asked the jam-packed room how many people were using Windows PowerShell on a regular basis, only three people raised their hands. One was the Scripting Wife, and the other two were Stephanie Peters (Microsoft PFE and Windows PowerShell guru), and Jason Hofferle (Windows PowerShell guru and active community lead), who were there to present. I guess I live a sheltered life amongst the Windows PowerShell community, protected from a world without the blue and white prompt.

Note   By the way, I sort of feel like this is my “Yes, Virginia” moment. But what I would really like to see are your answers to this question. Post a comment to the blog and share with others how you describe Windows PowerShell.

When you use Windows PowerShell on a daily basis, knowing how to describe it to someone who has never used it is a bit hard. To an extent, it depends on the background of the person who is asking the question. Of course, one of the confusing things about Windows PowerShell is the “it’s a shell—no, it’s a scripting language” type of argument. Sort of like the peanut butter/chocolate debate.

Describe Windows PowerShell for a Windows admin

I like to tell Windows administrators who have not used Windows PowerShell that it combines the ease of use of the command prompt with the power and flexibility of VBScript. But it is not an old-fashioned command prompt, nor is it a complicated scripting language.

Describe Windows PowerShell for a *nix admin

I like to tell my *nix friends that Windows PowerShell combines many of the features of a Bash or a Korn shell, but with one important difference: Windows PowerShell passes objects. This means that to access output, one only needs to use dotted notation to retrieve a specific property from the object. Whereas *nix admins need to be really good at using regular expressions because they must parse returned string data, Windows PowerShell users need to become good at using Get-Member to discover the properties that contain the information they want.
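A small sketch of that workflow: discover the properties with Get-Member, and then reach into one with dotted notation instead of parsing text.

```powershell
# Discover which properties the returned objects carry
Get-Process | Get-Member -MemberType Property

# Retrieve a specific property with dotted notation; no text parsing required
(Get-Process -Id $PID).StartTime
```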

Describe Windows PowerShell for a normal user

I like to tell normal Windows users that Windows PowerShell is an automation tool. It allows me to easily change many things at one time. In addition, I can keep a record of when changes were made, what those changes were, and I can even play those changes again and again. By using Windows PowerShell, I can stop multiple processes with a single command, increase the maximum number of Internet Explorer downloads, or change my screensaver timeout value. Of course, I can perform all of those tasks via the GUI, but by using Windows PowerShell, I can do all of them at once—and even more.
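As one hedged example of the kind of change a normal user might script, the screensaver timeout is stored in the registry under HKCU:\Control Panel\Desktop (the path and value name here reflect a typical Windows 7 system and are my assumption); the –WhatIf switch previews the change without making it:

```powershell
# Preview setting the screensaver timeout to 600 seconds (10 minutes)
# without actually changing anything
Set-ItemProperty -Path 'HKCU:\Control Panel\Desktop' `
                 -Name ScreenSaveTimeOut -Value 600 -WhatIf
```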

Describe Windows PowerShell to your mom

I don’t know about your mom, but the Scripting Mom is not the most sophisticated computer user on planet Earth. She uses her Windows 7 Home edition computer to read email, to use Facebook with her friends, and to play computer games. That is about it. So how do I describe Windows PowerShell to her? Well so far, I have basically ducked the question. But she does read my blog on a daily basis (in fact, she has it set as her home page)—so I might end up having to describe it to her. So what will I say? How about this: Windows PowerShell is a tool that lets you make several different kinds of changes on your computer from a single location. Instead of having to search around and learn how to use a dozen different tools to make as many different changes, you can use one tool to do it all. Windows PowerShell is designed to be used by IT Pros and Windows power users, although regular users can also use it.

CL, I hope this helps you to understand what Windows PowerShell really is. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Top Five PowerShell Tasks a User Might Need to Accomplish


Summary: Microsoft Scripting Guy, Ed Wilson, discusses the top five things a normal user might need to use Windows PowerShell to do.

Hey, Scripting Guy! Question Hey, Scripting Guy! What good is Windows PowerShell to a normal Windows User?

—GK

Hey, Scripting Guy! Answer Hello GK,

Microsoft Scripting Guy, Ed Wilson, is here. This afternoon begins to stretch me into normalcy. I completed my TechEd 2012 Trip Report, attended a couple of meetings, and made a pot of Gun Powder Green Tea with a small amount of organic hibiscus flower in it. Along with a piece of 90 percent cacao, it was a perfect afternoon. GK, as I began to catch up on my scripter@microsoft.com email, I ran across your letter. It reminds me of something the Scripting Wife says to me when I am presenting at SQL Saturday, “Tell them what good it is.” So here are the five top tasks that I like Windows PowerShell to do for me…

1. Close a bunch of copies of a program at once. It never seems to fail: I click a program, wait for a few minutes, and nothing seems to happen. Therefore, I click the icon again…and again…and again. All of a sudden, I hear the fan on my laptop kick into overdrive, I see the hard disk drive activity light go crazy, and all at once, my display becomes covered with open windows from the wayward application.
If you are like me, you know how to launch Task Manager by selecting "Start Task Manager" from the taskbar shortcut menu or by pressing <Ctrl><Shift><Esc>. But the problem is that Task Manager does not support multiple selection; therefore, it requires multiple clicks and selections to end all copies of the wayward application. Task Manager is shown here.

Image of menu

The solution is to use the Get-Process and Stop-Process cmdlets. You might first want to identify how many copies of the process are running. To do this, use the Get-Process cmdlet as shown here.

Get-Process iexplore

Before attempting such a solution, it is a good idea to test the solution by using the –WhatIf switch as shown here.

Get-Process iexplore | Stop-Process –WhatIf

When you have found that it does what you want, use the Up arrow to recall the command and remove the –WhatIf. This is shown here.

Get-Process iexplore | Stop-Process

No output displays from the command. The previous commands and the output associated with them are shown in the image that follows.

Image of command output

2. Find the date a month from now, at some time in the past, or at any time in the future. Often I need to know what the date will be 30 or 60 days in the future. To do this, I use the Get-Date cmdlet. The trick is to use the AddDays method: I put parentheses around Get-Date, use a dot, and call AddDays. Luckily, AddDays accepts negative numbers, which is great for going back in time. This technique is shown here.

PS C:\> (Get-Date).adddays(30) 

Thursday, July 19, 2012 4:34:40 PM 

PS C:\> (Get-Date).adddays(-30)

Sunday, May 20, 2012 4:34:46 PM 

PS C:\>

3. Find the most recent entry in the Application log. One of the cool things about Windows 7 is the new event logs. One of the bad things about Windows 7 is all of the new logs. It takes a bit of time for Event Viewer to open and allow you to select a specific log to examine. By using Windows PowerShell, the task is a breeze. Want the most recent entry? No problem. It is a single command, as shown here.

Get-EventLog application -Newest 1

Or how about finding all of the entries from the system log that occurred after midnight on June 19, 2012? This, too, is an easy task. The command is shown here. 

Get-EventLog system  -After 6/19/2012 

4. Find a list of all the services on your computer that are running or stopped. By using Windows PowerShell, this is a piece of cake.

Get-Service | where {$_.status -eq 'running'}

Get-Service | where {$_.status -eq 'stopped'}

In Windows PowerShell 3.0, the command is easier to read because of the new simplified syntax. The revised command is shown here.

Get-Service | where status -eq 'stopped'

Get-Service | where status -eq 'running'

5. Shut down my computer. When I am finished for the day, I use Windows PowerShell to shut down my computer. The Stop-Computer command is easy to use, as shown here.

Stop-Computer

I can also use this command to shut down multiple computers across the network because the –ComputerName parameter accepts an array of computer names. The following command illustrates this technique. This is also a good command to use with the –WhatIf parameter.

PS C:\> Stop-Computer -ComputerName dc1,dc3 -WhatIf

What if: Performing operation "Stop-Computer" on Target " (dc1)".

What if: Performing operation "Stop-Computer" on Target " (dc3)".

PS C:\>

When you are certain you want to perform the action, use the Up arrow and remove the –WhatIf parameter. This command is shown here.

Stop-Computer -ComputerName dc1,dc3

GK, those are my five favorite tasks to accomplish with Windows PowerShell. No doubt others will have other suggestions and post them as comments to the posting. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: Scripting Wife Discusses Setting Up a PowerShell User Group


Summary: Guest blogger, the Scripting Wife, discusses how to set up a Windows PowerShell User Group in your area.

Microsoft Scripting Guy, Ed Wilson, is here. Today I am happy to say that there is a very special guest blogger. That is right. The Scripting Wife has sent in a blog post to share. This blog grew out of a common question that was asked at the Scripting Guys booth at Microsoft TechEd 2012 in Orlando, Florida, “How can I set up a Windows PowerShell user group?” Teresa became so adept at answering that question that she decided to write a guest blog to share her insights.

Teresa Wilson, aka The Scripting Wife, is well known in the Windows PowerShell community for her encouragement of new scripters and her enthusiasm for helping in any way she can. She has been a student of Windows PowerShell, and she served as an example in the last two Scripting Games, purely to help beginners and encourage others to participate. She also helps with the administrative work in the Charlotte PowerShell User Group, with Windows PowerShell MVP, Jim Christopher as the leader. One other contribution she makes is as the booking agent for the guests on the PowerScripting Podcast with Windows PowerShell MVPs, Hal Rottenberg and Jonathan Walz. Of course, we all know her main role is keeping me organized.

Teresa helped at the Scripting Guys booth at TechEd 2012 last week, and she has a new best friend from the closing party at Universal Studios. Here is a picture to show you her friend. (Thanks to Jason Hofferle for taking the picture.)

Photo of Scripting Wife

Note   There are two Hey, Scripting Guy! blogs that also provide relevant and useful information about setting up a Windows PowerShell User Group. The first one is, Practical Tips for Starting a PowerShell User Group. The second is Mark Schill Discusses PowerShell User Groups.

Take it away, Scripting Wife…

Hello everyone. As Ed mentioned, I am overflowing with information and with new friends who I met last week at TechEd. One of the questions I heard several times was a request for an easy-to-follow list of items to accomplish for starting a user group. Ed has published a couple of blogs in the past, but none of them are really a list that you can check off. I spent most of my working years in accounting roles, so a list makes perfect sense to me. Today I will give you the list as bulleted items, then I will provide some explanation.

Registering your group at PowerShell Community Groups will start the ball rolling in several ways. First, if you email me at scriptingwife@hotmail.com, I will make sure to let Hal and Jon know about your new group, and they will promote it on the PowerScripting Podcast. Second, when your information is on the PowerShell Community Groups site, people in your area will see that a user group is forming in their location, and they can start to sign up as members. This will provide you with resources that you may not have known about. For example, maybe someone works at a company that has a meeting room you can use—that solves your location item right off the bat.

Sponsors are not my strong point, but I do have some ideas. Typically, local recruiters and training companies make good sponsors, as do software companies whose products are related to Windows PowerShell. Do not forget publishing companies other than O’Reilly, which was mentioned in my list.

Spread the word. You know your hometowns better than I do, so go out and contact the IT departments at the businesses in your area. Do not forget schools and universities. Not only will the schools have an IT department; they will also have students who may want to join to learn more.

When you have some contacts, you will want to start thinking about ideas for how to structure your group. For example, Jim Christopher is the leader of our group, but his philosophy is that it is not his user group—it is the members’ user group, and his role is the facilitator.

Our group started with the idea that we would have a speaker one month and then conduct a script club the next month, and continue alternating each month. When the Scripting Games happened in May, we had several members who participated in the games and really learned a lot. The group asked about having a mini Scripting Games at our next meeting, and that is what we did. They loved it so much that we are doing a mini games again in July. After that, we will see if that is how we continue. It is all about learning and sharing knowledge in whatever fashion your group wants to use.

I hope I have provided some useful information here in a concise manner. Please be sure to drop me a line if I can be of any further assistance. I am @scriptingwife on Twitter, and my email address is scriptingwife@hotmail.com.

Happy scripting.

~Teresa

Thank you, Scripting Wife, for your useful and informative blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Weekend Scripter: An Insider’s Guide to PowerShell Arrays


Summary: Microsoft Scripting Guy, Ed Wilson, provides an insider’s guide to Windows PowerShell arrays.

Microsoft Scripting Guy, Ed Wilson, is here. It is a quiet weekend in Charlotte, North Carolina in the United States. I am about to get back into the swing of things following nearly two weeks on the road for our Florida Windows PowerShell Road Show. The Scripting Wife has made herself scarce the past couple of days, and I am certain she is also trying to get back into a routine. I do not have any more public appearances, other than the Charlotte Windows PowerShell User Group meeting on July 5, 2012. We will be doing another “mini Scripting Games” type of meeting. The last one was WAY COOL! If you are in the area, you should definitely check it out. Even if you have to drive a bit, it will be worth the trip.

Today, I thought I would go through the Hey, Scripting Guy! Blog archive, and list and review Hey, Scripting Guy! blogs that are related to arrays.

Windows PowerShell makes working with arrays much easier than the methodology used in other languages. As a result, arrays often receive little attention. I tried to close this gap with a few blogs, and in this resource guide, I detail them.
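As a quick taste before the links, here is a minimal sketch of how little ceremony Windows PowerShell arrays require:

```powershell
# The comma operator and the range operator both build arrays
$a = 1, 2, 3
$b = 1..5

$a.Count        # number of elements
$a[0]           # first element
$a[-1]          # last element
$a + 10         # the + operator returns a new, longer array
```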

Background of arrays

Using PowerShell Get-Member to Explore the .NET Framework

OK. So this blog does not have the word array in the title. But it provides excellent background information about the System.Array .NET Framework class. Windows PowerShell arrays come from the System.Array .NET Framework class, and it is one reason I chose this class to illustrate working with Get-Member.

Learn Simple Ways to Handle Windows PowerShell Arrays

This blog begins with the question, “What is an array?” and then progresses to a discussion of elements, indexes, and values. It describes the concept of array boundaries, and it examines the Count and Length properties. The blog also discusses using the for statement and the ForEach-Object cmdlet to access the members of an array. This is the first in a series of foundational blogs about arrays.

Add, Modify, Verify, and Sort your PowerShell Array

This continues the previous blog and dives into working with array elements, changing the values of elements, and adding to a previously existing array. I talk about searching an array for specific values and about two ways of sorting an array. This is a great blog, and is the second foundational post about arrays.

Find the Index Number of a Value in a PowerShell Array

In part three of the foundational posts about arrays, I talk about using the for statement to find the index number of a value in an array. Following that discussion, I talk about using the IndexOf static method from the System.Array .NET Framework class. This class was discussed in Using PowerShell Get-Member to Explore the .NET Framework, which is mentioned at the top of this section. Finally, I discuss working with only one-half of the array. This is a great blog, and you should definitely spend time mastering the techniques mentioned here.

Easily Create and Manipulate an Array of Arrays in PowerShell

This is part five in the multipart array foundations series. In this blog, I first discuss creating an array of arrays. Next, I talk about how to access specific elements in an array of arrays. Using an array of arrays is a powerful technique—a bit advanced, but powerful nonetheless. This is a great introduction to the topic.

 

Array techniques

Speed Up Array Comparisons in PowerShell with a Runtime RegEx

An excellent blog written by Scripting Guys Forum moderator and guest blogger, Rob Campbell. This blog compares using the Contains and NotContains operators with the Match operator and a regular expression pattern. He uses the Measure-Command cmdlet to compare how long the two operations take. This is a special application, but it is a great trick to have in your tool pouch.

Format Multilevel Arrays in PowerShell

In this blog, I discuss using Windows PowerShell to work with multilevel arrays. I talk about creating arrays, adding additional elements to arrays, and creating arrays of other arrays. This is a really cool and foundational blog.

Read a CSV File and Build Distinguished Names on the Fly by Using PowerShell

The fourth blog in the Windows PowerShell arrays foundations series begins by reading a CSV file by using the Import-CSV cmdlet. Next I pipe the CSV contents to the Foreach-Object cmdlet and build a distinguished name. It is a cool technique.

Combine Arrays and Hash Tables in PowerShell for Fun and Profit

I discuss creating an array that contains hash tables in the various elements. This is a great technique, and it is an interesting blog.

Well, that is about it for Hey, Scripting Guy! blogs about Windows PowerShell arrays. Hopefully the information will prime you with ideas and help you utilize these powerful techniques in your Windows PowerShell scripts.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Build a Query to Search the Windows Index from PowerShell


Summary: Guest blogger, James O'Neill, discusses using Windows PowerShell to build a query to search the Windows Index.

Microsoft Scripting Guy, Ed Wilson, is here. Today is Part One of three blogs written by guest blogger, James O’Neill.

James O'Neill was born in the 1960s and used his first Microsoft product in the 1970s (and has been trying to stop ever since.) He obtained a degree in Computer
Science in the 1980s and spent most of the 1990s running an IT training company. From 2000 to 2010 he worked for Microsoft in his native England,
finishing as the evangelist for Windows platform, where he discovered PowerShell. He’s probably best known in the PowerShell community for his
library to manage Hyper-V on Windows Server 2008/2008-R2.


Note   I have two Weekend Scripter blogs where I discuss querying the Windows Search Index. The first is Querying the Windows Search Index, and the second is Using the Windows Search Index to Find Specific Files. Both of these blogs use COM-based ADO to do the query instead of using the .NET Framework classes that are used by James. The blogs provide a good background for James’ series.

Take it away, James…

I have spent some time developing and honing a Windows PowerShell function that gets information from the Windows Index, which is the technology behind the search that is integrated into Windows Explorer in Windows 7 and Windows Vista. The Windows Index can be queried by using SQL, and my function builds the SQL query from user input, executes it, and receives rows of data for all the matching items.

Today, I'm going to explore the query process. Part Two will look at making user input easier (I don't want to make understanding SQL a prerequisite for using the function). In Part Three, I will look at why rows of data are not the best thing for the function to return and what the alternatives might be.

We will look at how the query is built in a moment. For now, please accept a ready-to-run query that is stored in the variable $SQL. Then it only takes a few lines of Windows PowerShell to prepare and run the query as shown here.

$Provider="Provider=Search.CollatorDSO;Extended Properties='Application=Windows';"

$adapter = new-object system.data.oledb.oleDBDataadapter -argument $sql, $Provider

$ds      = new-object system.data.dataset

if ($adapter.Fill($ds)) { $ds.Tables[0] }

The data is fetched by using the oleDBDataAdapter and DataSet objects. The adapter is created by specifying a "provider" (which says where the data will come from) and a SQL statement (which says what is being requested). The query is run when the adapter is told to fill the dataset. The .fill() method returns a number that indicates how many data rows were returned by the query. If this is non-zero, my function returns the first table in the dataset. Windows PowerShell sees each data row in the table as a separate object, and these objects have a property for each of the table's columns. So a search might return something like this:

SYSTEM.ITEMNAME                : DIVE_1771+.JPG

SYSTEM.ITEMURL                 : file:C:/Users/James/pictures/DIVE_1771+.JPG

SYSTEM.FILEEXTENSION           : .JPG

SYSTEM.FILENAME                : DIVE_1771+.JPG

SYSTEM.FILEATTRIBUTES          : 32

SYSTEM.FILEOWNER               : Inspiron\James

SYSTEM.ITEMTYPE                : .JPG

SYSTEM.ITEMTYPETEXT            : JPEG Image

SYSTEM.KINDTEXT                : Picture

SYSTEM.KIND                    : {picture}

SYSTEM.MIMETYPE                : image/jpeg

SYSTEM.SIZE                    : 971413

There are lots of fields to choose from, so the list might be longer. The SQL query to produce it looks something like this:

SELECT System.ItemName, System.ItemUrl, System.FileExtension, System.FileName, System.FileAttributes, System.FileOwner, System.ItemType, System.ItemTypeText , System.KindText, System.Kind, System.MIMEType, System.Size 

FROM SYSTEMINDEX

WHERE System.Keywords = 'portfolio' AND Contains(*,'stingray')

In the finished version of the function, the SELECT clause has 60 or so fields. The FROM and WHERE clauses might be more complicated than in the example, and an ORDER BY clause might be used to sort the data. The clauses are built by using parameters that are declared in my function like this:

Param ( [Alias("Where","Include")][String[]]$Filter ,

        [Alias("Sort")][String[]]$orderby,

        [Alias("Top")][String[]]$First,

        [String]$Path,

        [Switch]$Recurse

)

In my functions, I try to use names that are already used in Windows PowerShell. So here I use -Filter and -First, but I also define aliases for SQL terms like WHERE and TOP. These parameters build into the complete SQL statement, starting with the SELECT clause which uses –First.

if ($First)  {$SQL = "SELECT TOP $First "}

else         {$SQL = "SELECT "}

$SQL += " System.ItemName, System.ItemUrl " # and the other 58 fields

If the user specifies –First 1, $SQL will be "SELECT TOP 1 fields"; otherwise, it's just "SELECT fields." After the fields are added to $SQL, the function adds a FROM clause. Windows Search can interrogate remote computers, so if the -Path parameter is a UNC name in the form \\computerName\shareName, the SQL FROM clause becomes FROM computerName.SYSTEMINDEX; otherwise, it is FROM SYSTEMINDEX to search the local computer.
A regular expression can recognize a UNC name and pick out the computer name, like this:

if ($Path -match "\\\\([^\\]+)\\.") {

      $sql += "FROM $($matches[1]).SYSTEMINDEX WHERE "

}

else {$sql += " FROM SYSTEMINDEX WHERE "}

The regular expression in the first line of the example breaks down as follows:

  • \\\\ : two \ characters. "\" is the regular expression escape character, so each literal backslash must be written as \\. (Matches the leading \\ of \\computerName\shareName.)
  • [^\\]+ : any non-\ character, repeated at least once. (Matches computerName.)
  • \\. : a \ followed by any character. (Matches the \ after the computer name and the first character of the share name.)
  • ( ) : the brackets capture the enclosed section as a submatch, so $matches[0] = \\computerName\s and $matches[1] = computerName.
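As a quick sketch of that expression at work (the server and share names here are invented for illustration), you can test it directly in a console:

```powershell
# Hypothetical UNC path; the pattern is the one the function uses.
$Path = '\\FileServer01\Public\Reports'

if ($Path -match '\\\\([^\\]+)\\.') {
    # $matches[1] holds just the captured computer name.
    "FROM $($matches[1]).SYSTEMINDEX"   # outputs: FROM FileServer01.SYSTEMINDEX
}
```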

I allow the function to take different parts of the WHERE clause as a comma-separated list, so that

-filter "System.Keywords = 'portfolio'","Contains(*,'stingray')"

is equivalent to 

-filter "System.Keywords = 'portfolio' AND Contains(*,'stingray')"

To add the filter, we simply need this:

if ($Filter) { $SQL += $Filter -join " AND "}
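A minimal sketch of the -join behaviour, using the two sample conditions from above:

```powershell
$Filter = "System.Keywords = 'portfolio'", "Contains(*,'stingray')"

# -join places ' AND ' between the array elements, producing one WHERE body.
$Filter -join ' AND '
# outputs: System.Keywords = 'portfolio' AND Contains(*,'stingray')
```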

The folders searched can be restricted. A "SCOPE" term limits the query to a folder and all of its subfolders, and a "DIRECTORY" term limits it to a folder without subfolders. If the request is going to a remote server, the index is smart enough to recognize a UNC path and return only the files that are accessible via that path. If a -Path parameter is specified, the function extends the WHERE clause, and the –Recurse switch determines whether to use SCOPE or DIRECTORY, like this:

if ($Path){

   if ($Path -notmatch "\w{4}:") {

       $Path = "file:" + (resolve-path -path $Path).providerPath

   }

   if ($sql -notmatch "WHERE\s*$") {$sql += " AND " }

   if ($Recurse)                   {$sql += " SCOPE = '$Path' "    }

   else                            {$sql += " DIRECTORY = '$Path' "}

}

In these SQL statements, paths are specified in the form file:c:/users/james, which isn't how we normally write them (and the way I recognize UNC names won't work if they are written as file://ComputerName/shareName). This is rectified by the first line inside the If ($Path) {} block, which checks for 4 "word" characters, followed by a colon.

Doing this prevents 'File:' being inserted if any protocol has been specified. The same search syntax works against HTTP:// (although, not usually when searching on your workstation), MAPI:// (for Outlook items), and OneIndex14:// (for OneNote items). If a file path has been given, I ensure it is an absolute one. The need to support UNC paths forces the use of .ProviderPath here. It turns out that there is no need to convert \ characters in the path to /, provided file: is included.
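Sketched on a hypothetical local path, those two checks and the slash normalization (which the finished listing still performs) look like this:

```powershell
# Hypothetical path; 'file:' is only prefixed when no protocol is present yet.
$Path = 'C:\Users\James\Pictures'

if ($Path -notmatch '\w{4}:') {   # four word characters and a colon = a protocol prefix
    $Path = 'file:' + $Path
}
$Path -replace '\\','/'           # outputs: file:C:/Users/James/Pictures
```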

After taking care of that, the operation -notmatch "WHERE\s*$" sees to it that an "AND" is added if there is anything other than spaces between WHERE and the end of the line (that is, if any conditions specified by –Filter have been inserted).

If neither -Path nor -Filter was specified, there will be a dangling WHERE at the end of the SQL statement. Initially I removed this with –Replace. Then I decided that I didn't want the function to respond to a lack of input by returning the whole index, so I changed it to write a warning and exit.

With the WHERE clause completed, the final clause in the SQL statement is ORDER BY, which, like WHERE, joins a multipart condition.

if ($sql -match "WHERE\s*$")  {

   Write-warning "You need to specify either a path, or a filter."

   Return

}

if ($orderby) { $sql += " ORDER BY " + ($OrderBy -join " , ") }

When the whole function is put together, it takes three dozen lines of Windows PowerShell to handle the parameters, build and run the query, and return the result. Put together, it looks like this:

Function Get-IndexedItem{

Param ( [Alias("Where","Include")][String[]]$Filter ,

        [Alias("Sort")][String[]]$OrderBy,

        [Alias("Top")][String[]]$First,

        [String]$Path,

        [Switch]$Recurse )

 

if ($First)  {$SQL = "SELECT TOP $First "}

else         {$SQL = "SELECT "}

$SQL += " System.ItemName, System.ItemUrl " # and the other 58 fields

 

if ($Path -match "\\\\([^\\]+)\\.") {

      $SQL += "FROM $($matches[1]).SYSTEMINDEX WHERE "

}

else {$SQL += " FROM SYSTEMINDEX WHERE "}

 

if ($Filter) { $SQL += $Filter -join " AND "}

 

if ($Path)   {

    if ($Path -notmatch "\w{4}:")  {$Path = "file:" + $Path}

    $Path = $Path -replace "\\","/"

    if ($SQL -notmatch "WHERE\s*$") {$SQL += " AND " }

    if ($Recurse)                   {$SQL += " SCOPE = '$Path' "    }

    else                            {$SQL += " DIRECTORY = '$Path' "}

}

 

if ($SQL -match "WHERE\s*$")  {

   Write-Warning "You need to specify either a path or a filter."

   Return

}

if ($OrderBy) { $SQL += " ORDER BY " + ($OrderBy   -join " , " ) }

 

$Provider="Provider=Search.CollatorDSO;Extended Properties='Application=Windows';"

$Adapter = New-Object system.data.oledb.oleDBDataadapter -argument $SQL, $Provider

$DS      = New-Object system.data.dataset

if ($Adapter.Fill($DS)) { $DS.Tables[0] }

}

The -Path parameter is more user-friendly as a result of the way I handle it. But I've made it a general rule that you shouldn't expect the user to know too much about the underlying syntax; and at the moment, the function requires too much knowledge of SQL. I don't want to type this:

Get-IndexedItem –Filter "Contains(*,'Stingray')", "System.Photo.CameraManufacturer Like 'Can%'"

And it seems unreasonable to expect anyone else to do so. I came up with this list that I want the function to do for me:

  • Don't require the user to know whether a search term is prefixed with SYSTEM (SYSTEM.DOCUMENT, SYSTEM.IMAGE or SYSTEM.PHOTO). If the prefix is omitted, add the correct one.
  • Even without the prefixes, some field names are awkward; for example, "HorizontalSize" and "VerticalSize" instead of width and height. Provide aliases.
  • Literal text in searches needs to be enclosed in single quotation marks. Insert quotation marks if the user omits them.
  • A free text search over all fields is written as Contains(*,'searchTerm'). Convert "orphan" search terms into Contains conditions.
  • SQL uses % (not *) for a wild card. Replace * with % in filters to cope with users adding the familiar *.
  • SQL requires the like predicate (not =) for wildcards. Replace = with like for wildcards.

In Part Two, I'll look at how I accomplish these things.

~James

Thank you, James, for a great blog.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Search Windows Index with PowerShell: Helping with Input


Summary: Guest blogger, James O'Neill, uses Windows PowerShell to help users with input for searching the Windows Index.

Microsoft Scripting Guy, Ed Wilson, is here. Today James O’Neill is back with Part Two.

Note: This is Part Two of a three part series about using Windows PowerShell to search the Windows Index. Yesterday, James talked about building a query string to search the Windows Index.

Take it away, James…

In Part One, I developed a working Windows PowerShell function to query the Windows Index. It outputs data rows, which isn't the ideal behaviour, and I'll address that in Part Three. Today, I'll address another drawback: search terms passed as parameters to the function must be "SQL-Ready." I think that makes for a bad user experience, so I am going to look at the half-dozen bits of logic that I added to allow my function to process input that is a little more human. Regular expressions are the way to recognize text that must be changed, and I'll pay particular attention to those because I know a lot of people find them daunting. Let’s address the items from yesterday’s list that I want the function to do for me…

Replace * with %

SQL statements use % for a wildcard, but selecting files at the command prompt traditionally uses *, so * needs to be replaced with %. Were it not for the need to "escape" the * character in the regular expression, this would be as simple as a –Replace statement gets. This command is shown here.

$Filter = $Filter -replace "\*","%"

For some reason, I am never sure if the camera maker is Canon or Cannon, so I would rather search for Can*…or rather Can%, and that replace operation will turn "CameraManufacturer=Can*" into "CameraManufacturer=Can%". It is worth noting that –Replace is just as happy to process an array of strings in $filter as it is to process one.

Searching for a term across all fields uses "CONTAINS (*,'Stingray')", and if the –Replace operation changes * to % inside CONTAINS(), the result is no longer a valid SQL statement. So the regular expression needs to be a little more sophisticated, using a "negative look behind."

$Filter = $Filter -replace "(?<!\(\s*)\*","%"

To filter out cases like CONTAINS(*… , the new regular expression qualifies "Match on *", with a look behind "(?<!\(\s*)", which says, "If it isn’t immediately preceded by an opening bracket and any spaces." In regular expression syntax:

  • (?=x) says, "Look ahead for x"
  • (?<=x) says, "Look behind for x"
  • (?!x) says, "Look ahead for anything EXCEPT x"
  • (?<!x) says, "Look behind for anything EXCEPT x"

These will see a lot of use in this function. Here, the negative look-behind (?<!\(\s*) is being used. The open bracket needs to be escaped, so it is written as \( , and \s* means zero or more spaces.
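A short demonstration of the look-behind doing its job (sample terms only):

```powershell
$Filter = 'CameraManufacturer=Can*', "Contains(*,'Stingray')"

# * becomes % except where it immediately follows an opening bracket.
$Filter -replace '(?<!\(\s*)\*', '%'
# outputs: CameraManufacturer=Can%  and  Contains(*,'Stingray')  (unchanged)
```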

Convert orphan search terms into Contains conditions

A term that needs to be wrapped as a "CONTAINS" search can be identified by the absence of quotation marks, = , < , or > signs, or the LIKE, CONTAINS, or FREETEXT search predicates. When these are present, the search term is left alone; otherwise, it goes to CONTAINS like this.

$filter = ($filter | ForEach-Object {

                 if ($_ -match "'|=|<|>|like|contains|freetext") {$_}

                 else {"Contains(*,'$_')"}

                })
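Running a sample pair of terms through that block shows the effect:

```powershell
$filter = 'stingray', "System.Keywords = 'portfolio'"

$filter | ForEach-Object {
    if ($_ -match "'|=|<|>|like|contains|freetext") { $_ }               # already SQL-ready
    else                                            { "Contains(*,'$_')" }  # orphan term
}
# outputs: Contains(*,'stingray')
#          System.Keywords = 'portfolio'
```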

Add quotation marks if the user omits them

The next thing I check for is omitted quotation marks. I said I wanted to be able to use Can*, and we’ve seen it changed to Can%, but the search term needs to be transformed into "CameraManufacturer='Can%' ". Here is a –Replace operation to do that:

$Filter = $Filter -replace "\s*(=|<|>|like)\s*([^'\d][^\s']*)$",' $1 ''$2'' '

This is a more complex regular expression which takes a few moments to understand.

  • \s* : any spaces (or none).
  • (=|<|>|like) : = or < or > or "like". (Matches the = in CameraManufacturer=Can%.)
  • \s* : any spaces (or none).
  • [^'\d] : any character that is NOT a ' character or a digit. (Matches the C in Can%.)
  • [^\s']* : any number of non-quotation-mark, non-space characters (or none). (Matches an% in Can%.)
  • $ : end of line.
  • The brackets capture the enclosed sections as submatches: $Matches[0] = "=Can%", $Matches[1] = "=", $Matches[2] = "Can%".
  • The replacement text ' $1 ''$2'' ' replaces $Matches[0] ("=Can%") with an expression built from the two submatches:  = 'Can%'

Note   The expression that is being inserted uses $1 and $2 to mean matches [1] and [2]. If this is wrapped in double quotation marks, Windows PowerShell will try to evaluate these terms before they get to the regex handler, so the replacement string must be wrapped in single quotation marks. But the desired replacement text contains single quotation marks, so they need to be doubled up.
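To see the capture-and-rebuild in one step (sample term only):

```powershell
# $1 is the operator, $2 the bare value; the doubled '' in the replacement
# string is how a literal quotation mark is written inside single quotes.
$out = 'CameraManufacturer=Can%' -replace "\s*(=|<|>|like)\s*([^'\d][^\s']*)$", ' $1 ''$2'' '
$out.Trim()   # outputs: CameraManufacturer = 'Can%'
```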

Replace '=' with 'like' for wildcards

So far, =Can* has become ='Can%', which is good, but SQL needs "LIKE" instead of "=" to evaluate a wildcard. So the next operation converts "CameraManufacturer = 'Can%' " into "CameraManufacturer LIKE 'Can%' ".

$Filter = $Filter -replace "\s*=\s*(?='.+%'\s*$)" ," LIKE "

  • \s*=\s* : an = sign surrounded by any spaces (or none). (Matches the = in CameraManufacturer = 'Can%'.)
  • ' : a quotation mark character.
  • .+ : any characters (at least one). (Matches Can.)
  • %' : a % character followed by a '.
  • \s*$ : any spaces (or none), followed by end of line.
  • (?= ) : look ahead for the enclosed expression, but don't include it in the match. So $Matches[0] = "=", but only if a quoted wildcard value like 'Can%' follows.
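Applied to the running example, the conversion looks like this:

```powershell
# The look-ahead only fires when a quoted value ending in % follows the = sign.
"CameraManufacturer = 'Can%'" -replace "\s*=\s*(?='.+%'\s*$)", ' LIKE '
# outputs: CameraManufacturer LIKE 'Can%'
```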

Provide aliases

The previous steps reconstruct "WHERE" terms to build syntactically correct SQL, but what if I get confused and enter CameraMaker instead of CameraManufacturer or Keyword instead of Keywords? I need Aliases, and they should work anywhere in the SQL statement—not just in the "WHERE" clause, but also in "ORDER BY".

I defined a hash table (aka a "dictionary" or an "associative array") near the top of the script to act as a single place to store the aliases with their associated full canonical names, like this:

$PropertyAliases = @{Width="System.Image.HorizontalSize";

                    Height="System.Image.VerticalSize";

                      Name="System.FileName";

                 Extension="System.FileExtension";

                   Keyword="System.Keywords";

               CameraMaker="System.Photo.CameraManufacturer"}

Later in the script, after the SQL statement is built, a loop runs through the aliases replacing each with its canonical name:

$PropertyAliases.Keys | ForEach-Object {
       $SQL= $SQL -replace "(?<=\s)$($_)(?=\s*(=|>|<|,|Like))",$PropertyAliases[$_]

}

A hash table has .Keys and .Values properties, which return what is on the left and right of the equals sign respectively. $hashTable.keyName or $hashtable[keyName] will return the value, so $_ will start by taking the value "width", and its replacement will be $PropertyAliases["width"], which is "System.Image.HorizontalSize". On the next pass through the loop, "height" is replaced, and so on. To ensure that it matches on a field name and not text being searched for, the regular expression stipulates that the name must be preceded by a space and followed by "=" or "Like", and so on.

  • (?<=\s) : look behind for a space, but don't include it in the match.
  • Width : the literal text "Width". (Matches Width in " Width > 1024".)
  • \s* : any spaces (or none).
  • (=|>|<|,|Like) : the literal text "Like", or any of the following characters: comma, equals, greater than, or less than. (Matches the > in " Width > 1024".)
  • (?= ) : look ahead for the enclosed expression, but don't include it in the match. The net result: $Matches[0] = "Width", but only when a space precedes it and an operator follows it.
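Here is the alias loop in miniature, with a one-entry table:

```powershell
$PropertyAliases = @{ Width = 'System.Image.HorizontalSize' }
$SQL = 'SELECT * FROM SYSTEMINDEX WHERE Width > 1024'

$PropertyAliases.Keys | ForEach-Object {
    # Replace the alias only when it sits between a space and an operator.
    $SQL = $SQL -replace "(?<=\s)$($_)(?=\s*(=|>|<|,|Like))", $PropertyAliases[$_]
}
$SQL   # outputs: SELECT * FROM SYSTEMINDEX WHERE System.Image.HorizontalSize > 1024
```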

Add the correct prefix if it is omitted

This builds on the ideas we've seen already. I want the list of fields and prefixes to be easy to maintain, so just after I define my aliases, I define a list of field types:

$FieldTypes = "System","Photo","Image","Music","Media","RecordedTv","Search"

For each type, I define two variables, a prefix and a fieldslist. The names must be FieldtypePREFIX and FieldTypeFIELDS. The reason for this will become clear shortly, but here is what they look like:

$SystemPrefix = "System."

$SystemFields = "ItemName|ItemUrl"

$PhotoPrefix  = "System.Photo."
$PhotoFields  = "cameramodel|cameramanufacturer|orientation"

In practice, the field lists are much longer. System contains 25 field names, not just the two shown here. The lists are written with "|" between the names so they become a regular expression meaning "ItemName or ItemUrl Or …". The following code runs after aliases have been processed:

foreach ($type in $FieldTypes) {

    $fields = (get-variable "$($type)Fields").value

    $prefix = (get-variable "$($type)Prefix").value

    $sql = $sql -replace "(?<=\s)(?=($Fields)\s*(=|>|<|,|Like))" , $Prefix

 }

I can save repeating code by using Get-Variable in a loop to get $systemFields, $photoFields, and so on. If I want to add one more field or a whole type, I only need to change the variable declarations at the start of the script. The regular expression in the -Replace works like this:

  • (?<=\s) : look behind for a space, but don't include it in the match.
  • (cameramanufacturer|orientation) : the literal text "cameramanufacturer" or "orientation". (Matches CameraManufacturer in "CameraManufacturer LIKE 'Can%'".)
  • \s* : any spaces (or none).
  • (=|>|<|,|Like) : the literal text "Like", or any of the following characters: comma, equals, greater than, or less than. (Matches LIKE.)
  • (?= ) : look ahead for the whole enclosed expression, but don't include it in the match. Because the look-behind and the look-ahead are both zero-width, $match[0] is the empty point between the leading space and "CameraManufacturer LIKE"; it includes neither, which is exactly where -Replace inserts the prefix.
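A cut-down run of the prefix loop, using just the Photo type from the declarations above:

```powershell
$FieldTypes  = 'Photo'
$PhotoPrefix = 'System.Photo.'
$PhotoFields = 'cameramodel|cameramanufacturer|orientation'

$SQL = "SELECT * FROM SYSTEMINDEX WHERE CameraManufacturer LIKE 'Can%'"

foreach ($type in $FieldTypes) {
    $fields = (Get-Variable "$($type)Fields").Value
    $prefix = (Get-Variable "$($type)Prefix").Value
    # The match is zero-width, so -replace inserts the prefix before the field name.
    $SQL = $SQL -replace "(?<=\s)(?=($fields)\s*(=|>|<|,|Like))", $prefix
}
$SQL   # outputs: SELECT * FROM SYSTEMINDEX WHERE System.Photo.CameraManufacturer LIKE 'Can%'
```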

 

Use ‑Replace with a regular expression

We get the effect of an "insert" operator by using ‑Replace with a regular expression that finds a place in the text, but doesn't select any of it.

This part of the function allows "CameraManufacturer LIKE 'Can%'" to become "System.Photo.CameraManufacturer LIKE 'Can%'" in a WHERE clause. I also wanted "CameraManufacturer" in an ORDER BY clause to become "System.Photo.CameraManufacturer".

Very sharp-eyed readers may have noticed that I look for a comma after the fieldname in addition to <, >, =, and LIKE. I modified the code that appeared in Part One so that when an ORDER BY clause is inserted, it is followed by a trailing comma like this:

if ($orderby) { $sql += " ORDER BY " + ($OrderBy   -join " , " ) + ","}

The new version will work with this regular expression, but the extra comma will cause a SQL error, so it must be removed later. When I introduced the SQL, I said the SELECT statement looks like this:

SELECT System.ItemName, System.ItemUrl, System.FileExtension, System.FileName, System.FileAttributes, System.FileOwner, System.ItemType, System.ItemTypeText , System.KindText, System.Kind, System.MIMEType, System.Size

Building this clause from the field lists simplifies code maintenance, and as a bonus, anything declared in the field lists will be retrieved by the query and accepted as input by its short name.  The SELECT clause is prepared like this:

 if ($First)  {$SQL = "SELECT TOP $First "}

 else         {$SQL = "SELECT "}

 foreach ($type in $FieldTypes) {

    $SQL += ((get-variable "$($type)Fields").value -replace "\|",", " ) + ", "

 }

This replaces the "|" with a comma and puts a comma after each set of fields. This means that there is a comma between the last field and FROM. This allows the regular expression to recognize field names, but it will break the SQL, so it is removed after the prefixes have been inserted (just like ORDER BY).
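The pipe-to-comma conversion is a one-liner; sketched with a short field list:

```powershell
$SystemFields = 'ItemName|ItemUrl|FileExtension'

# The same "|"-separated list that serves as a regex becomes the SELECT field list.
'SELECT ' + ($SystemFields -replace '\|', ', ') + ', '
# outputs: SELECT ItemName, ItemUrl, FileExtension,
```

The trailing comma this leaves in front of FROM is the one stripped out later, after the prefixes have been inserted.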

This might seem inefficient, but when I checked the time it took to run the function and get the results (but not output them), it was typically about 0.05 seconds (50 ms) on my laptop. It takes more time to output the results.

Combining all the bits in this part with the bits in Part One turns my 36-line function into about a 60-line one as follows:

Function Get-IndexedItem{

Param ( [Alias("Where","Include")][String[]]$Filter ,

        [Alias("Sort")][String[]]$OrderBy,

        [Alias("Top")][String[]]$First,

        [String]$Path,

        [Switch]$Recurse )

       

    $PropertyAliases = @{Width ="System.Image.HorizontalSize";

                          Height = "System.Image.VerticalSize"}

    $FieldTypes   = "System","Photo"

    $PhotoPrefix  = "System.Photo."

    $PhotoFields  = "cameramodel|cameramanufacturer|orientation"

    $SystemPrefix = "System."

    $SystemFields = "ItemName|ItemUrl|FileExtension|FileName"

 

    if ($First)  {$SQL = "SELECT TOP $First "}

    else         {$SQL = "SELECT "}

    foreach ($type in $FieldTypes) {

        $SQL += ((get-variable "$($type)Fields").value -replace "\|",", ")+", "

    }

 

    if ($Path -match "\\\\([^\\]+)\\.") {

           $SQL += " FROM $($matches[1]).SYSTEMINDEX WHERE " 

    }

    else {$SQL += " FROM SYSTEMINDEX WHERE "}

 

    if ($Filter) {

         $Filter = $Filter -replace "\*","%"

         $Filter = $Filter -replace "\s*(=|<|>|like)\s*([^'\d][^\s']*)$",

                                   ' $1 ''$2'' '

         $Filter = $Filter -replace "\s*=\s*(?='.+%'\s*$)" ," LIKE "

         $Filter = ($Filter | ForEach-Object {

             if ($_ -match "'|=|<|>|like|contains|freetext") {$_}

                                    else {"Contains(*,'$_')"}

         })

         $SQL += $Filter -join " AND "

    }

 

    if ($Path) {

         if ($Path -notmatch "\w{4}:")  {$Path = "file:" + $Path}

                $Path  = $Path -replace "\\","/"

                if ($SQL -notmatch "WHERE\s*$") {$SQL += " AND " }

                if ($Recurse)                  {$SQL += " SCOPE = '$Path' "}

                else                           {$SQL += " DIRECTORY = '$Path' "}

    }

    if ($SQL -match "WHERE\s*$") {

         Write-Warning "You need to specify either a path or a filter." ; return

    }

 

    if ($OrderBy) { $SQL += " ORDER BY " + ($OrderBy   -join " , " ) + ","}

 

    $PropertyAliases.Keys | ForEach-Object {

         $SQL= $SQL -replace "(?<=\s)$($_)(?=\s*(=|>|<|,|Like))",
                              $PropertyAliases.$_

    }

    foreach ($type in $FieldTypes) {

        $fields = (get-variable "$($type)Fields").value

        $prefix = (get-variable "$($type)Prefix").value

        $SQL = $SQL -replace "(?<=\s)(?=($Fields)\s*(=|>|<|,|Like))" , $Prefix

     }

 

    $SQL = $SQL -replace "\s*,\s*FROM\s+" , " FROM "

    $SQL = $SQL -replace "\s*,\s*$"       , ""

 

    $Provider="Provider=Search.CollatorDSO;"+
              "Extended Properties='Application=Windows';"

    $Adapter = new-object system.data.oledb.oleDBDataadapter -argument $SQL,
               $Provider

    $DS      = new-object system.data.dataset

    if ($Adapter.Fill($DS)) { $DS.Tables[0] }

}

~James

Awesome job, James! I want to thank you for taking the time to share with us today. Guest Blogger Week will continue tomorrow when James returns with Part Three.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


Customizing PowerShell Output from Windows Search


Summary: Guest blogger, James O'Neill, discusses customizing Windows PowerShell output from his function to search Windows Index.

Microsoft Scripting Guy, Ed Wilson, is here. Today James O’Neill provides his conclusion to this three-part series about searching the Windows Index.

Note   This is Part Three of a three-part series. In Part One, James talked about building a query string to search the Windows Index. In Part Two, he talked about modifying the user input to coerce it into the form required by the Windows Index.

Here’s James…

In Part One, I introduced a function that queries the Windows Index by using filter parameters like these:

  • "Contains(*,'Stingray')"
  • "System.Keywords = 'Portfolio' "
  • "System.Photo.CameraManufacturer LIKE 'CAN%' "
  • "System.image.horizontalSize > 1024"

In Part Two, I showed how these parameters could be simplified to do the following:

  • Stingray:
    A word on its own becomes a Contains term
  • Keyword=Portfolio:
    Keyword, without the s, is an alias for System.Keywords, and quotation marks will be added automatically
  • CameraManufacturer=CAN*:
    * will become %, and = will become LIKE, quotation marks will be added, and CameraManufacturer will be prefixed with System.Photo
  • Width > 1024:
    Width is an alias for System.Image.HorizontalSize, and quotation marks are not added around numbers.

There is one remaining issue. Windows PowerShell is designed so that one command's output becomes another's input. This function is not going to do much with piped input. I cannot see another command spitting out search terms for this one, nor can I see multiple paths being piped in. But the majority of items found by a search will be files. So it should be possible to treat them like files, piping them into Copy-Item or whatever.

The following was my first attempt at transforming the data rows into something more helpful:

$Provider="Provider=Search.CollatorDSO; Extended Properties='Application=Windows';"

$adapter = new-object system.data.oledb.oleDBDataadapter -argument $SQL, $Provider

$ds      = new-object system.data.dataset

if ($adapter.Fill($ds)) { foreach ($row in $ds.Tables[0])  {

    if ($row."System.ItemUrl" -match "^file:")
      {

          $obj = New-Object psobject -Property @{
          Path = (($row."System.ItemUrl" -replace "^file:","") -replace "\/","\")}

      }

    Else {$obj = New-Object psobject -Property @{Path = $row."System.ItemUrl"}}

    Add-Member -force -Input $obj -Name "ToString" -MemberType "scriptmethod" `

           -Value {$this.path}

    foreach ($prop in (Get-Member -InputObject $row -MemberType property |

                       where-object {$row."$($_.name)" -isnot [system.dbnull] }))

      {

          Add-Member -ErrorAction "SilentlyContinue" -InputObject $obj `

             -MemberType NoteProperty  -Name (($prop.name -split "\." )[-1]) `

             -Value  $row."$($prop.name)"

      }

    foreach ($prop in ($PropertyAliases.Keys |

          Where-Object {$row."$($propertyAliases.$_)" -isnot [system.dbnull] }))

      {

          Add-Member -ErrorAction "SilentlyContinue" -InputObject $obj `

             -MemberType AliasProperty -Name $prop `

             -Value ($propertyAliases.$prop  -split "\." )[-1]

      }

    $obj

}}

This is where the function spends most of its time:

  • Looping through the data and creating a custom object for each row.
  • Giving non-file items a Path property that holds the System.ItemURL property.
  • Processing the ItemUrl for files into normal format (rather than the format file:c/users/james).

In many cases, the item can be piped to another command successfully if it has a Path property. Then, for each property (database column) in the row, a member is added to the custom object with a shortened version of the property name and the value (assuming the column isn't empty).
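The shortened name comes from splitting the column name on the dot and keeping the last element, which is what the (-split "\." )[-1] expression in the code does:

```powershell
("System.Photo.CameraManufacturer" -split "\.")[-1]   # CameraManufacturer
```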

Next, Alias properties are added by using the definitions in $PropertyAliases. Finally, some standard members get added. In this version I've pared it down to a single method, because several things expect to be able to get the path for a file by calling its tostring() method.

When I had all of this working, I tried to get clever. I added aliases for all the properties that normally appear on a System.IO.FileInfo object. I even tried fooling the formatting system in Windows PowerShell into treating my file items as a file object—something that only needs one extra line of code:

$Obj.psobject.typenames.insert(0, "SYSTEM.IO.FILEINFO")

Pretending that a custom object is actually another type seems dangerous; but everything I tried seemed happy, provided the right properties were present. The formatting worked except for the "Mode" column. I found the method that calculates Mode for FILEINFO objects, but it needs a real FILEINFO object. It was easy enough to get one: I had the path, and it only needs a call to Get‑Item.

But I realized that if I was getting a FILEINFO object anywhere in the process, it made more sense to add the extra properties to that object and dispense with the custom object. I added an extra -NoFiles switch to suppress this behavior. So the code then transformed into the following:

$Provider="Provider=Search.CollatorDSO; Extended Properties='Application=Windows';"

$adapter = new-object system.data.oledb.oleDBDataadapter -argument $SQL, $Provider

$ds      = new-object system.data.dataset

if ($adapter.Fill($ds)) { foreach ($row in $ds.Tables[0])  {

    if (($row."System.ItemUrl" -match "^file:") -and (-not $NoFiles))
      {

        $obj = Get-item -Path (($row."System.ItemUrl" -replace "^file:","") `
                                 -replace "\/","\")

      }

    Else {$obj = New-Object psobject -Property @{Path = $row."System.ItemUrl"}

          Add-Member -force -Input $obj -Name "ToString" `

                     -MemberType "scriptmethod" -Value {$this.path}

         }

   ForEach ...

The initial code was 36 lines. Making the user input more friendly took it to 60 lines. The output added about another 35 lines—bringing the total to 95 lines.

There were four other kinds of output that I wanted to produce:

  • Help. I added comment-based Help with plenty of examples. It runs 75 lines, making it the biggest constituent in the finished product. In addition, I have 50 lines that are comments or blank for readability as insurance against trying to understand what those regular expressions do after a few months' time. But there are only 100 lines of actual code.
  • A –list switch which lists the long and short names for the fields (including aliases).
  • Support for the –Debug switch. Because so many things might go wrong, I have Write‑Debug $SQL immediately before I carry out the query. And to enable that, I have [CmdletBinding()] before I declare the parameters.
  • A –Value switch which uses the GROUP ON… OVER…  search syntax so I can see what the possible values are in a column.
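Of these, the -Debug support is the simplest to wire up. A minimal sketch of the shape (parameter list abbreviated; only the function name and the -Filter/-Path parameters come from the post):

```powershell
function Get-IndexedItem {
    [CmdletBinding()]          # enables the common -Debug switch
    param ([String[]]$Filter, [String]$Path)
    # ... build $SQL from the parameters ...
    Write-Debug $SQL           # prints the generated SQL only when -Debug is given
    # ... run the query and emit the results ...
}
```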

GROUP ON queries are unusual because they fill the dataset with two tables. GROUP ON System.Kind OVER (SELECT statement) will produce something like this as the first table:

SYSTEM.KIND                     Chapter

-----------                     -------

communication                         0

document                              1

email                                 2

folder                                3

link                                  4

music                                 5

picture                               6

program                               7

recordedtv                            8

The second table is the normal data suitably sorted. In this case, it has all the requested fields grouped by kind plus one named Chapter, which ties into the first table. I'm not really interested in the second table, but the first table helps me know if I should enter "Kind=Image", "Kind=Photo", or "Kind=Picture".
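For reference, the text of such a grouping query looks roughly like this (a sketch; the inner SELECT depends on the fields and scope requested):

```sql
GROUP ON System.Kind
OVER ( SELECT System.ItemName, System.ItemUrl
       FROM   SystemIndex
       WHERE  SCOPE = 'file:C:/Users' )
```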

I have a Select-List function that I use in my configurator and Hyper-V library on CodePlex. With this, I can choose which recorded TV program to watch, first selecting by title, and then if there is more than one episode, by episode.

$t=(Get-IndexedItem -Value "title" -filter "kind=recordedtv" -recurse |
    Select-List -Property title).title

    start (Get-IndexedItem -filter "kind=recordedtv","title='$t'" -path |
    Select-List -Property ORIGINALBROADCASTDATE,PROGRAMDESCRIPTION)

The full script can be found in the Script Center Repository.

~James

Thank you, James. This is a great series of blogs. Thank you for your hard work on this project and for taking the time to share it with us.

Join me tomorrow for more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Manage an EqualLogic SAN


Summary: Guest blogger, Mike Robbins, talks about using Windows PowerShell to manage an EqualLogic SAN.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a returning guest blogger, Mike F. Robbins, who will talk about Managing an EqualLogic PS Series storage area network with PowerShell.

Now, here is Mike...

Photo of Mike Robbins

Mike F. Robbins is an MCITP, Windows PowerShell Enthusiast, IT Pro, senior systems engineer for Windows Server, Hyper-V, SQL Server, Exchange Server, SharePoint, Active Directory, and EqualLogic storage area networks. He has over eighteen years of professional experience providing enterprise computing solutions for educational, financial, health care, and manufacturing customers.

Blog: http://mikefrobbins.com

Twitter: @mikefrobbins

In my last Hey, Scripting Guy! blog post, Managing Symantec Backup Exec 2012 with PowerShell, I mentioned that Windows PowerShell can be used to manage everything in your datacenter, from the backup product to the storage area network. Today I get a chance to discuss the second half of that (the storage area network portion).

The first thing you need to manage your EqualLogic PS Series storage area network (SAN) with Windows PowerShell is the EqualLogic Host Integration Tools for Microsoft, also known as the HIT Kit for Microsoft, which can be downloaded from the EqualLogic support site. The most recent version of the HIT Kit is version 4.0.0, which was released in January 2012. It contains 67 Windows PowerShell cmdlets.

We will start by creating a new volume on our EqualLogic SAN with Windows PowerShell, and we will set up everything that you may forget about through the GUI, such as a snapshot schedule. There is nothing worse than needing something from a snapshot only to figure out, “Oh yeah, I forgot to set that up.” I am going to define all of the parameters as variables because I will be reusing them for each of the scripts that we will run against our SAN. It is also easier to simply modify the variables at the top of the script for each new volume that is provisioned instead of having to manually find the values that are buried in the script somewhere.

In my opinion, there are too many values to use a parameterized query and anyone who is allowed to provision storage on my SAN will be qualified to modify this script. I save the script for each of the volumes when they are created in case I ever need to know how they were originally provisioned or if they need to be re-created due to a disaster.

The following script creates a thin-provisioned 36 GB volume named "mikefrobbins" with a snapshot reserve of 100%. It sets the description, creates access control lists based on the iSCSI host IP addresses, and schedules a snapshot of the volume once a day at 1:00 AM. It will attempt to keep seven snapshots, provided that their combined size doesn't exceed the 100% snapshot reserve.

$GrpAddr = "10.100.100.100"

$VolName = "mikefrobbins"

$VolSize = "36864"

$SnapshotReserve = "100"

$Description = "C Drive for mikefrobbins WebServer"

$ThinProvision = "Yes"

$iSCSI1 = "10.0.0.1"

$iSCSI2 = "10.0.0.2"

$ACL = "volume_and_snapshot"

$SchName = "wwwDailySnapshot"

$SchType = "Daily"

$Start = "01:00AM"

$Repeat = "0"

$Count = "7"

Import-Module "c:\program files\EqualLogic\bin\EqlPSTools.dll"

Connect-EqlGroup -GroupAddress $GrpAddr -Credential (Get-Credential)

New-EqlVolume -VolumeName $VolName -VolumeSizeMB $VolSize -SnapshotReservePercent `

 $SnapshotReserve -VolumeDescription $Description -ThinProvision $ThinProvision

New-EqlVolumeACL -VolumeName $VolName -InitiatorIpAddress $iSCSI1 -ACLTargetType $ACL

New-EqlVolumeACL -VolumeName $VolName -InitiatorIpAddress $iSCSI2 -ACLTargetType $ACL

New-EqlSchedule -VolumeName $VolName -ScheduleName $SchName -ScheduleType $SchType `

 -StartTime $Start -TimeFrequency $Repeat -KeepCount $Count

Image of command output

When the previous script is run, you will be prompted for a username and password for the SAN as shown here:

Image of prompt

Grpadmin is the built-in "administrator" account for all EqualLogic PS Series SANs. I recommend creating a personalized admin account for each of your SAN administrators for auditing purposes, and using the grpadmin account only for password recovery so you do not have to call support, as referenced in my blog post on that subject, EqualLogic PS Series SAN Password Recovery.

Because I have not disconnected my session from the SAN, I can continue to manage it without having to reconnect. Here I will take a manual snapshot of the volume that was created in the previous step:

New-EqlSnapshot -VolumeName $VolName

Image of command output

The following code determines what snapshots exist for this volume:

Get-EqlSnapshot -GroupAddress $GrpAddr -VolumeName $VolName |

Sort-Object CreationTimeStamp -descending | Select-Object SnapshotName

Image of command output

Now, I’ll remove all snapshots for this volume except for the latest one:

$VolSnaps = Get-EqlSnapshot -GroupAddress $GrpAddr -VolumeName $VolName

$VolSnaps | Sort-Object CreationTimeStamp |

Select-Object -first (($VolSnaps).Count -1) SnapshotName |

Remove-EqlSnapshot -VolumeName $VolName

Image of command output

Now let us restore the volume to the latest snapshot. Any time a volume is restored (reverted) to a previous snapshot, a new snapshot is automatically created prior to performing the restore operation.

Get-EqlSnapshot -GroupAddress $GrpAddr -VolumeName $VolName |

Sort-Object CreationTimeStamp -descending | Select-Object SnapshotName -first 1 |

Restore-EqlSnapshot -GroupAddress $GrpAddr -VolumeName $VolName

Image of command output

It is maintenance time. I have a volume that is going to be moved from one host server to another, and I need to modify its ACLs accordingly. I'm going to:

  • Place the new server's iSCSI IP addresses into two variables.
  • Take a snapshot of the volume before making any changes to it.
  • Set the volume offline to make sure nothing is accessing it.
  • Write the current ACLs to a text file for documentation purposes.
  • Remove the current ACLs and add the new ACLs.
  • Write the new ACLs to the same text file without overwriting it.
  • Place the volume online so that it's ready to be connected to the new server.

$iSCSI1 = "10.0.0.11"

$iSCSI2 = "10.0.0.12"

New-EqlSnapshot -VolumeName $VolName

Set-EqlVolume -VolumeName $VolName -OnlineStatus offline

Get-EqlVolumeACL -VolumeName $VolName | Out-File "d:\tmp\$VolName`_ACLs.txt"

Remove-EqlVolumeACL -VolumeName $VolName

New-EqlVolumeACL -VolumeName $VolName -InitiatorIpAddress $iSCSI1 -ACLTargetType $ACL

New-EqlVolumeACL -VolumeName $VolName -InitiatorIpAddress $iSCSI2 -ACLTargetType $ACL

Get-EqlVolumeACL -VolumeName $VolName | Out-File -Append "d:\tmp\$VolName`_ACLs.txt"

Set-EqlVolume -VolumeName $VolName -OnlineStatus online

Image of command output

Now let’s remove the volume. The volume must first be taken offline to remove it.

Set-EqlVolume -VolumeName $VolName -OnlineStatus offline

Remove-EqlVolume -VolumeName $VolName –Force

Image of command output

Using the –Force parameter in the previous script prevents the following confirmation message from being displayed. Use extreme caution with this parameter; otherwise, it could create an RGE (a résumé-generating event).

Image of message

Check the firmware version on an EqualLogic PS Series SAN as follows:

Get-EqlMember –GroupAddress $GrpAddr |

Select-Object MemberName, FirmwareVersion | Format-Table -AutoSize

Image of command output

Last but not least, remember to disconnect from the SAN when you’re finished:

Disconnect-EqlGroup -GroupAddress $GrpAddr

Image of command output

The full script can be found in the Script Center Repository.

~Mike

Thank you, Mike, for another great blog showing us the various uses of Windows PowerShell. Join me tomorrow for a great guest blog about Windows PowerShell and SQL Server by guest blogger, Laerte Junior.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Troubleshoot SQL Server via the Error Log


Summary: Guest blogger, Laerte Junior, discusses using Windows PowerShell to troubleshoot SQL Server by parsing the SQL error log. 

Microsoft Scripting Guy, Ed Wilson, is here. Today we have another guest blogger, Laerte Junior. Here is a little bit about Laerte.

Photo of Laerte Junior

Laerte Junior is a SQL Server specialist and an active member of the worldwide SQL Server and Windows PowerShell communities. He also is a huge Star Wars fan (yes, he has Darth Vader's helmet with the voice changer). He has a passion for DC Comics and living the simple life. May the Force be with all of us. If you want to see more about what you can do with Windows PowerShell and SQL Server, don't miss his LiveMeeting on July 18. For more information, see LiveMeeting VC PowerShell PASS–Troubleshooting SQL Server With PowerShell–English.

Take it away, Laerte…

One of the most basic and efficient troubleshooting steps that a SQL Server DBA takes is to look for error messages in the SQL Server error log. It is a relatively simple task when it comes to working with a single server, or even a few servers running SQL Server. However, it starts to be a hard job in a corporate-model environment consisting of several hundred SQL Server instances. In today's blog, we will see how to do this troubleshooting, even when the SQL Server instance is offline. The technique scales to N servers and produces various types of output. I hope you enjoy the blog, because it has saved me a lot of time in my daily job.

If the SQL Server instance is online and you want to check the SQL error log, it is easy to use the SMO Server object. An example of doing this is presented in the excellent Hey, Scripting Guy! Blog post with information from Aaron Nelson, Use PowerShell to Get the SQL Server Error Log.

Use WMI to query an offline event log

OK, something happens on my server, the SQL Server instance is offline, and I need to check the SQL error log. How can I do this? Well, two ways of doing this are to use WMI or to use SQL Server Management Studio (SSMS). A feature to read the SQL Server error log offline was introduced in SQL Server 2012, which added two new WMI classes to the Management WMI Provider. The classes live in the ComputerManagement11 namespace, root\Microsoft\SqlServer\ComputerManagement11, and they are SqlErrorLogFile and SqlErrorLogEvent.

By using SSMS or another graphical SQL Server client tool, you can easily access the SQL error log, but it starts to be painful when you need to do it for more than one instance, or when you need to filter the messages to look for a specific error. Because of the difficulty of working with multiple instances and searching multiple logs, I want to use Windows PowerShell.

On a test SQL Server instance, stop the SQL Server service. First, let's take a look at the Windows Management Instrumentation classes that are contained in the ComputerManagement11 namespace. The following code accomplishes this task.

Get-WmiObject  -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" –List

The output from the previous command is shown here.

Image of command output

Yes, you can play around with ALL these classes. Is that cool or what?

The following code illustrates querying the local computer and the default SQL Server instance.

Get-WmiObject -Class SqlErrorLogEvent -computername MylocalComputer -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"

In this example, I illustrate querying the local computer and a named instance of SQL Server, inst1.

Get-WmiObject -Query "Select * from SqlErrorLogEvent where InstanceName = 'Inst1'" -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"

Notice that unlike WMI for server events, each SQL Server instance does not get its own path to the root. There is only one path to the WMI Management Provider, even with several SQL Server instances. To access the right instance, filter on the InstanceName property in your WQL.

How do I know which properties I can use? Simply pipe the Get-WmiObject to a Get-Member cmdlet. This technique is shown here. 

Get-WmiObject  -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"  -Class SqlErrorLogEvent | Get-Member 

In addition, you might want to use some of the tools that are mentioned in Hey, Scripting Guy! How Do I Find the Names of WMI Classes?

To work on a remote computer, use the ComputerName parameter as shown here.

Get-WmiObject -Class "SqlErrorLogEvent" `
    -ComputerName MyRemoteComputer `
    -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"

When working with multiple servers and no named SQL Server instances, you can do the following:

1. Create a .txt file with the name of the servers, such as:

Server1

Server2

 2. Use the following query:

Get-WmiObject -Class "SqlErrorLogEvent"  `

                                -ComputerName (Get-Content c:\Temp\Servers.txt) `

                                -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" |   

select @{Expression={($_.__Server) };Label = "Server"},`

                InstanceName,@{Expression={([Management.ManagementDateTimeConverter]::ToDateTime($_.logdate)) };Label= "Logdate"},Message,ProcessInfo 

The ComputerName parameter of Get-WmiObject is a string[], so it accepts an array of string objects, and you can feed it with Get-Content.

We can export the error log to a CSV file, but of course, you don’t want to export ALL of the messages. Let’s create a filter that looks for the words “Error” and “Fail” and excludes the phrase “Found 0 Errors.”

By using Where-Object with the Get-WmiObject cmdlet, I come up with the command that is shown here. 

Get-WmiObject -Class "SqlErrorLogEvent"  `

                                -ComputerName (Get-Content c:\Temp\Servers.txt) `

                                -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" |

 Where-Object { ($_.Message -like "*Error*" `

                                                -or $_.Message -like "*Fail*") `

                                                -and ($_.Message -notlike "*Found 0 Errors*")} |

Select-Object        @{Expression={($_.__Server) };Label = "Server"},`

                InstanceName,@{Expression={([Management.ManagementDateTimeConverter]::ToDateTime($_.logdate)) };Label= "Logdate"},Message,ProcessInfo

You can create the filter by using WMI Query Language (WQL). This results in the command that is shown here.

$Query = "Select * from SqlErrorLogEvent where (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

Get-WmiObject -Query $query `

                                -ComputerName (Get-Content c:\Temp\Servers.txt) `

                                -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" |

 Select-Object       @{Expression={($_.__Server) };Label = "Server"},`

                InstanceName,@{Expression={([Management.ManagementDateTimeConverter]::ToDateTime($_.logdate)) };Label= "Logdate"},Message,ProcessInfo                            

The question that you are probably asking is, "Which is faster?" Using WQL, of course. I performed a benchmark by using Measure-Command, and the result follows:

Image of command output

It is a matter of milliseconds, I know. I did this on three servers, but imagine it on several hundred servers. In that case, the change can make a BIG difference.
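If you want to reproduce the comparison, the shape of the benchmark is simple (a sketch; the server list, namespace, and filter are the same ones used in the examples above):

```powershell
$ns      = "Root\Microsoft\SqlServer\ComputerManagement11"
$servers = Get-Content c:\Temp\Servers.txt

# Filter on the client with Where-Object...
(Measure-Command {
    Get-WmiObject -Class SqlErrorLogEvent -ComputerName $servers -Namespace $ns |
        Where-Object { $_.Message -like "*Error*" }
}).TotalMilliseconds

# ...versus filtering on the provider with WQL.
(Measure-Command {
    Get-WmiObject -Query "Select * from SqlErrorLogEvent where Message like '%Error%'" `
        -ComputerName $servers -Namespace $ns
}).TotalMilliseconds
```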

Now we can export the results to a CSV file. The following code accomplishes this task.

$Query = "Select * from SqlErrorLogEvent where (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

Get-WmiObject -Query $query `

            -ComputerName (Get-Content c:\Temp\Servers.txt) `

            -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" `

| select    @{Expression={($_.__Server) };Label = "Server"},`

      InstanceName,@{Expression={([Management.ManagementDateTimeConverter]::ToDateTime($_.logdate)) };Label= "Logdate"},Message,ProcessInfo `

| Export-Csv "c:\temp\SQLErrorLog\SQLErrorLog.csv" `

                  -NoTypeInformation -Force   

Note that the search is performed in ALL of the SQL error logs. If you want to specify the last log or a specific log, just filter by using the FileName property. You can use the LogDate property to also filter by date and time. In the following technique, FileName is the physical SQL error log name, for example, ErrorLog, ErrorLog.0, ErrorLog.1… 

$Query = "Select * from SqlErrorLogEvent where FileName = 'Errorlog' and LogDate >= 'First Date' and LogDate <= 'Second Date'"

When you work with multiple servers (default and named SQL Server instances), we need to make a simple change in the code because as we saw previously, we need to filter for the specific instance name in the WQL statement. Now in my text file, I have the instance names as shown here: 

R2D2

R2D2\Inst1

Yoda

Obiwan

The code can be something like the code that follows: 

Get-Content c:\Temp\Servers.txt | % {

 

                #split the computer and SQL Server Instance Name

                $SplitNames = $_.split('\')

               

                #is Default instance ?

                if ($SplitNames[1] -ne $null -and  $SplitNames[1] -ne  'MSSQLSERVER') {

                                $Query = "Select * from SqlErrorLogEvent where (InstanceName = '$($SplitNames[1])') and  (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

                } else {

                                $Query = "Select * from SqlErrorLogEvent where  (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

                }              

 

                #Get the computerName

                $ComputerName = $SplitNames[0]

                Get-WmiObject   -query $query `

                                -ComputerName $ComputerName `

                                -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" `

                | select @{Expression={($_.__Server) };Label = "Server"},`

                InstanceName,@{Expression={([Management.ManagementDateTimeConverter]::ToDateTime($_.logdate)) };Label= "Logdate"},Message,ProcessInfo

}

Writing the data to a SQL database

Now suppose that I have several servers running SQL Server and several named instances. Does the query need to be serialized? No, it can be done in asynchronous mode by using background Windows PowerShell jobs and storing the results in a SQL Server table. First, you need to download Chad Miller's Out-DataTable and Write-DataTable functions, and put them in the functions module for your Windows PowerShell profile.

Next, we need to create a SQL Server table in the repository database. In this case, I use my R2D2 SQL Server instance, with the SQLServerRepository database and a table named tbl_SQLErrorLog. Here is the T-SQL command I use:

CREATE TABLE [dbo].[tbl_SQLErrorLog](

                [CurrentDate] [datetime] NULL,

                [ServerName] [varchar](20) NULL,

                [InstanceName] [varchar](20) NULL,

                [LogDate] [datetime] NULL,

                [Message] [varchar](max) NULL,

                [ProcessInfo] [varchar](50) NULL

)

Now let’s run the code without Windows PowerShell jobs.                                               

Get-Content c:\Temp\Servers.txt | Foreach-Object {

                #split the computer and Instance Name

                $SplitNames = $_.split('\')

               

                #is Default instance ?

                if ($SplitNames[1] -ne $null -and  $SplitNames[1] -ne  'MSSQLSERVER') {

                                $Query = "Select * from SqlErrorLogEvent where (InstanceName = '$($SplitNames[1])') and  (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

                } else {

                                $Query = "Select * from SqlErrorLogEvent where  (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

                }              

 

                #Get the computerName

                $ComputerName = $SplitNames[0]

                $Data = (Get-WmiObject   -query $query `

                                -ComputerName $ComputerName `

                                -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" |

Select-Object        @{Expression={(Get-Date) };Label = "CurrentDate"},`

                                                @{Expression={($_.__Server) };Label = "ServerName"},`

                InstanceName,@{Expression={([Management.ManagementDateTimeConverter]::ToDateTime($_.logdate)) };Label= "Logdate"},`

                                                                Message,

                                                                ProcessInfo )

                $DataTable = Out-DataTable -InputObject $Data

                Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName tbl_SQLErrorLog -Data $DataTable

}

Now place the code into asynchronous mode by using background Windows PowerShell jobs. This command is shown here: 

Get-Content c:\Temp\Servers.txt | % {

 

                #split the computer and Instance Name

                $SplitNames = $_.split('\')

               

                #is Default instance ?

                if ($SplitNames[1] -ne $null -and  $SplitNames[1] -ne  'MSSQLSERVER') {

                                $Query = "Select * from SqlErrorLogEvent where (InstanceName = '$($SplitNames[1])') and  (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

                } else {

                                $Query = "Select * from SqlErrorLogEvent where  (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%')"

                }              

 

                #Get the computerName

                $ComputerName = $SplitNames[0]

               

                Start-job -Name "$($ComputerName)$($SplitNames[1])" -InitializationScript  {Ipmo Functions -Force -DisableNameChecking} `

                -ScriptBlock { $Data = (Get-WmiObject            -query $args[0]  `

                                                                                -ComputerName $args[1] `

                                                                                -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" |

Select-Object        @{Expression={(Get-Date) };Label = "CurrentDate"},`

                                @{Expression={($_.__Server) };Label = "ServerName"},`

                InstanceName,@{Expression={([Management.ManagementDateTimeConverter]::ToDateTime($_.logdate)) };Label= "Logdate"},`

                                                                Message,

                                                                ProcessInfo )

                $DataTable = Out-DataTable -InputObject $Data

                Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName tbl_SQLErrorLog -Data $DataTable

                } -ArgumentList $Query, $ComputerName

}

The output from the command is shown in the image that follows. 

Image of command output

And the Oscar goes to Windows PowerShell again. Here is the CREATE TABLE command from SQL Server Management Studio.

Image of command output 

Can you imagine doing this in Windows PowerShell 3.0 by using workflows and the ForEach statement in parallel? I can...and I DO need a good Brazilian coffee to digest the idea.

 Tip   You can schedule and run it in a SQL Server Agent job, but you need to add one line to the code.

For more information, see my post about it, Dooh PowerShell Trick–Running Scripts that Has Posh Jobs on a SQL Agent Job.

Creating alerts to specific problems

I need an alert for a specific problem that is logged in the SQL Server error log. Sometimes we need an alert that is so specific and temporary that it is hard for a third-party tool to provide it. In that case, we can use the Register-WmiEvent cmdlet.

Note   I will not discuss WMI, WQL, and temporary events in depth. If you want to study them, I recommend the ebook written by Windows PowerShell MVP, and a great friend, Ravikanth Chaganti, WMI Query Language via PowerShell. You can also investigate my articles on Simple-Talk and on my blog. You should also review An Insider's Guide to Using WMI Events and PowerShell, which references more than 300 pages of Hey, Scripting Guy! blogs on the topic.

In the WMI for server events, we can configure an alert to audit a table statement by using the ALTER_TABLE class and the WQL query "Select * from ALTER_TABLE". This is simple because the class is an event class. Unfortunately, the Computer Management WMI provider does not include an event class, so we need to use a generic WQL event query. Let's configure an event that fires when a logon fails on a computer named Yoda, monitored by using remote WMI from my workstation, R2D2.

$query = "Select * FROM __InstanceCreationEvent WITHIN 1
    WHERE TargetInstance ISA 'SqlErrorLogEvent'
    and TargetInstance.Message like '%Login failed for user%'"

                                                               

Register-WmiEvent -ComputerName Yoda `
    -Namespace "Root\Microsoft\SqlServer\ComputerManagement11" `
    -Query $query `
    -SourceIdentifier "SQLErrorLOG" `
    -Action { Write-Host -ForegroundColor Yellow "It WOOOORKS"; $global:MyEvent = $event }

Now, try to log on to Yoda by using SSMS with a logon that does not exist, and see if the event is triggered. First, a word about why I added $global:MyEvent = $event. All information about the event is stored in a variable called $event. Because Register-WmiEvent starts a Windows PowerShell job, the action runs in its own runspace that we cannot reach from outside, so the MyEvent global variable preserves a copy of $event for us.

After the event triggers, you can type $MyEvent to see all the information. For more information, see my blog, Create a Monitoring Server for SQL Server with PowerShell.
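The subscription that Register-WmiEvent creates is temporary and lives only in the current session; you can list it with Get-EventSubscriber and remove it with Unregister-Event. The sketch below uses Register-EngineEvent as a stand-in, because Register-WmiEvent needs the remote SQL Server namespace, but the SourceIdentifier, Action, and clean-up pattern are the same.

```powershell
# Sketch of the subscription lifecycle. Register-EngineEvent stands in for
# Register-WmiEvent so the example is self-contained; -SourceIdentifier and
# -Action behave the same way.
Register-EngineEvent -SourceIdentifier "SQLErrorLOG" -Action {
    $global:MyEvent = $event   # capture the event outside the job's runspace
} | Out-Null

# Raise a fake event, as a failed login would in the real subscription.
New-Event -SourceIdentifier "SQLErrorLOG" `
          -MessageData "Login failed for user 'han'" | Out-Null

# Give the engine a moment to run the action, then inspect and clean up.
Start-Sleep -Milliseconds 500
Get-EventSubscriber | Select-Object -ExpandProperty SourceIdentifier
Unregister-Event -SourceIdentifier "SQLErrorLOG"
```

After Unregister-Event runs, Get-EventSubscriber no longer lists the subscription, and no further events are captured.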

That is it, guys. I hope you enjoyed it. Thanks to my friend, Ed Wilson, who kindly gave me the honor of a guest post in the major source of information for Windows PowerShell, the Hey, Scripting Guy! Blog.

~ Laerte

Thank you, Laerte, for a great blog post. I love the way you tailored your scenario and came back around to monitoring. Great job. Thanks for sharing.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace. 

Ed Wilson, Microsoft Scripting Guy

The Scripting Guy’s Reflections about TechEd 2012 in Orlando


Summary: Microsoft Scripting Guy, Ed Wilson, provides reflections about TechEd 2012 in Orlando, Florida.

Microsoft Scripting Guy, Ed Wilson, is here. Well, I wanted to do a short introductory piece about Microsoft TechEd 2012. Tomorrow I will post a guest blog written by Teresa Wilson, aka the Scripting Wife (in addition to our regularly scheduled blog post). In that blog, Teresa talks about the many conversations she had with members of the Windows PowerShell community. She will review resources she pointed people to and share some of her insights and humor. Then on Saturday and Sunday, we have guest blogs written by the two winners of the 2012 Scripting Games. Neither of our winners had been to TechEd before, and sharing their experiences was a wonderful opportunity for them. I am certain you will enjoy all these blogs immensely.

So, what did I like about TechEd 2012 in Orlando, Florida? Well, for one thing—indeed the main thing—was being able to meet some of the thousands of people who read the Hey, Scripting Guy! Blog on a daily basis, and share their enthusiasm for one of the most exciting technologies to come along in years.

During the four days of TechEd, there was a steady stream of people through the Scripting Guys booth. In fact, there was never a time when there were not at least a couple of people hanging out at the booth asking questions about Windows PowerShell, chatting with our guests, or discussing things with fellow visitors. Because of our schedule of guests, people knew that they could come to the Scripting Guys booth at a certain time and find guests such as Jeffrey Snover, Don Jones, Jeffery Hicks, Mark Minasi, or any one of the dozens of other guests we had lined up. This kept interest up during the TechEd event, and helped ensure a steady flow of visitors.

Because there were always interesting people hanging around, more people came around to talk. In the following photo, Daniel Cruz, Rohn Edwards, Lido Paglia, and TJ Turner discuss some of the finer points of Windows PowerShell in an impromptu discussion that quickly escalated into a full-blown geek fest.

Photo from TechEd

One of the highlights (actually two of the highlights) of TechEd was the two book signings I did at the O’Reilly booth. The first one sold out, leaving dozens of disappointed fans empty-handed. So Ken Jones, my acquisitions editor, obtained another box of my Windows PowerShell 2.0 Best Practices book, and he quickly arranged a second signing. It was a lot of fun, and a great chance to meet people. Here is the sign that announced the first book signing.

Photo from TechEd

So, what were some of the common questions? One common question was, “What good is Windows PowerShell?” This question led me to write the Hey, Scripting Guy! Blog post, Top Five PowerShell Tasks a User Might Need to Accomplish.

Another question I received many times was related to the first question, but was a bit different. It was, “What is Windows PowerShell?” Based on this question, I wrote the blog, Describe Windows PowerShell to Four Types of Users.

The subject of Windows PowerShell best practices came up again and again. Of course, there were the two Windows PowerShell Best Practices birds-of-a-feather sessions that I did with Don Jones and with Jeffery Hicks, in addition to my book signing sessions. But lots of people also came by the Scripting Guys booth to talk about Windows PowerShell best practices. I used questions that were asked during the two talks to write the blog, The Top Ten PowerShell Best Practices for IT Pros.

During the long road trip that took us first to SQL Saturday in Pensacola, then to TechEd in Orlando, and finally to IT Pro Camp in Jacksonville, I was able to do a bit of coding (something I was completely unable to do during the 16-hour days at TechEd). I wrote two add-ons for the Windows PowerShell ISE.

The first add-on copies a script from one pane in the ISE to a new pane. This is great for when you want to edit a script without messing up your source code. It is also useful for debugging a script. The second add-on automatically indents your code in the ISE. It uses spaces instead of tabs, and the amount of indentation is configurable. I was also able to find time to blog about these two add-ons: the first is Weekend Scripter: Copy Text From One Tab in the PowerShell ISE to the Next, and the second is Weekend Scripter: Automatically Indent your PowerShell Code in the ISE.

I invite you to follow me on Twitter or Facebook. If you have any questions, send email to me at scripter@microsoft.com or post them on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Weekend Scripter: 2012 PowerShell Scripting Games Beginner Winner TechEd Report


Summary: The Windows PowerShell 2012 Scripting Games Beginner category winner writes about his experience at Microsoft TechEd in Orlando.

Microsoft Scripting Guy, Ed Wilson, is here. This weekend we will hear the impressions about Microsoft TechEd 2012 North America from the two winners of the 2012 Scripting Games. The winners, as you may recall, won free passes to TechEd 2012. Today we hear from Lido Paglia, the winner in the Beginner category of the 2012 Scripting Games.

Lido Paglia is an IT Pro from the Philadelphia area working in higher education where he serves as a systems engineer supporting Microsoft Exchange Server and SharePoint. The following picture is Mike F. Robbins on the left and Lido on the right. (Mike was the third-place winner of the Beginner category in the 2012 Scripting Games).

Photo at TechEd

Take it away Lido…

My motivation to participate in the 2012 Scripting Games was to see if I really could submit what the judges would consider "quality" scripts or one-liners that met the requirements. I honestly did not even entertain winning so much as seeing how I performed in each event. So when I found out I was headed to TechEd, I was pleasantly surprised, and I did not quite realize all the additional benefits that were coming my way.

As the countdown to TechEd rolled on during the weeks that followed the Scripting Games, I wasn't quite sure what to expect. This was my first time attending TechEd so I began following the #msTechEd hash tag on Twitter to keep up with the buzz surrounding the conference. I also set my sights on using the schedule builder to pick some sessions to attend. A number of great sessions were on the agenda, especially about Windows PowerShell.

After flying into Orlando, getting registered for the conference, and dropping my bags at the hotel early Sunday evening, it was time to head to The Krewe Meet ‘n Greet party. Shortly after entering the door, I ran into the Scripting Guy and the Scripting Wife. Seconds later, they had me shaking hands with Aleksandar Nikolic, Don Jones, Jason Hofferle, Sean Kearney, and Daniel Cruz to name a few.

The Krewe party was the kick-off to my favorite part of TechEd, which was getting to meet so many great people in and around the Windows PowerShell community. Fellow IT Pros, developers, well-renowned authors, bloggers, community members, and members of the Windows PowerShell product team were all there. These are all people who I was familiar with by way of their books, blogs, or tweets, and getting to meet and speak with so many of them really made my trip to TechEd special. Ed Wilson, Rohn Edwards (the winner of the Advanced category for the 2012 Scripting Games), and I even got to have lunch with Jeffrey Snover, where we covered topics ranging from “it’s always sunny in Philadelphia” to the surprisingly small number of people working on the Windows PowerShell team.

Meeting such amazing folks was not the only thing going on at TechEd. During the week, I sat in some hands-on labs, and I even got a quick tour of the hardware that hosts some 3700 virtual machines. I attended a number of filled-to-capacity sessions, like Mark Russinovich's Malware Hunting with the SysInternals tools, where Mark demoed remediating Stuxnet and Flame. I even took a free certification exam. All that, plus the recorded sessions (which I'm now downloading via a Windows PowerShell script), kept me quite busy.

The following photo represents one such meeting with some of the great people at TechEd. Left to right are: Osama Sajid (program manager for Windows Server Manageability), Rohn Edwards, and me.

Photo at TechEd

Speaking of sessions…One of my favorites was Advanced Automation Using Windows PowerShell 3.0 by Hemant Mahawar and Travis Jones. This session provided a sneak peek at the future of IT automation on the Windows platform. Complementary to that session was a Friday morning post-conference session where Travis and Hemant basically hosted a script club, walking us through the steps of building and deploying an environment leveraging Workflow in Windows PowerShell 3.0 on our own laptops. What a neat way to wrap up the week.

I cannot talk about TechEd without extending a special thank you to Ed and Teresa. The Scripting Guy and Scripting Wife really made my first time attending TechEd a welcome and unforgettable experience. Hanging out at the Scripting Guys booth was great fun. I am looking forward to participating in next year's Scripting Games (I’ll be stepping up to the Advanced category), and I hope you are too.

~Lido

Lido, great report. Thank you for taking the time to share your experiences with us. It was great to meet you and to have the opportunity to hang out for a few days. I am glad you are aiming to up your game next year as you dive into the Advanced category. I can tell you that the competition is stiff! Everyone, it is best to begin preparation now—it is only 10 months until the Scripting Games 2013.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 
