Channel: Hey, Scripting Guy! Blog

The Best Way to Use PowerShell to Delete Folders


Summary: Microsoft Scripting Guy, Ed Wilson, discusses three ways to use Windows PowerShell to delete folders and then selects the best.

Hey, Scripting Guy! I have a question. I occasionally need to delete a large number of folders. What is the easiest way to do this?

—BR

Hello BR,

Microsoft Scripting Guy, Ed Wilson, is here. There are just as many ways to delete directories by using Windows PowerShell as there are ways to create new directories. Yesterday, I discussed four ways to create new folders by using Windows PowerShell. Today I want to talk about deleting directories, and I will show you three ways to delete folders. Unlike yesterday, I want to talk about what I consider the best way to delete a directory first.

Method 1: Use native cmdlets

To delete folders, I like to use the Remove-Item cmdlet. There is an alias for the Remove-Item cmdlet called rd. Unlike the md function, rd is simply an alias for Remove-Item. The following command reveals this information.

PS C:\> Get-Alias rd

CommandType     Name                               Definition
-----------     ----                               ----------
Alias           rd                                 Remove-Item

One of the main reasons I like to use the Remove-Item cmdlet to delete folders is that it implements the WhatIf switch. This means that I can run a command, such as deleting a bunch of folders, and see exactly which folders the command will remove. This technique is shown in the image that follows.

Image of command output
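Although the original image is not reproduced here, the command presumably looks something like the following sketch (the C:\test* path is an assumption based on the test folders created later in this post):

```powershell
# Preview the deletion: -WhatIf lists each folder Remove-Item would
# remove, without actually deleting anything.
Remove-Item -Path C:\test* -Recurse -WhatIf

# After reviewing the output, remove -WhatIf to perform the deletion.
Remove-Item -Path C:\test* -Recurse
```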

After I examine the information that is returned by the WhatIf switch, I use the Up arrow to retrieve the command, and I then use the backspace to remove the –whatif portion of the command. After it is edited, I run the command, and no information returns from the Remove-Item cmdlet. This command is shown here.

Image of command output

OK, I deleted my test directories, so it is time to create some new ones. The following code creates four test directories off of the root.

PS C:\> 1..4 | % {md "test$_"}

    Directory: C:\

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         2/21/2012  11:10 AM            test1
d----         2/21/2012  11:10 AM            test2
d----         2/21/2012  11:10 AM            test3
d----         2/21/2012  11:10 AM            test4

To ensure that the test folders appear in the place I am expecting, I use the dir command (alias for Get-ChildItem) as shown here.

PS C:\> dir c:\test*

    Directory: C:\

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
d----         2/21/2012  11:10 AM            test1
d----         2/21/2012  11:10 AM            test2
d----         2/21/2012  11:10 AM            test3
d----         2/21/2012  11:10 AM            test4

Method 2: FileSystemObject still works

Now, it is time to look at another method for deleting directories: the use of FileSystemObject. I first need to create an instance of FileSystemObject, and then I can use the DeleteFolder method. These two commands are shown here.

$fso = New-Object -ComObject scripting.filesystemobject
$fso.DeleteFolder("C:\test*")

These commands, together with the dir command used to check the status of the four test folders, are shown in the image that follows.

Image of command output

Method 3: Use .NET classes

The third way I want to illustrate uses the .NET Framework System.IO.Directory class to delete a folder. It is a bit more complicated. For one thing, it does not accept wildcard characters in the path. An example of this is shown in the image that follows.

Image of command output
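Although the image is not reproduced here, the failing call is presumably something like the following sketch (path assumed); the .NET class treats the wildcard as a literal directory name:

```powershell
# This fails: the System.IO.Directory class does not expand wildcards,
# so "C:\test*" is treated as a literal (nonexistent) path and the
# call throws an exception.
[System.IO.Directory]::Delete("C:\test*")
```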

The solution is to use Windows PowerShell to obtain the folders to delete, and then use the ForEach-Object cmdlet to call the method. The code to do this is shown here.

dir C:\test* | foreach { [io.directory]::delete($_.fullname) }

The use of the command and the associated output are shown in the image that follows.

Image of command output

BR, that is all there is to using Windows PowerShell to delete folders. Join me tomorrow when I talk about more cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Use PowerShell to Back Up Modified Files to the Network


Summary: Learn how to use Windows PowerShell to back up modified files to the network.

Hey, Scripting Guy! I have a folder that contains files, some of which I modify on a daily basis. I am wondering if I can use Windows PowerShell to back up only the modified files—those that have changed that particular day?

—NG

Hello NG,

Microsoft Scripting Guy, Ed Wilson, is here. I don’t know if you have noticed it, but for the last couple of weeks, many of the topics have related to items that are to be covered in the 2012 Scripting Games. Registration is not yet open, and the PoshCode site for the 2012 Scripting Games is not yet up, but things are progressing along nicely. If you are planning to compete in the games, you should be reviewing the 2012 Scripting Games Study Guide. (If the truth be told, even if you are not competing in the Scripting Games, you should be reviewing the Study Guide because it contains great information about essential tasks faced on a daily basis by network administrators.)

Two topics to be covered in the 2012 Scripting Games are working with files and working with folders. NG, your question happens to hit both topics. At its most basic level, backing up modified files from a folder involves the following two tasks:

  1. Find all the files that have changed during a particular period of time.
  2. Copy the modified files to another location.

Find modified files

By using Windows PowerShell, this first task is incredibly simple; in fact, it is a one-liner. The one-line command that searches a folder named C:\data, and all of the folders contained inside that folder, for files that have been written to today is shown here.

dir c:\data -r | ? {$_.lastwritetime -gt (get-date).date}

Just a few notes about this command:

  • dir is an alias for the Get-ChildItem cmdlet
  • -r is short for -Recurse
  • ? is an alias for the Where-Object cmdlet
  • (get-date).date returns today's date as of midnight
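To see what the filter compares against, you can run the expression by itself; a quick sketch (the sample output is illustrative):

```powershell
# Get-Date returns the current date and time; the Date property
# truncates the time portion, yielding today's date at midnight.
Get-Date          # e.g. Tuesday, February 21, 2012 11:10:32 AM
(Get-Date).Date   # e.g. Tuesday, February 21, 2012 12:00:00 AM
```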

A longer and more readable version of the previous command is shown here.

Get-ChildItem -Path c:\data -Recurse | Where-Object { $_.lastwritetime -gt (get-date).date}

The two commands are exactly the same. In the image that follows, the first command is the short version of the command, and the second command is the long version of the command. The output from each is the same.

Image of command output

One problem with the two previous commands is that if a file inside a directory has been modified, the containing folder also reports as changed. For this particular scenario, NG is only interested in modified files, not the actual folders that contain the files. Therefore, the Where clause needs to change so that folders are filtered out, but modified files remain. The first version (the short version of the command) is shown here with the necessary addition.

dir c:\data -r | ? {!($_.psiscontainer) -AND $_.lastwritetime -gt (get-date).date}

For ease of comparison, and to better illustrate the problem, the first command (returns folders and files) and the second command (returns only files) are shown in the figure that follows.

Image of command output

Note   For more information about using and accessing special folders, refer to The Easy Way to Use PowerShell to Work with Special Folders. For more information about using Windows PowerShell to compress folders, refer to Using Windows PowerShell to Compress Folders.

Copy modified files to the network

Copying the modified files to the network is a rather easy task. I pipe the results of the command that obtains all the changed files to the ForEach-Object cmdlet, and then I use the Copy-Item cmdlet to copy the files to the network shared folder.

Unfortunately, the Copy-Item cmdlet is not smart enough to accept the objects that are returned by the Get-ChildItem cmdlet as direct input. It will accept strings that represent paths to files as pipelined input, but not the results from Get-ChildItem. When I see that a command is unable to accept pipelined input in the way I want to provide it, it nearly always means that I can accomplish what I want by using the ForEach-Object cmdlet. The two parameters used by the Copy-Item cmdlet are the path to the original file and a path to the destination. The Destination parameter only needs the path to the folder, not the complete path to the actual file name. The % symbol is an alias for the ForEach-Object cmdlet.

dir c:\data -r | ? {!($_.psiscontainer) -AND $_.lastwritetime -gt (get-date).date} |
% {Copy-Item -path $_.fullname -destination \\hyperv1\shared\backup}

If you decide that you would like to use the Windows Task Scheduler to schedule the previous command, see Use Scheduled Tasks to Run PowerShell Commands on Windows.

NG, that is all there is to using Windows PowerShell to back up a folder. Join me tomorrow for more Windows PowerShell cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Test Connectivity on Remote Servers


Summary: Microsoft Scripting Guy talks about using Windows PowerShell to test connectivity after a network configuration change.

Microsoft Scripting Guy, Ed Wilson, is here. It seems that there are always good news/bad news types of scenarios. After months of trying to obtain faster Internet connectivity at the house, I finally found a source that doubled the upload and download speeds for the Scripting Wife and me, at literally half the price we had been paying. Talk about a deal. Talk about not just good news, but GREAT NEWS. Now for the bad news part of the equation…

The router they showed up with at the house—part of the free installation—would not accept a static IP address on the Ethernet side of the equation that was compatible with my current network configuration—and I have a relatively complex configuration that involves multiple subnets.

Writing a quick script to ensure connectivity

When I ran into problems, I decided to write a really quick script to ping the most critical servers on my network to ensure connectivity. I wanted to ensure that DNS resolution worked, so I did the ping by name instead of by IP address. I manually created an array of computer names that I stored in a variable called $servers. To walk through the array of servers, I used the ForEach statement. The $s variable represents the actual computer name inside the loop. This is shown here.

$servers = "dc1","dc3","sql1","wds1","ex1"

Foreach($s in $servers)
{

To ping the computers, I use the Test-Connection cmdlet. In the definition of the Test-Connection command, I determine the buffer size, the number of pings to submit, and whether it returns detailed ping status information or a simple Boolean value. I specify a value of 0 for the ea parameter (ea is an alias for the ErrorAction parameter). The value of 0 tells the cmdlet not to display error information (an error displays when a ping is unsuccessful).

Note   For more information about using the Test-Connection cmdlet, refer to Query AD for Computers and Use Ping to Determine Status.

Because the Quiet switch causes the cmdlet to return a Boolean value, I am able to simplify the code a bit. I therefore state that if the command does not return a True value, I want to perform additional actions. This portion of the script is shown here.

if(!(Test-Connection -Cn $s -BufferSize 16 -Count 1 -ea 0 -quiet))

  {

Because I am hiding errors that are returned from the Test-Connection cmdlet, I decided to display my own status information. I print a message that lets me know there is a problem reaching the computer. I then run three commands: the first flushes the DNS cache, the second updates the DNS registration, and the third performs an NSLookup on the server. Finally, I ping the computer again. This portion of the script is shown here.

   "Problem connecting to $s"
   "Flushing DNS"
   ipconfig /flushdns | out-null
   "Registering DNS"
   ipconfig /registerdns | out-null
   "Doing an NSLookup for $s"
   nslookup $s
   "Re-pinging $s"
   if(!(Test-Connection -Cn $s -BufferSize 16 -Count 1 -ea 0 -quiet))
     {"Problem still exists in connecting to $s"}
   ELSE {"Resolved problem connecting to $s"} #end if
  } # end if
} # end foreach

When I ran the PingTest.ps1 script on my computer, the following output appeared in the Windows PowerShell ISE.

Image of command output

The complete PingTest.ps1 script is shown here.

PingTest.PS1

$servers = "dc1","dc3","sql1","wds1","ex1"

Foreach($s in $servers)
{
  if(!(Test-Connection -Cn $s -BufferSize 16 -Count 1 -ea 0 -quiet))
  {
   "Problem connecting to $s"
   "Flushing DNS"
   ipconfig /flushdns | out-null
   "Registering DNS"
   ipconfig /registerdns | out-null
   "Doing an NSLookup for $s"
   nslookup $s
   "Re-pinging $s"
   if(!(Test-Connection -Cn $s -BufferSize 16 -Count 1 -ea 0 -quiet))
     {"Problem still exists in connecting to $s"}
   ELSE {"Resolved problem connecting to $s"} #end if
  } # end if
} # end foreach

This script took me less than five minutes to write, and it saved me much more time than that in manual troubleshooting. If I needed the output in a text file, I would have called the script from the Windows PowerShell console and redirected the output to a text file. I did not need to do that, but it would have been simple.
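For example, a sketch of that redirection (the output file path is an assumption):

```powershell
# Run the script from the console and redirect its output to a text file.
.\PingTest.ps1 > C:\fso\PingTestResults.txt
```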

Note   For more information about writing to text files, see The Scripting Wife Learns to Work with Text Files.

If I needed to run the script from each of the critical servers (so they could check for connectivity to each other), I would have copied the script to a network share, and then used Windows PowerShell remoting to run the script on each of the servers. I would also have redirected the output to a text file, but in this example, I would have used a text file on the same network share as the script.
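A sketch of that approach, assuming Windows PowerShell remoting is enabled on the servers and using hypothetical share paths:

```powershell
# Run the shared script on each critical server by using remoting,
# and write the combined output to a text file on the same share.
$servers = "dc1","dc3","sql1","wds1","ex1"
Invoke-Command -ComputerName $servers -FilePath \\hyperv1\shared\PingTest.ps1 |
    Out-File -FilePath \\hyperv1\shared\PingTestResults.txt
```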

Note   For more information about Windows PowerShell remoting, see these Hey, Scripting Guy! blogs.

I invite you back tomorrow for the Weekend Scripter, when I will talk about copying Windows PowerShell Scripts to a network share.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Learn How to Use PowerShell to Configure Jump Lists


Summary: Microsoft PFE, Chris Wu, discusses using Windows PowerShell to configure Jump Lists.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome to the weekend. Today we have a special treat in the form of a guest blogger. The Scripting Wife and I had the good fortune to meet Chris Wu in Montreal, Canada when we were there doing a Windows PowerShell workshop for Microsoft Premier Customers. Here is a bit about Chris…

Chris Wu started his career at Microsoft in 2002, first as a support engineer in the Microsoft Global Technical Support Center in China to support the various components of the base operating system. Now he works as a premier field engineer in Canada, and he specializes in platform and development support. During the course of troubleshooting, performance tuning, and debugging, he has created many utilities to ease and facilitate the process by leveraging various programming languages, like C, C++, and C#. And Windows PowerShell has become his new favorite.

Photo of Chris Wu

Take it away Chris…

One of the many features that I like about Windows 7 is the much improved taskbar. Specifically, I am a big fan of Jump Lists, a feature that enables users to open favorite documents, pictures, websites, and utilities associated with an application. All this is accessible through a right-click on the application's taskbar icon—even without the application being started first.

As a serious Windows PowerShell user, you might, just like me, have been tempted to pin the Windows PowerShell ISE to the taskbar. You would then end up with the disappointment of having nothing on its Jump List, as shown here.

Image of Jump List

Not only does the application fail to create shortcuts to the Windows PowerShell console application or a "Run as Administrator" mode, but it also won't populate frequently used script files. (Remember that .ps1 documents are associated with Notepad instead of the Windows PowerShell applications, unless another scripting environment is installed.) So it is time to change it—by using Windows PowerShell scripts, of course.

Windows 7 provides Windows Shell APIs that allow applications to alter Jump Lists (and to achieve many other Windows Shell features). Unfortunately, these APIs are written in native code without a .NET implementation. Technically, it's possible to wrap a needed API in C# code and embed it into a Windows PowerShell script, but this approach is not my intention (and it is probably beyond my capability). Lucky for .NET programmers and Windows PowerShell scripters, Microsoft has already released the Windows API Code Pack for Microsoft .NET Framework, which makes our lives much easier.

As far as a Jump List is concerned, only two precompiled DLLs from the Windows API Code Pack are needed. So download the current release, and then in the Binaries folder, extract Microsoft.WindowsAPICodePack.dll and Microsoft.WindowsAPICodePack.Shell.dll to a folder, for example, C:\Tools. And now it’s time to have fun!

Add-Type -Path "c:\tools\Microsoft.WindowsAPICodePack.dll"
Add-Type -Path "c:\tools\Microsoft.WindowsAPICodePack.Shell.dll"

$JumpList = [Microsoft.WindowsAPICodePack.Taskbar.JumpList]::CreateJumpList()

$Link = new-object Microsoft.WindowsAPICodePack.Taskbar.JumpListLink `
   -ArgumentList "powershell.exe","PS Console"

$JumpList.AddUserTasks($Link)
$JumpList.Refresh()

The following image shows the script and its associated output.

Image of command output

Well, this list might still lack some elements to be called appealing, but it is indeed an achievement, considering that we made it with merely six lines of code. To get the most out of a Jump List, one needs to dig into the details of the taskbar classes and their members, and there are great resources available on the Internet.

Among the features that are provided by the Shell APIs, the following can be achieved rather easily:

  • Associate an icon to a Jump List item
  • Use a separator
  • Create custom-named categories to organize items

And here is the code snippet:

Add-Type -Path "c:\tools\Microsoft.WindowsAPICodePack.dll"
Add-Type -Path "c:\tools\Microsoft.WindowsAPICodePack.Shell.dll"

$JumpList = [Microsoft.WindowsAPICodePack.Taskbar.JumpList]::CreateJumpList()

$Link = new-object Microsoft.WindowsAPICodePack.Taskbar.JumpListLink `
  -ArgumentList "powershell.exe","PS Console"
$Link.IconReference = new-object Microsoft.WindowsAPICodePack.Shell.IconReference `
  -ArgumentList "powershell.exe,0"
$Links = ,$Link

$Links += New-Object Microsoft.WindowsAPICodePack.Taskbar.JumpListSeparator

$Link = new-object Microsoft.WindowsAPICodePack.Taskbar.JumpListLink `
  -ArgumentList "C:\Tools","Tools"
$Link.IconReference = new-object Microsoft.WindowsAPICodePack.Shell.IconReference `
  -ArgumentList "shell32.dll,3"
$Links += $Link

$JumpList.AddUserTasks($Links)

$Category = new-object Microsoft.WindowsAPICodePack.Taskbar.JumpListCustomCategory -ArgumentList "Utilities"
$Link = new-object Microsoft.WindowsAPICodePack.Taskbar.JumpListLink -ArgumentList "notepad.exe", "Notepad"
$Category.AddJumpListItems(@($Link))
$JumpList.AddCustomCategories($Category)

$JumpList.Refresh()

The script and its associated output are shown here:

Image of command output

You must have noticed the redundancy in the code snippet. Indeed, this is a golden opportunity to show the power of pipeline processing in Windows PowerShell. While doing so, I also added support for command parameters, document icons, and custom categories. So here we have a function called Set-JumpList:

function Set-JumpList {
  Param (
    [string] $DllFolder = ""
  )#End Param

  Begin {
    "Microsoft.WindowsAPICodePack.dll","Microsoft.WindowsAPICodePack.Shell.dll" |
      foreach-object {
        if ($DllFolder) { Add-Type -Path "$DllFolder\$_" -ErrorAction Stop }
        else { Add-Type -Path (get-command $_ -TotalCount 1 -ErrorAction Stop).Path -ErrorAction Stop }
      }

    $JumpList = [Microsoft.WindowsAPICodePack.Taskbar.JumpList]::CreateJumpList()

    $JumpList.ClearAllUserTasks()
    $Category = $null
  }#End Begin

  Process {
    $Name = ([string]$_.Name).Trim()
    $Path = [Environment]::ExpandEnvironmentVariables(([string]$_.Path).Trim())
    $Icon = [Environment]::ExpandEnvironmentVariables(([string]$_.Icon).Trim())
    $Parameter = ([string]$_.Parameter).Trim()

    if($Path -and ($Name -notmatch "^%%")) { # Try to resolve Path
      if (Test-Path $Path) { $Path = (Get-Item $Path).FullName }
      else { $Path = (Get-Command $Path -TotalCount 1 -ErrorAction SilentlyContinue).Path }
    }

    if (($Name -notmatch "^%%") -and !$Icon -and $Path) { # Try to locate the Icon reference from the registry
      try {
        if ((Get-Item $Path).PSIsContainer) { $Icon = (Get-ItemProperty "Registry::HKEY_CLASSES_ROOT\Folder\DefaultIcon")."(default)" }
        else { $Icon = (Get-ItemProperty ("Registry::HKEY_CLASSES_ROOT\" + (Get-ItemProperty ("Registry::HKEY_CLASSES_ROOT\"+(Get-Item $Path).Extension))."(default)" + "\DefaultIcon"))."(default)" }

        if ($Icon -match "^%1") { $Icon = "$Path,0" }

        $Icon = $Icon.Replace('"','')
        if ($Icon -notmatch ",") { $Icon += ",0" }
      } catch {}
    }#End if

    if ($Name -eq "%%") { # Separator
      if ( $Category ) { # Cannot have a separator inside a custom category
        $JumpList.AddCustomCategories($Category)
        $Category = $null
      }
      else { $JumpList.AddUserTasks((New-Object Microsoft.WindowsAPICodePack.Taskbar.JumpListSeparator)) } # Add a separator
    }
    elseif ($Name -match "^%%") { # New category
      if ( $Category ) { $JumpList.AddCustomCategories($Category) } # Already inside a custom category; register the previous one first
      $Category = new-object Microsoft.WindowsAPICodePack.Taskbar.JumpListCustomCategory -ArgumentList $Name.Substring(2)
    }
    elseif ( $Path -and $Name ) { # Add an item
      $Link = new-object Microsoft.WindowsAPICodePack.Taskbar.JumpListLink -ArgumentList $Path, $Name
      if ( $Icon ) { $Link.IconReference = new-object Microsoft.WindowsAPICodePack.Shell.IconReference -ArgumentList $Icon }
      if ( $Parameter ) { $Link.Arguments = $Parameter }

      if ( $Category ) { $Category.AddJumpListItems(@($Link)) }
      else { $JumpList.AddUserTasks($Link) }
    }#End if
  }#End Process

  End {
    if ($Category ) { $JumpList.AddCustomCategories($Category) }
    $JumpList.Refresh()
  }#End End
}#End function Set-JumpList

This function expects objects from the pipeline that have properties like Name, Path, Icon, and Parameter. These values are used to create items in the task list. The special name "%%" is reserved to create a separator in the Tasks section, and "%%categoryname" expressions can be used to create a custom-named category in the Jump List; the items that follow are added to that category.

Personally, I would use the ConvertFrom-Csv cmdlet to create custom objects and pipe them to the function. I'm using "|" as the delimiter because the icon definition retrieved from the registry sometimes contains a comma.

@"
Notepad|notepad
Calculator|calc
%%
PS Console|powershell
Command Prompt|cmd
%%Files
Config Script|C:\scripts\Config.ps1
ToDo|notepad|C:\scripts\todo.txt
%%Folders
Tools|c:\tools
"@ | ConvertFrom-Csv -Header Name,Path,Parameter,Icon -Delimiter "|" | Set-JumpList -Dll c:\tools

The -DllFolder parameter in the previous code snippet can be omitted if the two DLLs are located in one of the directories listed in the $env:Path environment variable.
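If you would rather rely on that behavior, one sketch (folder assumed) is to append the folder to the path for the current session:

```powershell
# Make C:\Tools discoverable for this session so that Get-Command
# inside the function can locate the two DLLs without -DllFolder.
$env:Path += ";C:\Tools"
```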

So here we have a very well customized Jump List. And did I mention that after it is created, the Jump List is persistent, regardless of the running status of the application itself? Make sure that you pin the Windows PowerShell ISE to the taskbar. Then you can always find this list by right-clicking the application icon from the taskbar—even without ISE running. You can see this in the following image.

Image of command output

Cheers!

~Chris

Thank you, Chris, for sharing a cool script and technique. The entire script can be found in the Scripting Guys Script Repository.

Join us tomorrow for a special report about the Scripting Games by Bartek Bielawski.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

How I won the Scripting Games, a Pass to TechEd, and Became an MVP


Summary: Windows PowerShell MVP, Bartek Bielawski, describes how he won the 2011 Scripting Games, a free pass to TechEd, and became an MVP.

Microsoft Scripting Guy, Ed Wilson, is here. Today, I am happy to have Bartek Bielawski as the guest blogger with a little something different. Bartek was the winner of the 2011 Scripting Games. In today’s blog, he writes about his experience and about the last year. For those of you who might not know Bartek, here is a little bit about him…

Bartek Bielawski is a Windows PowerShell enthusiast, and he is still a fairly new Windows PowerShell MVP. He loves automation, and he tries to automate systems whenever he can. Bartek describes his work life as, "Just a regular IT pro with a constant urge to learn more and share what he learned with others." He uses IRC, his blogs, and forums to give back to the community what he got from it—free of charge both ways.

Photo of Bartek Bielawski

Now for Bartek…

This is a true story...

Names have not been changed, because there was no reason to do so. It's a story about how the Scripting Games changed my life.

Flashback: October 1, 2011...

Received an email from Microsoft. Had to read it several times before it got through to my brain. I'm glad my family was not in the house when I read it. Imagine seeing your father jumping around the house with a strange smile on his face…

It was January 2010 when I started digging really deep into Windows PowerShell. I came late to the game, so I had a lot to pick up. I was reading the Hey, Scripting Guy! Blog a lot, and I spotted information about the PowerScripting Podcast. Back then, I was not listening to any podcasts, but I decided I would try this one. I got hooked on it pretty quickly, and I went all the way back to episode 0 to start from the very beginning. I knew I would have more than enough time to listen to the podcasts: time spent on my way to work and back was almost a perfect fit for one or two episodes, depending on the scale of the traffic jams. It was fun to listen to those guys and their guests.

I was getting more and more familiar with Windows PowerShell concepts and with the superstars of the Windows PowerShell world. I heard about the Scripting Games for the very first time when I was listening to a podcast. It was about the 2009 Summer Scripting Games, and I listened with amazement: a free contest for scripting enthusiasts? How much cooler can you get? Assuming, of course, that you are a scripting geek like me. I continued to listen to the podcasts, and I started paying more attention to the Hey, Scripting Guy! Blog. I knew the Scripting Games were coming…and I was getting ready.

Flashback: May 18, 2011...

I’m in Atlanta. I’m at TechEd. I’m at dinner, and I’m surrounded by Windows PowerShell people who I never dreamed I would meet in person. Can it get any better than that?

But I am ahead of myself…Finally, the 2010 Scripting Games started. At first, I was not concerned with my results. But when I noticed that I was getting pretty good grades, I started to compete in both the Beginner and the Advanced categories. But I didn't dare do both in Windows PowerShell. The Beginner category in Windows PowerShell was, in my opinion, too simple to make it elegant. So I entered the Beginner category in VBScript, and just for fun, in cmd.exe on my blog. I learned a ton, and I finished in third place.

At that point in my life, I was happy that I was not in first or second place for the simple reason that there was no way I could get to TechEd. And the prize I won for third place was mind-blowing. A summary of the 2010 Scripting Games from my perspective: wonderful prizes, a lot of new skills grabbed along the way to the final score, and a lot of fun when I was trying to find solutions. All for free, plus a piece of my spare time. It was totally worth it.

Flashback: May 6, 2011...

I'm on the air again. It does not matter that it is 2:30 AM where I live. My brain does not mind the clock—it's charged with adrenaline.

Since the 2010 Scripting Games, I haven't stopped. I started my blog shortly before those games took place, and I continue to share what I find out about Windows PowerShell there. I was reading books and chatting on the #PowerShell IRC channel (I attended the chat room during PowerScripting Podcast recordings). And of course, I was reading the Hey, Scripting Guy! Blog, and I was active on the Scripting Guys Forum on Microsoft TechNet. All that helped me develop my skills, but I felt the biggest "jump" was already behind me. I needed a challenge…I needed the Scripting Games. And finally, 2011 arrived, and so did the 2011 Scripting Games.

Flashback: June 2, 2010...

This must be a dream. I'm on the air. I will have a chance to answer the ultimate questions from the Windows PowerShell community. But wait—there's more! I will ask questions of people whose names I pronounce in ALL CAPS. And I will speak to the two guys who opened a lot of Windows PowerShell doors for me by inviting interesting guests and discussing interesting topics on their podcast. Luckily, my brain handled it well, and I did not pass out.

The 2011 Scripting Games was a really tough competition. Not only was I struggling with the tasks and the other competitors, I was struggling with myself and my ambitious soul, which does not handle defeat very well. But eventually I made it to the last event—I sent my last script and started to wait. Regardless of the results, I was satisfied. I had managed to complete all the tasks in the Advanced category, and I knew that only a few contestants did the same. I also tried to share with others what I thought about their scripts. I could not grade them, but I could add comments. That was great too—maybe even better than writing my own scripts?

Eventually, I won. And all the flashbacks that I mentioned in this blog resulted from my participation in the Scripting Games. My experiences are due to the knowledge I gained while participating in the games, and to my urge to learn more between events to increase my chances of winning the next time. I was encouraged by the prizes, like the TechEd invitation and an interview on the PowerScripting Podcast with Jeffrey Snover and Ed Wilson. I was grateful to others for letting me know what I did wrong, so I went out and did the same for them. I developed a feeling for the strong and friendly community around Windows PowerShell, which I wanted to participate in during my spare time.

Now I’m a Windows PowerShell MVP, I’m a Scripting Games winner, and hopefully, I am a judge in the 2012 Scripting Games. Would you like to follow the same route?

~Bartek

Thank you, Bartek, for a truly inspirational blog post! For those of you who are wondering how to get started…

Maybe next year, you will be telling your story about how you won the 2012 Scripting Games, won a pass to TechEd, and became an MVP. Don’t say it can’t happen—it just did!

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Copy Files to a Shared Drive


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to copy a script collection to a shared network drive.

Microsoft Scripting Guy, Ed Wilson, is here. Well, it is Monday in Charlotte, North Carolina, in the United States. Today is a cardio day. I spend the day running around going from meeting to meeting. I also spend a significant amount of time jumping through hoops to meet various deadlines for items that have no lead-time. The end result is a great workout that expends several hundred calories. Like running on a treadmill, it is a bit difficult to see any actual forward progress. But hey, such things are often necessary.

Note   If you missed yesterday’s guest blog by Windows PowerShell MVP, Bartek Bielawski, How I won the Scripting Games, a Pass to TechEd, and Became an MVP, you should go back and read it. It is truly inspirational.

Anyway, with a significant amount of time taken up by the system idle process, it is important that the remaining processes are efficient. One problem I have always had involves finding scripts I have written. For one thing, I have a hard time remembering what scripts I have written, and if I do not remember having written a script, it is hard to search for it. To put it another way…Often I end up browsing for my scripts, rather than searching for them. This is one reason I give my scripts such descriptive names—to make it easier for me to recognize them once I find them.

In the image that follows, I show the script folder that contains the scripts I wrote for the Windows 7 Resource Kit that was published by Microsoft Press.

Image of command output

For most of the chapters, I wrote an average of 15 scripts. But for some of the chapters, I did not write any scripts, and for other chapters I wrote as many as 40 scripts. In addition, each collection of scripts is related to a particular topic. Therefore, if I need to find the script I wrote that sets a static IP address, subnet mask, default gateway, and DNS server, I spend a lot of time clicking or I use the search to attempt to find the script.

For me, anyway, it is easier to look in a single folder for a script titled something like Set-StaticIPAddress.ps1. The thought of clicking through 35 folders and copying and pasting to another folder, however, really creeps me out—not to mention that my wrist and clicky finger would probably give out about half way through the process. No, this is a job for Windows PowerShell, not for the mouse. Yep, the pen is more powerful than the sword, and Windows PowerShell is more powerful than the mouse.

It took me less than a minute to come up with the following command.

dir C:\data\BookDOcs\Win7ResKit\Scripts -Recurse -Filter *.ps1 |

% { copy-item -Path $_.fullname -Destination \\hyperv1\shared\scripts }

The command takes advantage of aliases to shorten the command. The full version of the command is shown here:

Get-ChildItem -Path C:\data\BookDOcs\Win7ResKit\Scripts -Recurse -Filter *.ps1 |

Foreach-Object  { copy-item -Path $_.fullname -Destination \\hyperv1\shared\scripts }

There are a couple of things to keep in mind about this command. The first is that wildcards are permitted when I specify the path to copy the files. Therefore, one might expect that a command such as the following would work. It does not, because the path would then point to specific files—the path must point to a folder that serves as the starting point.

dir C:\data\BookDOcs\Win7ResKit\Scripts\*.ps1 -Recurse

In my command, I could have used the Include parameter instead of the Filter parameter because the Include parameter modifies the Path parameter. Therefore, the command that is shown here states that I want to start at the \scripts directory and burrow down until I reach the bottom (that is the Recurse portion of the command). I then want to include only the files that end with an extension of ps1. When you use the Include parameter, you need to use the Recurse switch for it to be effective.

dir C:\data\BookDOcs\Win7ResKit\Scripts\ -Recurse -Include *.ps1

Instead of using the Include parameter, I decided to use the Filter parameter. The idea is that the Filter should be more efficient because the provider should filter the files before returning them to Windows PowerShell, instead of returning everything to Windows PowerShell and causing Windows PowerShell to do the filtering.

To test this idea, I use the Measure-Command cmdlet. First I test the Include statement. Here is the command I run.

measure-command {dir C:\data\BookDOcs\Win7ResKit\Scripts\ -Recurse -Include *.ps1}

The results state that the command took 88 milliseconds—an impressive score. The command and associated output are shown in the image that follows.

Image of command output

Following a reboot (to take care of any caching advantages), I use the Measure-Command cmdlet to see the performance of the Filter parameter. Here is the command I use.

Measure-Command {dir C:\data\BookDOcs\Win7ResKit\Scripts -Recurse -Filter *.ps1 }

The results state that the command took 107 milliseconds (19 milliseconds longer). Keep in mind that the Measure-Command cmdlet is not accurate at the millisecond level; therefore, the results essentially state that the two commands took about the same amount of time. The command and the output from the command are shown here.

Image of command output

Keep in mind that this is a small test; it certainly is not conclusive. You should not rely on it when you need to move large amounts of data. But for small operations, such as the one I just performed, use either the Filter or the Include parameter, whichever one you are most comfortable with. After all, if it takes you an extra five minutes to get your command working just because you think that Filter will be faster, you have squandered your 19-millisecond advantage big time.
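Because Measure-Command is noisy at the millisecond level, one way to get a steadier comparison is to average several runs of each command. Here is a sketch of that idea (the path is the one from the examples above; substitute your own folder):

```powershell
# Average several timing runs of each variant to smooth out millisecond noise.
# Note that after the first run, file-system caching will favor later runs.
$path = 'C:\data\BookDOcs\Win7ResKit\Scripts'
$runs = 5

$filterAvg = (1..$runs | ForEach-Object {
    (Measure-Command { dir $path -Recurse -Filter *.ps1 }).TotalMilliseconds
} | Measure-Object -Average).Average

$includeAvg = (1..$runs | ForEach-Object {
    (Measure-Command { dir $path -Recurse -Include *.ps1 }).TotalMilliseconds
} | Measure-Object -Average).Average

"Filter average : {0:N0} ms" -f $filterAvg
"Include average: {0:N0} ms" -f $includeAvg
```

A reboot between the two loops, as described above, would remove the caching bias between the variants as well.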

I hope you have a great day and an awesome week. I look forward to seeing you tomorrow.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Configure Static IP and DNS Settings


Summary: Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to set the static IP and DNS addresses on a server.

Microsoft Scripting Guy, Ed Wilson, is here. One of the really cool things about computers is that you never get bored. At least for me this is true. For example, I have a server that has been running Exchange Server absolutely perfectly for more than a year. Today, it acted like a 150 pound St. Bernard that had become bored. That’s right, it threw a fit. Why did it do so? Well, I had changed the IP network configuration, and I did not change the IP address on this machine. For some reason, the IP changes caused a race condition in Exchange Server, and I could hardly get control of the box. I logged on to the computer, but I was unable to use the graphical tools to set a new IP address on the box—things would spin in circles, and then disappear. Dude, what now?

Well, I thought I would run my Set-StaticIPAddress script on the machine to set the address, but there were two problems:

  1. The script execution policy on my Exchange server does not permit the running of scripts.
  2. The server is completely isolated and cannot contact my script share.

So what can I do? Well, I opened the Windows PowerShell ISE and typed the commands that follow into the script pane.

SetStaticIP.ps1

$wmi = Get-WmiObject win32_networkadapterconfiguration -filter "ipenabled = 'true'"

$wmi.EnableStatic("10.0.0.15", "255.255.255.0")

$wmi.SetGateways("10.0.0.1", 1)

$wmi.SetDNSServerSearchOrder("10.0.0.100")

This is the cool part: even when the Windows PowerShell script execution policy is set to Restricted, which disallows the execution of scripts, it is still possible to open the Windows PowerShell ISE and run commands. This technique allowed me to type the four previous commands and to execute them all at once.

What do the four previous commands do?

The first command uses the Get-WmiObject cmdlet to retrieve the network adapter configuration for all the network adapters that are enabled for use with IP. To do this, I use the Win32_NetworkAdapterConfiguration WMI class. This class has a number of extremely useful methods—that is, it can do a lot! I store the resulting object in a variable I called $wmi. This line of code is shown here.

$wmi = Get-WmiObject win32_networkadapterconfiguration -filter "ipenabled = 'true'"

When the network adapter configuration object is stored in the $wmi variable, it is easy to use the following three methods: EnableStatic, SetGateways, and SetDNSServerSearchOrder. All that is required is to supply the required values. The EnableStatic method requires the static IP address in addition to a subnet mask. Each of these values is a string. The SetGateways method also requires two parameters. The first parameter is the IP address of the gateway, and the second parameter is the metric. The last method I used is the SetDNSServerSearchOrder method. This code is shown here.

$wmi.EnableStatic("10.0.0.15", "255.255.255.0")

$wmi.SetGateways("10.0.0.1", 1)

$wmi.SetDNSServerSearchOrder("10.0.0.100")

The figure that is shown here illustrates the four lines of code, and the output from those commands.

Image of command output

When I ran the code, I could verify the IP address via the GUI tool. The actual IP configuration is shown here.

Image of IP properties

I am not sure why the race condition appeared in my Exchange Server. My guess is that it was trying really hard to contact domain controllers, DNS servers and the like, and it was unable to do so because of the IP changes. When I fixed the problem, Exchange Server settled down. The cool thing is that I was able to use Windows PowerShell to fix the problem, even though I could not run a script. The four commands I used are the essence of a script, but because I did not save them as a .ps1 file prior to execution, Windows PowerShell saw them as just another group of commands.
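One refinement worth noting: each of these WMI methods returns an object whose ReturnValue property is 0 on success and 1 when a reboot is required; any other value is an error code. Here is a sketch that checks the result of EnableStatic (it assumes a single IP-enabled adapter, as in the example above):

```powershell
# Retrieve the IP-enabled adapter configuration, then verify the method result.
$wmi = Get-WmiObject win32_networkadapterconfiguration -Filter "ipenabled = 'true'"
$result = $wmi.EnableStatic("10.0.0.15", "255.255.255.0")
switch ($result.ReturnValue) {
    0       { "Static address set successfully" }
    1       { "Static address set; reboot required" }
    default { "EnableStatic failed with return code $($result.ReturnValue)" }
}
```

The same check applies to SetGateways and SetDNSServerSearchOrder, which return the same kind of result object.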

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Use PowerShell to Replace netdom Commands to Join the Domain


Summary: Learn how to replace netdom commands with simple Windows PowerShell cmdlets to rename and reboot the computer or join the domain.

Hey, Scripting Guy! Question Hey, Scripting Guy! It seems that I have been hand building a number of computers recently for a computer lab we are setting up at work. I have written a batch file that uses netdom commands to join the domain. I also use a netdom command to rename the computer, and the shutdown command to restart the computer. The syntax for each of these three commands is rather complex and convoluted. A strange thing is that it seems I can do this on Windows Server R2, but I cannot do this on Windows 7. What gives?

—AD

Hey, Scripting Guy! Answer Hello AD,

Microsoft Scripting Guy, Ed Wilson, is here. Well, this afternoon I am drinking something a bit different. I decided to make a cup of masala chai. (The word chai, or many of its variations, simply means tea in many languages. Therefore, to speak of chai tea is redundant.) Anyway, I decided to use Darjeeling tea, brewed a little strong, and I added cloves, cardamom, a cinnamon stick, fresh-ground pepper, and 1/3 cup of warm milk. Coupled with an Anzac biscuit, it was quite nice.

AD, the reason that you cannot use your batch file (containing netdom commands) on Windows 7 is that by default Windows 7 does not contain the netdom command. You can add netdom to your computer running Windows 7 by installing the latest version of the Remote Server Administration Tools (RSAT). When it is installed, you still need to go to Programs and Features and turn on the tools you want to load. The RSAT tools are great, and that is where you gain access to the Active Directory module. But you should not load the RSAT only to access netdom, because you can accomplish what you want out of the box (assuming that your box is not running Windows 7 Home edition, which cannot join domains).

AD, your batch file contained at least three commands to rename the computer, join the domain, and to restart the machine. The two netdom commands and the shutdown command are shown here.

netdom renamecomputer member /newname:member1.example.com /userd:administrator

netdom add /d:reskita mywksta /ud:mydomain\admin /pd:password

shutdown /r

In Windows PowerShell 2.0, this is still three commands, but at least the commands are native to Windows 7. In addition, the Windows PowerShell commands are easier to read, and they support prototyping. An example of using Windows PowerShell to add a computer to the domain, rename the computer, and reboot the machine is shown here.

(Get-WmiObject win32_computersystem).rename("newname")

add-computer -Credential iammred\administrator -DomainName iammred.net

Restart-Computer

In the first command, I use the Get-WmiObject cmdlet to retrieve the Win32_ComputerSystem Windows Management Instrumentation class. (The Get-WmiObject cmdlet has an alias of gwmi, and it will also take credentials if required.) Because this class returns only one instance, I can use my group and dot trick (see My Ten Favorite Windows PowerShell Tricks) to directly call the Rename method to rename the computer.

After I rename the computer, I use the Add-Computer cmdlet to join the computer to the domain. The Add-Computer cmdlet allows me to specify the credentials that have rights to add computers to the domain, in addition to the name of the domain to join. Although I did not do it in my example, there is also an ou parameter that allows you to specify the path to the OU that will contain the newly created computer account.

The last command, Restart-Computer, appears without any parameters. This means that the computer will restart within one minute, and it will attempt to cause processes to politely exit (generally a good thing). For emergency type of situations, there is the Force switch that will cause the computer to immediately restart, and not wait on processes to politely exit. The use of this optional parameter can lead to data loss in some situations.
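The three commands can also be chained so that the join and restart happen only when the rename succeeds (the Rename method returns an object whose ReturnValue property is 0 on success). Here is a sketch that uses the same names and credentials as the example above; substitute your own:

```powershell
# Rename the computer, then join the domain and restart only if the rename succeeded.
$rename = (Get-WmiObject win32_computersystem).Rename("newname")
if ($rename.ReturnValue -eq 0) {
    Add-Computer -Credential iammred\administrator -DomainName iammred.net
    Restart-Computer
}
else {
    "Rename failed with return code $($rename.ReturnValue)"
}
```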

In the image that follows, I first use the Get-WmiObject cmdlet to rename the computer. The image shows the return value is 0, which means that the command completed successfully. Next, I use the Add-Computer cmdlet to join the computer to the iammred domain by using the administrator credentials. Upon hitting ENTER, a dialog box appears that requests the password for the credentials.

The command completed successfully, but a warning message states that a reboot is required for the change to actually take place. The last command shown in the image uses the Restart-Computer cmdlet to restart the computer. I added the WhatIf parameter to illustrate what happens when using the WhatIf parameter (and to permit myself time to make the screenshot).

Image of command output

After I remove the WhatIf switch, and rerun the Restart-Computer cmdlet, a message box appears that states the computer will shut down in a minute or less. After the quick reboot, I am able to switch from using a local account to a domain account, because the computer has now joined the domain. The commands are short, sweet, easy to remember, and easy to use. None of these commands require a script, in fact, they could easily be run as imported history commands. For more information about working with the Windows PowerShell history cmdlets, see this collection of Hey, Scripting Guy! blogs.

AD, that is all there is to using Windows PowerShell to rename a computer and to join it to the domain. Join me tomorrow for more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


The Easy Way to Use PowerShell to Move Computer Accounts


Summary: Use the Active Directory module and Windows PowerShell to move computer accounts.

Hey, Scripting Guy! Question Hey, Scripting Guy! I need to figure out a way to manage computer accounts in Active Directory. We are running Windows Server 2008 R2. I have seen some VBScript scripts to manage computer accounts, but they are rather complicated. In fact, many of the Windows PowerShell scripts that I have seen look the same way. I do not want to install any third-party stuff (due to company security policy). What I want to do immediately is move computers from one OU to another one. Is there an easy way to do this?

—DJ

Hey, Scripting Guy! Answer Hello DJ,

Microsoft Scripting Guy, Ed Wilson, is here. Well, it is a perfect day down here in Charlotte, North Carolina in the United States. In fact, I would say it is days like this that make up for the heat and humidity of the summer down here. Anyway, I am sitting with my laptop on the lanai, sipping a cup of Jasmine Dragon Pearl tea, and checking the email that has been sent to scripter@microsoft.com, and DJ, I ran across your email.

DJ, you are right that some Windows PowerShell scripts to manage computer accounts look amazingly like VBScript scripts that perform the same task. This is because at their heart, they use the same technology. I have written many such scripts myself. There are several advantages to this approach—the first is that they are certain to work and they have no external dependencies. In fact, such scripts work across the broad spectrum of servers. The only dependency is having at least Windows PowerShell 1.0 on the computer. The second advantage is that if one is very familiar with ADSI scripting, from a VBScript or other automation background, the Windows PowerShell version is immediately understandable. That is, there is no learning curve if you already know and understand ADSI.

The two prior advantages aside, if you have the necessary infrastructure (and you do), there is absolutely no reason not to use the cmdlets from the Active Directory module. In fact, by getting familiar with them now, you are putting yourself in a great position to move forward in the future.

Note   I have written several blogs that detail working with the Active Directory module.

In addition to being available when you enable the Active Directory Domain Services (AD DS) role on a computer running Windows Server 2008 R2, you can also install the Active Directory Management Service (see Install Active Directory Management Service for Easy PowerShell Access). To use the cmdlets, you can install the Remote Server Admin Tools (RSAT) on your computer running Windows 7, or you can use Windows PowerShell remoting (see What's up with Active Directory Domain Services Cmdlets?).

By using the Active Directory module, it is super easy to manage computer accounts. If you have the RSAT tools installed, the first thing to do is to import the Active Directory module by using the Import-Module cmdlet. If the RSAT tools are not installed, use Windows PowerShell remoting and create a remote session to a computer that has the Active Directory module. To do this, use the Enter-PSSession cmdlet. This is the option I used in the image that follows.

Image of command output

In the image that follows, I need to move the Win7-c1 computer from the Test Organizational Unit to the Charlotte Organizational Unit.

Image of folder

To do this by using Windows PowerShell and the AD DS cmdlets is relatively easy. I can use the Move-ADObject cmdlet. The Move-ADObject cmdlet uses the Identity parameter to identify the object to move. The Identity parameter accepts either DistinguishedName or ObjectGuid to identify the object to move. While it is possible one might know the actual DistinguishedName of a computer, it is very unlikely that one would know the ObjectGuid. But even knowing the DistinguishedName of an object does not mean it is easy to type. Luckily, the Get-ADComputer cmdlet does not have these restrictions. It is easy to use the Get-ADComputer cmdlet to retrieve the information needed for Move-ADObject. The following command shows the output from Get-ADComputer.

[dc3]: PS C:\> Get-ADComputer win7-c1

DistinguishedName : CN=WIN7-C1,OU=test,DC=iammred,DC=net

DNSHostName       : WIN7-C1.iammred.net

Enabled           : True

Name              : WIN7-C1

ObjectClass       : computer

ObjectGUID        : e922119e-377e-4eef-a4db-aff340ac0022

SamAccountName    : WIN7-C1$

SID               : S-1-5-21-1457956834-3844189528-3541350385-1134

UserPrincipalName :

One way to use the Get-ADComputer cmdlet is to have it retrieve the ObjectGuid for you. This is shown here (keep in mind that this is a single line command, and I have not included any line continuation marks).

Move-ADObject -Identity (Get-ADComputer win7-c1).objectguid -TargetPath 'ou=charlotte,dc=iammred,dc=net'

It might be easier—and certainly it is easier to understand—to use the pipeline. When using the pipeline to move the computer to another organizational unit, use the Get-ADComputer cmdlet to retrieve the computer object, and then pipe it to the Move-ADObject cmdlet. This command is shown here.

get-adcomputer win7-c1 | Move-ADObject -TargetPath 'ou=charlotte,dc=iammred,dc=net'

The command, which produces no output, is shown in the image that follows.

Image of command output

If you do not know the distinguished name of the OU, use the Get-ADOrganizationalUnit cmdlet to find the distinguished name attribute. This technique is shown here.

Note   I split the command into two separate commands due to the length of the command. But it would be possible to do this as one really long command by grouping the Get-ADOrganizationalUnit command, and then using dotted notation to retrieve the DistinguishedName attribute.

$target = Get-ADOrganizationalUnit -LDAPFilter "(name=charlotte)"

get-adcomputer win7-c1 | Move-ADObject -TargetPath $target.DistinguishedName
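As mentioned in the note, the two commands can be collapsed into a single pipeline by grouping the Get-ADOrganizationalUnit command and using dotted notation. A sketch (this assumes the LDAP filter matches exactly one organizational unit):

```powershell
# Group the OU lookup and use dotted notation to pull out DistinguishedName inline.
get-adcomputer win7-c1 |
  Move-ADObject -TargetPath (Get-ADOrganizationalUnit -LDAPFilter "(name=charlotte)").DistinguishedName
```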

DJ, that is all there is to using the Active Directory module to move computers from one OU to another one.  Join me tomorrow when I will talk about more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Reset the Secure Channel on a Desktop


Summary: Learn three ways to use Windows PowerShell to reset the computer secure channel.

Hey, Scripting Guy! Question Hey, Scripting Guy! We have a problem with the computers in our computer classroom. We set up this classroom to teach new hires how to use our mission critical application. We are, however, not hiring as many people as we used to, and as a result it keeps getting longer and longer between classes. Now, we do not leave the computers turned on between classes, and often it takes an entire day to get the computers back on the domain when we decide to have another class. I researched the problem, and I have determined that the issue is with the computers being turned off for more than 30 days and the computers missing the secure channel password reset. Is there anything you can do to help?

—AP

Hey, Scripting Guy! Answer Hello AP,

Microsoft Scripting Guy, Ed Wilson, is here. The great thing is that we released the Windows Server “8” Beta, so I can finally talk about Windows Server “8” Beta just a little bit. I am sitting at my laptop, sipping a cup of peach and hibiscus leaf tea (it is naturally sweet and caffeine free), and playing with Windows Server “8” Beta in a virtual machine. It is very cool—especially from a Windows PowerShell perspective.

AP, I have seen your problem many times. From firing up preconfigured servers, to starting up virtual machines that have been turned off for extended times, to desktop machines that are turned off for more than 30 days. In some countries (not the United States) where workers get several weeks of vacation, it is not uncommon for a worker to take four weeks off at a stretch. (This could also be the situation in a job share arrangement.) When the worker comes back, the computer does not talk to the domain.

There are a couple of ways to handle this. One way is to increase the amount of time between the changes of the secure channel password (but I do not recommend this). Another way is to remove the computer from the domain, reboot the computer, join the computer to the domain, and reboot again. On my laptop (where it takes nearly 10 minutes for the laptop to become usable after a reboot), we are talking about a 30 minute process.

There are other alternatives to this multiple reboot scenario. Each of these solutions could easily be placed into a Windows PowerShell script.

Use netdom to reset the secure channel

Netdom is a multipurpose tool that started life as a resource kit utility. It grew up, and was added to the operating system. The problem is that it is not a default part of the client operating system. In Windows Server 2008 and Windows Server 2008 R2, netdom is available when the Active Directory Domain Services role (AD DS) is added. In Windows 7, access to netdom becomes available when you install the Remote Server Administration Tools (RSAT). The syntax can be a bit tricky with the forward slashes and colons, but it can be done from within Windows PowerShell. Make sure the Windows PowerShell console runs with admin rights prior to executing the command. A sample of the syntax is shown here.

netdom reset /d:devgroup.contoso.com mywksta

The disadvantage to using netdom is that it is not likely to be available on client workstations unless the RSAT is installed.

Use Test-ComputerSecureChannel

The Active Directory module (see yesterday’s blog) contains a cmdlet named Test-ComputerSecureChannel. When used, it returns a Boolean value that indicates whether the secure channel is working properly. Its use is shown in the following image.

Image of command output

If the Test-ComputerSecureChannel cmdlet returns False, use the Repair switch to repair the secure channel. One way to automate this would be to create a scheduled task that executes on startup and runs the Windows PowerShell command that is shown here.

if(!(Test-ComputerSecureChannel)) {Test-ComputerSecureChannel -Repair}
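One way to wire this up is with schtasks, registering a startup task that runs the check under the SYSTEM account. A sketch follows; the task name is hypothetical, quoting for schtasks may need adjustment in your environment, and the command must run from an elevated prompt:

```powershell
# Register a hypothetical startup task that repairs the secure channel if needed.
schtasks /create /tn "RepairSecureChannel" /ru SYSTEM /sc onstart /tr 'powershell.exe -Command "if(!(Test-ComputerSecureChannel)) {Test-ComputerSecureChannel -Repair}"'
```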

Of course there is one major problem with this approach: You need access to the Active Directory module. On a client computer running Windows 7, this means installing the RSAT. On the server, it means adding the AD DS role. On Windows XP, it means you are out of luck. Of course, you can enter a remote Windows PowerShell session, load the Active Directory module, and use the cmdlet. But then of course, that is not checking the secure channel on the local machine, but rather the one on the server to which you just connected. There is no ComputerName parameter available for the cmdlet.

In the Windows Server “8” Beta, the Test-ComputerSecureChannel cmdlet shows up by default. It is already installed and available. In Windows PowerShell 3.0 (which is in Windows Server “8” Beta), I do not even need to load any special module because the module that contains the cmdlet loads automatically on first use. The image that is shown here illustrates using Test-ComputerSecureChannel in Windows Server “8” Beta to test the secure channel.

Image of command output

Therefore, if you have access to the Test-ComputerSecureChannel cmdlet, it is certainly the easiest way to reset the secure channel.

Use Nltest

The Nltest command works inside Windows PowerShell, and it is installed by default in Windows 7 and Windows Server 2008 R2. To gain access to nltest in Windows Vista or earlier versions of Windows, it is necessary to install the admin tools.

Because nltest exists by default in Windows 7, Windows Server 2008 R2, and Windows Server “8” Beta, it is a good choice to use from an automation perspective. When you migrate everything to Windows Server “8” Beta (assuming that the Test-ComputerSecureChannel cmdlet exists in the RTM product), it will be the easiest to use. There are lots of switches and various ways of using nltest, but there is one command that will test the secure channel, and if it needs to be repaired, it will repair the channel. This command is shown here.

nltest /sc_verify:iammred

The image that follows illustrates using the command and the output that arises from the command.

Image of command output
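Like other external commands, nltest sets $LASTEXITCODE when it finishes, which makes it easy to script a follow-up action. A sketch (the domain name is the one from the example above):

```powershell
# Verify (and repair, if needed) the secure channel, then check the exit code.
nltest /sc_verify:iammred
if ($LASTEXITCODE -ne 0) {
    "Secure channel verification failed (exit code $LASTEXITCODE)"
}
```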

AP, that is all there is to using Windows PowerShell to reset the secure channel on workstations. Join me tomorrow for more Windows PowerShell coolness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Collect, Store, and Parse IIS Log Data


Summary: Guest blogger, Microsoft PFE Chris Weaver, shows how to use Windows PowerShell to collect, store, and parse IIS log data.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have back with us Chris Weaver.

Photo of Chris Weaver

I have been working at Microsoft since late 2008. During that time, I have been an engineer within CSS, a SharePoint 2010 TAP, and most recently, a dedicated premier field engineer working with several of our premier customers to support their SharePoint infrastructure. I have been using Windows PowerShell for the last two years to simplify the administration and troubleshooting of SharePoint for my customers. I enjoy camping with my family and kite surfing in my spare time. (Yeah, right. Who has any of that...)

Blog: Wondering Mind (about issues with SharePoint and its supporting infrastructure)

Raise your hands if you recently tried to parse through your IIS logs to get an answer. Did you find it easy? Did you still have the correct log files? One of my customers recently brought this problem to me saying that they were not happy with any of the current methods and wondered if I could make something to work with Windows PowerShell.

They wanted to be able to remove IIS logs from their web front ends when they needed to, and still maintain a long-term repository of this rich and valuable data. They had already centralized the IIS logs into one folder with the following structure: a parent folder for the repository, and then a folder for each web application that contained the files from the web front-end servers.

Image of folder

With the file collection already solved for me, I started on the script. I realized there were a few things I would have to accomplish:

  • Create database and table structures.
  • Extract data from the files. This is very simple because the IIS logs were Tab delimited with each entry on its own line. 
  • Clean-up the data.
  • Import the data into SQL Server.

It all turned out to be a lot simpler than I thought it would be. I started off by writing functions to do the following: 

Create a database

This function uses the Smo.Database class. It enumerates all the databases on the SQL Server and compares each database name to the one provided. If I find no match, I create the database. In either case, I return the database object.

function Create_Database
{
  param($SQLSvr, [string]$DatabaseName, [string]$DBServer)

  foreach($db in $SQLSvr.Databases) # Check to see if our database exists
  {
    if($db.Name -eq $DatabaseName)
    {
      return $db
    }
  }
  $db = New-Object Microsoft.SqlServer.Management.Smo.Database($SQLSvr, $DatabaseName)
  $db.Create()
  return $db
}

Create a table

By using the StringCollection class, I add a SQL CREATE TABLE statement to a string collection and then pass it to my execute-statements function. In that statement, I follow article 296085 in the Microsoft Knowledge Base to create all the correct columns. Be aware that if you want to change the type of IIS log that you use, you need to change the columns that you create.

function Create_Table
{
  param($DB, [string]$TableName)

  $TableScript = New-Object -Type System.Collections.Specialized.StringCollection
  $TableScript.Add("CREATE TABLE [dbo].[$TableName] ([date] [datetime] NULL,[time] [datetime] NULL ,[s-sitename] [varchar] (255) NULL,[s-computername] [varchar] (255) NULL ,[s-ip] [varchar] (50) NULL ,[cs-method] [varchar] (50) NULL ,[cs-uri-stem] [varchar] (512) NULL ,[cs-uri-query] [varchar] (2048) NULL ,[s-port] [varchar] (255) NULL ,[cs-username] [varchar] (255) NULL ,[c-ip] [varchar] (255) NULL ,[cs-version] [varchar] (255) NULL ,[cs(User-Agent)] [varchar] (512) NULL ,[cs(Cookie)] [varchar] (4096) NULL ,[cs(Referer)] [varchar] (2048) NULL,[cs-host] [varchar] (255) NULL ,[sc-status] [int] NULL ,[sc-substatus] [varchar] (255) NULL,[sc-win32-status] [varchar] (255) NULL,[sc-bytes] [int] NULL ,[cs-bytes] [varchar] (255) NULL ,[time-taken] [int] NULL)") | Out-Null
  Database_ExecuteNonQuery_Command $DB $TableScript #Create Table
}

Execute statements

This function executes any non-query statement that you provide as a string collection, by using the ExecuteNonQuery method. 

Note   The statement cannot run queries, such as SELECT, against the database.

function Database_ExecuteNonQuery_Command
{
  param($SQLDataBase, $CommandScript)

  $Error.Clear()
  $ExecutionType = [Microsoft.SqlServer.Management.Common.ExecutionTypes]::ContinueOnError
  $SQLDataBase.ExecuteNonQuery($CommandScript, $ExecutionType)

  trap {Write-Host "[ERROR]: $_"; continue}
}

Clean the log files

The IIS logs have headers and other lines that we do not want to import into the database. This function gets all the lines from the file by using Get-Content. By using Select-String with the NotMatch parameter, it removes any lines that match a regular expression pattern (the header lines that begin with #), and then it rewrites the file with all the good lines by using Set-Content.

function Clean_Log_File
{
  param ($LogFile)

  $Content = Get-Content $LogFile.FullName | Select-String -Pattern "^#" -NotMatch
  Set-Content $LogFile.FullName $Content
}

Then I started on the main logic: 

  • Do a little bit of error checking (this is a great place for you to improve because I have done only a little bit)
  • Add the type Microsoft.SQLServer.Smo and create a connection to my SQL Server
  • Create my database or find the preexisting one
  • Get all folders in the path provided
  • Create my table (one per subfolder)
  • Load all the files
  • Clean the file
  • Load the cleaned file into SQL Server
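Tying those steps together, the outer loop might look something like the following sketch. The helper functions are the ones defined above; the repository path, database name, and table-naming convention are assumptions for illustration, not the exact script from this post.

```powershell
# Sketch only: assumes the helper functions above and a repository folder
# that contains one subfolder per web application (hypothetical path).
$LogFolderPath = "D:\IISLogRepository"
$DatabaseName  = "IISLogs"

Add-Type -AssemblyName "Microsoft.SqlServer.Smo"
$SQLSvr   = New-Object Microsoft.SqlServer.Management.Smo.Server(".")
$Database = Create_Database $SQLSvr $DatabaseName "."

foreach($Folder in (Get-ChildItem -Path $LogFolderPath | Where-Object { $_.PSIsContainer }))
{
  Create_Table $Database $Folder.Name   # one table per web application
  foreach($LogFile in (Get-ChildItem -Path $Folder.FullName -Filter *.log))
  {
    Clean_Log_File $LogFile
    # ...the BULK INSERT and rename steps go here...
  }
}
```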

$LineScript = New-Object -Type System.Collections.Specialized.StringCollection
$LineScript.Add("BULK INSERT $Database.[dbo].[$TableName] FROM `"$File`" WITH (BATCHSIZE = 10,FIRSTROW = 1,FIELDTERMINATOR = ' ', ROWTERMINATOR = '\n')") | Out-Null
Database_ExecuteNonQuery_Command $Database $LineScript

Rename the file

I rename the file with an .old extension so that I do not read the file more than once.

Rename-Item $LogFile.FullName ([System.IO.Path]::ChangeExtension($LogFile.FullName, ".old"))        #Ensure we don't add contents of file to table again

Note   You need to run with elevated permissions to be able to write to the database. You can do this by typing runas when you open Windows PowerShell, or read my blog post about how to do this with Task Manager.

If everything works well, you should see the following in SQL Management Studio.

Image of command output

After the script finishes running, you will be able to run Select statements and other SQL queries against your database to find information such as: 

  • Top users
  • Top sites
  • Top five users getting unauthorized access to sites
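After the import, such queries can be run directly from Windows PowerShell by using the same SMO database object. In the following hedged sketch, the table name W3SVC1 is hypothetical; the column name comes from the CREATE TABLE statement earlier in this post.

```powershell
# Sample "top users" report; ExecuteWithResults returns a DataSet.
$Query = @"
SELECT TOP 10 [cs-username], COUNT(*) AS Hits
FROM [dbo].[W3SVC1]
GROUP BY [cs-username]
ORDER BY Hits DESC
"@
$DataSet = $Database.ExecuteWithResults($Query)
$DataSet.Tables[0] | Format-Table -AutoSize
```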

Watch for the next post, in which I will develop a script that automatically runs different reports from the information that we have collected.

~Chris

Thanks, Chris. This has been a great blog post. The script can be found in the Script Repository.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Keep Your PowerShell Versions Straight and Avoid Errors


Summary: Learn how to keep your Windows PowerShell versions straight, and avoid errors while using a one-line command to add computers to domain.

Microsoft Scripting Guy, Ed Wilson, is here. Earlier this week I wrote Use PowerShell to Replace netdom Commands to Join the Domain. It turned out to be a very popular post. To be honest, however, when I started writing that post, I intended to show that it could be done as a one-liner. This is because I have been using Windows Server “8” Beta for a long time now, and I was used to some of the new parameters in the Add-Computer cmdlet.

But I predict that for the next several months, things are going to be a bit squirrely as people attempt to balance working with Windows PowerShell 2.0 against the various iterations of Windows PowerShell 3.0. It happened in the transition from Windows PowerShell 1.0 to Windows PowerShell 2.0, and I am sure it will happen again this time around.

To be sure, Windows PowerShell 3.0 in Windows Server "8" Beta brings much goodness to the table, and it also simplifies syntax in a great many areas. Don’t panic, though. If you don’t know if something you wrote in Windows PowerShell 3.0 will work in Windows PowerShell 2.0, just go ahead and add the Requires statement to your script, and you will be safe. The use of the Requires statement is shown here.

#Requires -version 3.0

Yep, you are seeing that right; it is preceded with a pound character (the normal comment character). When I attempt to run a script that uses new functionality, and I know that the new functionality only resides in a specific version of Windows PowerShell, I add the Requires statement to the first line in the script. When the script runs on a down-level system, a message displays. The use of the Requires statement, and the accompanying message are shown in the image that follows.

Image of command output

Keep in mind that the Requires statement only works in scripts, not in functions, cmdlets, or snap-ins. In addition, when developing in the Windows PowerShell ISE, if the script is not saved, it is not considered a script. Therefore, it will not work. In the image that follows, an error appears stating that a ComputerName parameter cannot be found, instead of the version information that was presented in the previous image.

Image of command output

Removing a computer from the domain in Windows Server “8” Beta is a one-line command. It also reboots the computer. In addition, the ComputerName parameter permits the command to accept an array of remote computer names. Here is the basic command to remove a computer from the domain, join a workgroup called myworkgroup, and reboot.

Remove-Computer -computername win8c5 -workgroup myworkgroup –restart

As shown in the image that follows, using the Remove-Computer cmdlet without the Force switched parameter causes a warning message to appear. With the Force switched parameter in effect, no warning appears.

Image of command output
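For reference, here is the same command with the Force switch added so that it runs without prompting. (The parameter names are as shown above; because this is beta software, they may change.)

```powershell
Remove-Computer -computername win8c5 -workgroup myworkgroup -restart -Force
```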

Now…to get back to adding a computer to a domain by using Windows PowerShell. In Windows PowerShell 3.0, the Add-Computer cmdlet gains additional parameters. One useful parameter is the Restart parameter. This permits the use of a one-line command to add a computer to the domain. The use of this feature is shown here. (The following command is a single command; I have not added any line continuation characters to it.)

#Requires -version 3.0

Add-Computer -DomainName iammred -Credential iammred\administrator -restart -OUPath 'ou=charlotte,dc=iammred,dc=net'

The image that follows illustrates using the previous command to add a computer to the domain.

Image of command output

If you need to rename the computer while adding it to the domain, the command would appear as follows:

Add-Computer -DomainName iammred -Credential iammred\administrator -restart -OUPath 'ou=charlotte,dc=iammred,dc=net' –newname mynewcomputername

Keep in mind that Windows PowerShell 3.0 and Windows Server “8” Beta are beta software, and as such the features will change. But I hope you will download them and let us know how you like them. There is some good stuff here.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Rename Active Directory Sites


Summary: Microsoft Scripting Guy Ed Wilson shows how to use the Active Directory PowerShell cmdlets to query and to rename a site.

Hey, Scripting Guy! Question Hey, Scripting Guy! I am getting excited about using Windows PowerShell to manage Active Directory. But there are some things that are really easy to do in the GUI that I cannot seem to be able to do by using the cmdlets. Is there some reason for this?

—RC

Hey, Scripting Guy! Answer Hello RC,

Microsoft Scripting Guy, Ed Wilson, is here. Well, things are certainly looking up around Charlotte, North Carolina in the United States. In just a few days, the Scripting Wife and I will head north to Columbus, Ohio to participate in the first ever PowerShell Saturday. This event is already a tremendous success—it sold out in less than three weeks, and the Central Ohio PowerShell User Group has added ten new users—with the event yet to happen. In fact, there have been several requests to host such an event in various cities around the world—and like I said, the day has not yet arrived. Success? I most certainly think so. Much of the credit goes to Wes Stahler, the president of the Central Ohio PowerShell User Group, in addition to Ashley McGlone, Brian Jackett, and the Scripting Wife.

On Monday, following PowerShell Saturday, I begin a series of five Live Meetings called PowerShell for the Busy Admin. This is a special TechNet webcast series that is part of the Road to TechEd. It is also my intention to help you hone your Windows PowerShell skills prior to the 2012 Scripting Games.

Along the way, there will also be a new Windows PowerShell quiz. If you have not taken the 2011 Scripting Games quiz, you should. It is a great learning tool, and has been taken by thousands of your peers.

RC, you are right. In the Active Directory Sites and Services MMC, it is easy to rename a site. All you need to do is to right-click the site and select Rename from the action menu. By default, the first site is called Default-First-Site-Name, which is not too illuminating. The GUI way to rename a site is shown in the image that follows. 

RC, to work with Active Directory Sites and Services, it is necessary to understand that they are a bit strange. First of all, they reside in the configuration naming context. Connecting to this context by using the Active Directory module is rather simple. All I need to do is use the Get-ADRootDSE cmdlet, and then select the ConfigurationNamingContext property. First I make a connection to my domain controller and import the Active Directory module. This is shown here.

Enter-PSSession -ComputerName dc3 -Credential iammred\administrator

Import-Module activedirectory

Here is the code that will retrieve all of the sites. It uses the Get-ADObject cmdlet to search the configuration naming context for objects that are the class of Site.

Get-ADObject -SearchBase (Get-ADRootDSE).ConfigurationNamingContext -filter "objectclass -eq 'site'"

After I have the site I want to work with, I change the DisplayName attribute. To do this, I pipe the site object to the Set-ADObject cmdlet. The Set-ADObject cmdlet allows me to set a variety of attributes on an object. This command is shown here. (This is a single command that is broken into two pieces at the pipe character.)

Get-ADObject -SearchBase (Get-ADRootDSE).ConfigurationNamingContext -filter "objectclass -eq 'site'" | Set-ADObject -DisplayName CharlotteSite

After I have set the DisplayName attribute, I decide to rename the object. To do this, I use another cmdlet called Rename-ADObject. Once again, to simplify things, I pipe the Site object to the cmdlet, and I assign a new name for the site. This command is shown here. (This is also a one-line command broken at the pipe character.)

Get-ADObject -SearchBase (Get-ADRootDSE).ConfigurationNamingContext -filter "objectclass -eq 'site'" | Rename-ADObject -NewName CharlotteSite

The commands I used, as well as any associated output appear in the figure that follows.

Image of command output

I decide to go back to the MMC to verify that the name change took place. Normally, I would just rerun the site query, but because I already had the MMC open, it was simple enough to do (besides it makes a colorful picture).

Image of folder

RC, that is all there is to using Windows PowerShell to rename a site. Tomorrow I will talk about more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Use PowerShell to Manage Exchange Server Mailbox Storage Limits


Summary: Guest blogger, Jeremy Engel, shows how to use Windows PowerShell to manage mailbox storage limits on an Exchange Server.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a real special treat in store. The other day I received an email from Jeremy Engel (the author of PowerShell Module for DHCP, which is available on the Scripting Guys Script repository). Jeremy said that he had been wrestling with a problem at work, and he came up with a cool Windows PowerShell solution. I was immediately intrigued. I will let Jeremy tell you the rest of the story…

I had a problem. The existing database and mailbox storage quota/limit design (or lack thereof) in my company’s Exchange Server environment was not allowing my team to be agile and responsive enough to end-user storage requests or to maintenance issues with the databases. The problem was that we had no standardized way of managing storage limits. We would move mailboxes around and spend the next day resolving storage limit issues. I was taking a lot of heat, and I needed to come up with a solution that satisfied both end users and the Exchange Server administrators.

First, I needed to get an understanding of what the current environment looked like. To do so, I ran the following queries:

Get-MailboxDatabase | Select-Object Name,IssueWarningQuota,ProhibitSendQuota,ProhibitSendReceiveQuota | Sort-Object Name | Export-Csv –Path .\DatabaseLimits.csv –NoTypeInformation

Get-Mailbox | Select-Object DisplayName,Database,IssueWarningQuota,ProhibitSendQuota,ProhibitSendReceiveQuota | Sort-Object DisplayName | Export-Csv –Path .\MailboxLimits.csv –NoTypeInformation

As I discovered (much to my horror), the databases had no predictable storage limits. Some were set with warning limits, but no send limits; some were set with send limits, but no warning limits; some had what I would call “normal” limits; and still others had no limits at all. I even found some that actually had a receive quota! To make matters worse, many of the mailboxes themselves had varying storage limits, all of which were even more ad hoc and arbitrary. In short, it was a mess.

Certain mailbox databases were becoming too full, and we desperately needed to shuffle mailboxes around. As you can see from our lack of standardization, moving mailboxes around was an extremely tricky and tedious endeavor. What we needed was something agile, standardized, and easy to administrate. A little thought and Windows PowerShell got the job done! Here’s what I did…

Agility and standardization

My first goals were agility and standardization. To that end, I decided that all mailbox databases should have the same storage limits. Hence, any exceptions to these limits would be managed at the mailbox level. This would allow our Exchange Server administrators the freedom to move mailboxes as needed without worrying about causing end-user issues and dissatisfaction.

Up until this point, when the admins would receive a storage limit increase request, it was essentially at their discretion (or the end user’s) what the new limits for the mailbox would be. Instead, I came up with the concept of the StorageLevel. Here is what I developed as our environment’s storage levels:

StorageLevel    IssueWarning    ProhibitSend

0               800MB           850MB (Default/Database Limits)
1               1.0GB           1.2GB
2               2.2GB           2.4GB
3               4.6GB           4.8GB
4               9.4GB           9.6GB
5               Unlimited       Unlimited

This would give both users and administrators a standardized way of defining their storage limits, and it would prevent confusion.

With the thinking done, I talked to my boss about the deplorable state of affairs and what my wonderfully graceful solution for this was. I got his buy-in, and he in turn, got buy-in from his bosses. This is key—always seek to get as much acceptance as necessary for a new idea. This makes execution and enforcement that much easier. Another good idea is to set increasingly more stringent requirements and approvals to increase the StorageLevel of a mailbox. For example, an increase from 0 to 1 might just require a manager’s approval, but an increase from 3 to 4 might require a business explanation and approval from the division leader.

Ease of administration

With acceptance complete, I got to work on ease of administration. I needed a way to report on and define mailbox storage limits that would adhere to my new design. Therefore, I created two scripts for the administrators to use: Get-MailboxStorageLimit.ps1 and Set-MailboxStorageLimit.ps1. I wanted to maintain the look and feel of other Exchange Server cmdlets, so I used the following parameters:

Get-MailboxStorageLimit

[CmdletBinding()]
Param([Parameter(Mandatory=$false,ValueFromPipeline=$true)][PSObject]$Identity,
      [Parameter(Mandatory=$false)][string]$Database,
      [Parameter(Mandatory=$false)][string]$Server
      )

Set-MailboxStorageLimit

[CmdletBinding(DefaultParameterSetName="Manual")]
Param([Parameter(Mandatory=$true,ValuefromPipeline=$true)][PSObject]$Identity,
      [Parameter(Mandatory=$true,ParameterSetName="Manual")][ValidateRange(0,5)][int]$Level,
      [Parameter(Mandatory=$true,ParameterSetName="DynamicUp")][switch]$IncreaseLevel,
      [Parameter(Mandatory=$true,ParameterSetName="DynamicDown")][switch]$DecreaseLevel
      )

The Identity parameter can be piped, and I use the PSObject data type so that administrators can input a mailbox object or use any of the other standard ways we define Identity in Exchange. You’ll also notice the more advanced features in the Set-MailboxStorageLimit script because I want to control what integer values are available for the Level parameter, and also prevent “cross-parameterization.” I don’t know if that’s a word, but it sure sounds legit, doesn’t it?
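Because the Identity parameter accepts pipeline input, typical usage might look like the following sketch (the mailbox and database names are hypothetical; the parameters are the ones defined above):

```powershell
# Report the current storage level for every mailbox in a database.
Get-Mailbox -Database "MBXDB01" | .\Get-MailboxStorageLimit.ps1

# Bump one mailbox up a single storage level.
Get-Mailbox "Jane Doe" | .\Set-MailboxStorageLimit.ps1 -IncreaseLevel

# Or set an explicit level (0-5) directly.
.\Set-MailboxStorageLimit.ps1 -Identity "Jane Doe" -Level 2
```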

Begin, process, end

Next, I use the begin, process, end functionality so that piping actually works the way we expect. In the begin section of both scripts, I define those limits I talked about earlier in a hash table.

begin {
  $limits = @{ 0 = @(800MB,850MB)
               1 = @(1.0GB,1.2GB)
               2 = @(2.2GB,2.4GB)
               3 = @(4.6GB,4.8GB)
               4 = @(9.4GB,9.6GB)
               5 = @("Unlimited","Unlimited")
               }
  }

In the process section, all the work gets done. I first validate that the mailbox(es) in question exist, and then I determine what their current storage level is based on the previously defined $limits hash table.

    if($mailbox.UseDatabaseQuotaDefaults) { $Level = 0 }
    else {
      $limit = $mailbox.ProhibitSendQuota.Value
      $warning = $mailbox.IssueWarningQuota.Value
      if(!$limit) { $Level = $limits.Count-1 }
      else {
        for($i=0; $i -lt $limits.Count-1; $i++) {
          if($limit -le $limits[$i][1]+1MB -and $limit -ge $limits[$i][1]-1MB) {
            $Level = $i
            if($warning -gt $limits[$i][0]+1MB -or $warning -lt $limits[$i][0]-1MB) {
              $Level = $null
              }
            break
            }
          }
        if(!$Level) { $Level = "Invalid" }
        }
      }

I had to do a little trickery here with checking the limits because the byte values apparently come out differently between setting and getting the values—they will not exactly match up. I am not sure what Exchange Server is doing on the back end that causes this discrepancy, but it makes me sad. As such, I was not able to determine with 100% accuracy whether someone fits directly into a particular storage level, so I had to adjust the number by 1 MB.

With that done, I put all the limit information into a custom PSObject and output it for your viewing pleasure. The Get-MailboxStorageLimit script has a little bit more data in its output because I wanted to build a nice report. But it also turns out that the StorageLimitStatus property within Get-MailboxStatistics doesn’t update immediately (it checks it on a schedule). So I was running into a situation where if someone’s mailbox was in a warning state and I increased their storage limit, it would report back that they were still in a warning state. I did not want that to confuse anyone, so I removed it from the Set-MailboxStorageLimit script. In the following image, you can see the differences in output between the two commands in addition to the varying byte counts and the out-of-date StorageLimitStatus.

Image of command output

Finally, the end section is there simply to look pretty because I really do not have anything for it to do.

If you decide to use these scripts, I would recommend performing a detailed analysis of your current design, determining the most appropriate storage levels for your organization, and then modifying my scripts accordingly.

~Jeremy

Thank you, Jeremy, for once again sharing your knowledge and time. The complete scripts are posted in the Script Center Repository.

Join me tomorrow when I will introduce the sponsors for the 2012 Scripting Games.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Windows PowerShell for the Busy Admin

Summary: Microsoft Scripting Guy Ed Wilson begins a series of five live meetings on March 12, 2012. Well, it is official: next week, beginning on Monday, March 12, 2012, I will commence a new series of Learn Windows PowerShell live meetings. This series...(read more)

Support Our 2012 Sponsor: Don Jones and Concentrated Technology


Don Jones is one of the world's most-recognized Windows PowerShell experts, and through his company, Concentrated Technology, he offers Windows PowerShell training, books, videos, and more.

Don is the author of Windows PowerShell in a Month of Lunches, the self-published Windows PowerShell Scripting and Toolmaking, and a companion DVD with 100 narrated, high-definition video demos. All his publications are available directly from him.

Don also offers scheduled public Windows PowerShell classes through Interface Technical Training. Don's friendly, real-world, practical approach has helped thousands of Windows administrators just like you become immediately effective with Windows PowerShell—with no programming or scripting skills required! Make this the Year You Learned the Shell—and learn from the best!

 2012 Scripting Games badge

Back to All Sponsors page

Support Our 2012 Sponsor: Interface


Welcome to world-class Windows PowerShell training at Interface Technical Training.

We teach. You do.

Following are the classes that we offer:

  • PS300 PowerShell for Administrators
  • PS350AD PowerShell for Active Directory
  • PS350EX PowerShell for Exchange Server
  • PS350WMI PowerShell for Windows Management Instrumentation (WMI)
  • PS400 Windows PowerShell Scripting and Tool Making
  • DJPSV3 Windows PowerShell v2 "Booster" and v3 "Sneak Peek" with Don Jones
  • DJPS300 Don Jones’ Exclusive Accelerated PowerShell Masterclass for Administrators

Our quality training is available in three ways:

  • Classroom training: Attend live classes in person at Interface in Phoenix.
  • RemoteLive™ training: Attend the same live classes online, from anywhere on earth.
  • Video training: Access the same real-world classroom content on the web with high-definition streaming video.

Learn from our expert instructors:

  • Don Jones: Microsoft PowerShell MVP
    Author of PowerShell in a Month of Lunches
  • Jason Helmick: Director of PowerShell Technologies
    Author of POSH300, POSH400, and POSH450 courseware and scenario-based labs
  • Mike Pfeiffer: Microsoft Exchange MVP, MCM
    Author of Microsoft Exchange 2010 PowerShell Cookbook

 2012 Scripting Games badge

Back to All Sponsors page

Support Our 2012 Sponsor: Manning Publications


Manning is a publisher of computer books for professionals. We published our first book in 1993, and ever since, we have been learning from our successes, and even more from our mistakes. Every new book teaches us something to help us improve. How to choose the topics; how to find the right authors; how to help authors write their manuscripts; how to ensure that the content is valuable and easy to learn; how to get the word out about the book. We publish standalone titles and books in series, including: Hello!, In Action, In Practice, In Depth, and In a Month of Lunches. Readers can access our books before they are finished through the Manning Early Access Program, and we make our books available through Safari and iBooks. Print copies, wherever they are bought, come with free electronic versions in PDF, ePub, and Kindle formats, which are downloadable from the Manning site.

 2012 Scripting Games badge

Back to All Sponsors page

Support Our 2012 Sponsor: SAPIEN Technologies


SAPIEN Technologies is out to make Windows administrative tasks—whether you're scripting, working with databases, working with XML, or related technologies—simpler. We offer more than just software: We give you more than 20 years of experience, powerful software applications, authoritative books, supportive communities, and real-world training. It's everything you need to learn new technologies, advance your skills in existing technologies, and work more effectively and more efficiently.

PrimalScript

More than just a simple script editor, PrimalScript is the only Integrated Scripting Environment (ISE) that supports all of the file types and languages you need to work with every day. PrimalScript is also the only development environment that supports 32-bit and 64-bit execution and powerful integrated consoles, in addition to debugging and packaging, in one single, streamlined product. PrimalScript supports over 50 languages and file types, ranging from Windows PowerShell, VBScript, and JScript to ASP and ASP.NET. Additionally, dedicated tools for XML and SQL make complicated data processing a snap. 

PrimalForms

PrimalForms is THE next generation Windows PowerShell development environment. It offers a powerful, visual, and easy-to-use scripting environment whether you work with command lines or GUI-based scripts. It comes with both 32-bit and 64-bit debuggers, and it also offers both 32-bit and 64-bit consoles that you can switch between with one click without restarting or losing your context. In addition, you can create visual Windows PowerShell-based apps and distribute them to your users. No other tool gives you such easy and powerful drag-and-drop Windows PowerShell forms creation, one step access to WMI and SQL integration, and clean and simple script packaging and distribution. PrimalForms—the ultimate tool for all Windows PowerShell users.

 2012 Scripting Games badge

Back to All Sponsors page

Support the Amazing Sponsors of the 2012 Scripting Games!


 2012 Scripting Games badge

As you can imagine, a free event such as the 2012 Scripting Games would not be nearly as fun without the hope of winning cool prizes. And we'd like to wholeheartedly thank our sponsors this year.

Please support our sponsors. They offer great products related to scripting, and we'd be honored to have you visit their websites to see what they have to offer.

Sponsors of the 2012 Scripting Games
