
Strategy for Handling 2013 Scripting Games Events


Summary: Microsoft PowerShell enthusiast, Jeff Wouters, talks about his experience with the 2013 Winter Scripting Games warm-up events.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a special guest blogger, Microsoft Windows PowerShell enthusiast, Jeff Wouters. Here is a little bit about Jeff…

Photo of Jeff Wouters

Jeff Wouters (B ICT, MCITP, MCSA, MCSE) is a freelance technical consultant from the Netherlands with a main focus on high availability and automation. In Microsoft and Citrix products, he uses technologies such as virtualization, redundancy, clustering, and replication. He also has a great passion for Windows PowerShell, and he was a founding member of the Dutch PowerShell User Group in 2012.

Jeff has been a speaker at IT events such as E2E Virtualization Conference (formerly known as PubForum), BriForum Chicago, and NGN (Dutch IT community). He speaks and blogs mainly about Windows PowerShell and virtualization, but every now and then something else slips in that piques his interest. Jeff is also a contributing author for a book project where 30+ authors from all over the world are working together to create a Windows PowerShell deep-dive book that will be published in 2013.

Jeff’s contact information:

This year, I’m competing in the Windows PowerShell Scripting Games that will be launched at the Windows PowerShell Summit in April. As a teaser, and to test the new system, it was decided to do a little warm-up event.

Note   The 2013 Winter Scripting Games warm-up events are over. The Scripting Wife wrote about her experience with the warm-up events in The Scripting Wife Talks About the First Warm-Up Event. The announcement for the 2013 Scripting Games (main event) will take place at the Windows PowerShell Summit in April. Stay tuned for more information.

I found that the first exercise of the 2013 Winter Scripting Games warm-ups is something that I was able to use in a few of the Windows PowerShell trainings I’ve been giving. It teaches people to break large scripting projects down into digestible pieces and investigate them. When you do this in a group, you’ll end up having some great discussions about why you’re doing something one way when your colleague is doing it another way. So what I’m providing in this post is my solution, not THE solution.

Last year I participated in the Beginner class. I love challenging myself, so I added a little something to the exercise: To deliver every exercise within one hour after starting to script. I was very happy to actually succeed in that task, although at one event I was cutting it a bit close because I was trying to do something with the wrong cmdlet (I needed to use Get-WinEvent instead of Get-EventLog or Get-Event). But that’s a whole other discussion…

I don’t recommend trying to deliver your scripts within an hour. As I experienced last year, this will greatly diminish your learning experience, which is exactly opposite of the goal for the Scripting Games.

So let’s get back on topic. This year I’ve decided that I want to participate in the Advanced class—mostly because I’ve learned a great deal in the last year and I still want to challenge myself.

I hope that you’ll find this post useful in your coming scripting endeavors.

For the purpose of this post, I’ve split the exercise into separate bullets—we’ll cover them one at a time. I’ve numbered the paragraphs in this post the same as each bullet in the exercise so that you can easily find your way around this rather large post.

  1. You have been asked to create a Windows PowerShell advanced function named Get-DiskSizeInfo.
  2. It must accept one or more computer names as a parameter.
  3. It must use WMI or CIM to query each computer.
  4. For each computer, it must display the percentage of free space, drive letter, total size in gigabytes, and free space in gigabytes.
  5. The script must not display error messages.
  6. If a specified computer cannot be contacted, the function must log the computer name to ‘C:\Errors.txt’.
  7. Optional: Display verbose output showing the name of the computer being contacted.

Note  Although I added a Help function when I wrote the script, I’ve not included it in this post because it would make it even bigger—and it’s big enough as it is, right?
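For readers who haven’t written comment-based Help before, here is a minimal sketch of what it might look like inside the function (the wording is mine, not Jeff’s):

function Get-DiskSizeInfo
{
  <#
      .SYNOPSIS
      Returns drive letter, total size, free space, and percent free for one or more computers.
      .PARAMETER ComputerName
      One or more computer names to query. Defaults to the local computer.
      .EXAMPLE
      Get-DiskSizeInfo -ComputerName SERVER01,SERVER02
  #>
  [CmdletBinding()]
  Param ()
}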

1.    The advanced function

The function has a few requirements. First, it has to be named Get-DiskSizeInfo. Second, it needs to be an advanced function.

Many of the students in my Windows PowerShell classes and workshops ask me how they can convert a function into an advanced function. It’s actually pretty easy: just add [CmdletBinding()] at the top, like so:

function Get-DiskSizeInfo
{
  [CmdletBinding()]
  Param ()
}

See how easy it is? You don’t need to be a rocket scientist to write some Windows PowerShell commands.

2.    A parameter

The second requirement is that the function needs to accept one or more computer names as input via a parameter. You could define a bunch of parameters such as ComputerName1, ComputerName2, ComputerName3, but that’s just plain crazy.

If you were to create a single parameter and make it an array instead of a string, it would fit our needs just fine:

function Get-DiskSizeInfo

{

  [CmdletBinding()]

  Param (

      [Parameter(Mandatory=$false)][array]$ComputerName=$Env:ComputerName

  )

}

Note that I’ve made the parameter NOT mandatory because I’ve given it a default value (the local computer name). If I were to make it mandatory, the default value would be useless because Windows PowerShell would prompt me for a value for the ComputerName parameter anyway.

But this is a rather basic parameter. In fact, I would want to do more with it such as providing it aliases and allowing input from the pipeline. So let’s add some of that:

function Get-DiskSizeInfo

{

  [CmdletBinding()]

  Param (

    [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

    [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

    [array]$ComputerName=$Env:ComputerName

  )

}

As you can see, I also want my function to be able to handle pipeline input. Therefore, I’ll be using a Begin-Process-End construction: 

function Get-DiskSizeInfo

{

  [CmdletBinding(SupportsShouldProcess=$true,PositionalBinding=$false,ConfirmImpact='Low')]

  Param (

      [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

      [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

      [array]$ComputerName=$Env:ComputerName

  )

  Begin { }

  Process { }

  End { }

}

3.    The command

In this case, I’ve chosen to use WMI because not all servers in my production environment support CIM. I could have used the DCOM protocol combined with the CIM cmdlets, but I have found simply using the WMI cmdlets to be easier. It wasn’t a requirement to NOT use WMI, so I am still working within the boundaries that are set by the exercise.

Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3"

But as usual, you’ll get too much information, and you’ll only want the properties that are required. Because there are some additional requirements, such as showing the output value of the total size in GB, I need to do some formatting:

Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" | Select-Object @{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

Note  I’ve seen some people use Label and Expression, whereas others use Name and Expression. Both work just fine, so you can use whatever makes you happy.
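For example, both of the following calculated properties produce a column named Drive; only the key in the hash table differs:

Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" | Select-Object @{Label="Drive";Expression={$_.DeviceID}}

Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" | Select-Object @{Name="Drive";Expression={$_.DeviceID}}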

Now include this code in the function:

function Get-DiskSizeInfo

{

  [CmdletBinding(SupportsShouldProcess=$true,PositionalBinding=$false,ConfirmImpact='Low')]

  Param ( [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

      [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

      [array]$ComputerName=$Env:ComputerName

  )

  Begin { }

  Process

  {

    Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" | Select-Object @{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

  }

  End { }

}

4.    Display the information for each computer

Displaying the information is actually the easy part. Simply use a foreach loop, and add the ComputerName parameter to the Get-WmiObject command:

function Get-DiskSizeInfo

{ [CmdletBinding(SupportsShouldProcess=$true,PositionalBinding=$false,ConfirmImpact='Low')]

  Param ( [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

      [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

      [array]$ComputerName=$Env:ComputerName

  )

  Begin { }

  Process

  {

    Foreach ($Target in $ComputerName)

    {

      Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName $Target -ErrorVariable Errors -ErrorAction SilentlyContinue | Select-Object @{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

    }

  }

  End { }

}

5.    No errors displayed

Handling errors can be a little tricky because there are two types of errors: terminating and non-terminating.

Terminating errors will actually terminate your script. So if such an error occurs, it’s the end of the script. If you’re executing the command for multiple objects, you wouldn’t want the script to be terminated halfway through, right? So, how can you catch those errors?

Well, that’s it actually…you need to “catch” them with Try-Catch—and to not show them, you need to redirect or pipe them to Null.

In this case, I’ll only be catching the exceptions: 

function Get-DiskSizeInfo

{ [CmdletBinding(SupportsShouldProcess=$true,PositionalBinding=$false,ConfirmImpact='Low')]

  Param ( [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

      [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

      [array]$ComputerName=$Env:ComputerName

  )

  Begin { }

  Process

  {

    Try

    {

      Foreach ($Target in $ComputerName)

      {

        Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName $Target -ErrorVariable Errors -ErrorAction SilentlyContinue | Select-Object @{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

      }

    }

    catch [System.Exception]

    {

      $Error | Out-Null

    }

    Finally { }

  }

  End { }

}

Note  For more information about how you can use Try/Catch/Finally, take a look at the Hey, Scripting Guy! Blog post, How Can I Use Try/Catch/Finally in Windows PowerShell?

The command itself can produce errors. For example, if a computer can’t be contacted, it will return an error stating that the RPC server is unavailable. You can solve this by adding the ErrorAction parameter to the Get-WmiObject cmdlet with a value of SilentlyContinue:

function Get-DiskSizeInfo

{ [CmdletBinding(SupportsShouldProcess=$true,PositionalBinding=$false,ConfirmImpact='Low')]

  Param ( [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

      [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

      [array]$ComputerName=$Env:ComputerName

  )

  Begin { }

  Process

  {

    Try

    {

      Foreach ($Target in $ComputerName)

      {

        Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName $Target -ErrorAction SilentlyContinue | Select-Object @{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

      }

    }

    catch [System.Exception]

    {

      $_ | Out-Null

      $Error | Out-Null

    }

    Finally { }

  }

  End { }

}

The reason I’m catching System.Exception here is that this is the base exception class. All other exception classes are derived from this one.
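If you wanted finer-grained handling, you could catch a more specific exception type first and let the base class pick up everything else. Here is a hedged sketch (the computer name is made up, and ErrorAction Stop is used so the failure becomes a terminating error that Try/Catch can actually see):

Try
{
  Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName OFFLINE01 -ErrorAction Stop
}
Catch [System.Runtime.InteropServices.COMException]
{
  # Connectivity failures such as "The RPC server is unavailable" typically surface here
}
Catch [System.Exception]
{
  # Anything else still lands here, because System.Exception is the base class
}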

6.    Error logging

No errors are shown, but how do we know if a system could not be contacted? It isn’t shown to the screen—we’ve just made sure that won’t happen. Also, one of the requirements was to write the name of that computer to an error log (C:\Errors.txt) when it can’t be contacted.

First I always like to define the file or even the path of the error log. We can do that at Begin { }:

Begin

{

  $ErrorLogPath = "C:\Errors.txt"

}

You can use the Out-File cmdlet with the Append parameter so that an error is written to the log on each pass through the loop.

So how do we get those errors? We’ve just made it so that no errors are shown, so where are they?

Windows PowerShell comes with a bunch of error variables. One of those is $?. This variable gives you $true or $false depending on whether the last command completed successfully. So if the variable is false, we know that an error has occurred, right? An error means that the device could not be contacted, no matter what the reason. And we don’t care about the reason because that wasn’t one of the requirements. We only want to log that the device could not be contacted.
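As a minimal sketch of the idea before we drop it into the function (the computer name is made up):

Get-WmiObject -Class Win32_BIOS -ComputerName OFFLINE01 -ErrorAction SilentlyContinue
if (!$?) {"Device OFFLINE01 could not be contacted" | Out-File C:\Errors.txt -Append}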

function Get-DiskSizeInfo

{ [CmdletBinding(SupportsShouldProcess=$true,PositionalBinding=$false,ConfirmImpact='Low')]

  Param ( [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

      [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

      [array]$ComputerName=$Env:ComputerName

  )

  Begin

  {

    $ErrorLogPath = "C:\Errors.txt"

  }

  Process

  {

    Try

    {

      Foreach ($Target in $ComputerName)

      {

        Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName $Target -ErrorAction SilentlyContinue | Select-Object @{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

        if (!$?) {"Device $Target could not be contacted" | Out-File $ErrorLogPath -Append}

      }

    }

    catch [System.Exception]

    {

      $_ | Out-Null

      $Error | Out-Null

    }

    Finally { }

  }

  End { }

}

7.    Display system name based on the Verbose parameter

This step took some searching because I had not done this before. How do we know if the Verbose parameter has been used?

There probably are some very creative ways of doing this, but did you know that you can use the $PSCmdlet variable? You can use it to check the command you’ve invoked for the presence of a parameter.

So if the parameter is present, we want to do something; and if it’s not, we want to do something else.

function Get-DiskSizeInfo

{ [CmdletBinding(SupportsShouldProcess=$true,PositionalBinding=$false,ConfirmImpact='Low')]

  Param ( [Parameter(Mandatory=$false,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,ValueFromRemainingArguments=$false,Position=0)]

      [ValidateNotNull()][ValidateNotNullOrEmpty()][Alias("Name","Computer")]

      [array]$ComputerName=$Env:ComputerName

  )

Begin

  {

    $ErrorLogPath = "C:\Errors.txt"

  }

  Process

  {

    Try

    {

      If ($PSCmdlet.MyInvocation.BoundParameters["Verbose"].IsPresent)

      {

        Foreach ($Target in $ComputerName)

        {

          Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName $Target -ErrorAction SilentlyContinue | Select-Object SystemName,@{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

          if (!$?) {"Device $Target could not be contacted" | Out-File $ErrorLogPath -Append}

        }

      }

      else

      {

        Foreach ($Target in $ComputerName)

        {

          Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ComputerName $Target -ErrorAction SilentlyContinue | Select-Object @{Label="Drive";Expression={$_.DeviceID}},@{Label="FreeSpace(GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},@{Label="Size(GB)";Expression={"{0:N1}" -f($_.Size/1GB)}},@{Label="PercentFree";Expression={"{0:N0}" -f(($_.freespace * 100) / $_.Size)}}

          if (!$?) {"Device $Target could not be contacted" | Out-File $ErrorLogPath -Append}

        }

      }

    }

    catch [System.Exception]

    {

      $Error | Out-Null

    }

    finally {}

  }

  End

  {

  }

}

My conclusion is also my advice: Break down the exercise into digestible pieces and cover them one at a time. This will make your scripting life and learning experience a whole lot easier and more effective. Trust me on this one. Also take time to properly investigate each part, which will greatly improve your learning experience. You are going to encounter things in your investigation that you didn’t know. But be careful that those investigations don’t take you too far from your goal. Simply do as I do: Make a note of it and look at it sometime in the future…

~Jeff

Jeff, thank you so very much for writing about your experiences in the 2013 Scripting Games warm-up exercises.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


PowerTip: Find PowerShell Noun Distribution


Summary: Learn how to see which nouns in Windows PowerShell are used most.

Hey, Scripting Guy! Question How can I find which Windows PowerShell cmdlet nouns are used most?

Hey, Scripting Guy! Answer Import all the modules, then use the Get-Command cmdlet to retrieve all cmdlet information, and group by noun. You can also use the following commands to sort the results by count.

Note   gmo is an alias for Get-Module, ipmo is an alias for Import-Module, gcm is an alias for Get-Command, group is an alias for Group-Object, sort is an alias for Sort-Object, and more is a Windows PowerShell function.

gmo -li | ipmo

gcm | group noun -NoElement | sort count -Descending | more

 

Security Series: Using PowerShell to Protect Your Private Cloud Infrastructure


Summary: Microsoft senior technical writer, Yuri Diogenes, and knowledge engineer, Tom Shinder, talk about using Windows PowerShell to protect a Windows Server 2012-based cloud infrastructure.

Microsoft Scripting Guy, Ed Wilson, is here. Today we start a three-part series by Yuri Diogenes and Tom Shinder. The authors describe examples about how you can leverage Windows PowerShell to automate tasks to protect a Windows Server 2012-based cloud infrastructure.

~Yuri Diogenes, senior technical writer, SCD iX Solutions Group
Twitter: @YuriDiogenes

~Tom Shinder, knowledge engineer, SCD iX Solutions Group
Twitter: @TomShinder

Image of book

This is going to be some really cool stuff. Take it away, guys…

The NIST definition of cloud computing, which applies to all cloud deployment models (public cloud, private cloud, hybrid cloud, and community cloud) requires that a cloud solution enable the following essential characteristics.

Note  Source for this information: Overview of Private Cloud Architecture

On-demand self-service   The consumer of the cloud service should be able to obtain cloud services (such as compute, memory, network, and storage resources) by using a self-service mechanism (such as a web portal) so that acquiring the service does not require human intervention by the Cloud Service Provider (CSP).

Broad network access   The cloud solution should be accessible from almost anywhere (when required) and also be accessible from multiple form factors, such as smart phones, tablet computers, laptops, desktops, and any other form factor existing currently or in the future.

Resource pooling   The cloud solution should host a pool of shared resources that are provided to consumers of the cloud service. Resources such as compute, memory, network, and storage are allocated to consumers of the service from a shared pool. Resources are abstracted from their actual location, and consumers are unaware of the location of these resources.

Rapid elasticity   The cloud solution should provide rapid provisioning and release of resources as demand for the cloud service increases and decreases. This should be automatic and without the need of human intervention. In addition, the consumer of the cloud service should have the perception that there is an unlimited resource pool so that the service is able to meet demands for virtually any use case scenario.

Metered services   Sometimes referred to as the “pay-as-you-go” model, the cloud solution must make it possible to charge the consumer of the cloud service an amount that is based on actual use of cloud resources. Resource usage is monitored, reported, and controlled by the CSP and by service policy, which delivers billing transparency to both the CSP and the consumer of the service.

There is room for automation in all of these essential characteristics, and Windows PowerShell can be leveraged to fill this gap. However, beyond these characteristics, there is also a series of security concerns that private cloud tenants have concerning how a private cloud operates and how their data can be secure.

We advise that you read the following posts so that you’ll have a better understanding of the issues:

Network protection with Windows PowerShell

Of all the components of the cloud infrastructure that can be attacked, the most significant one is the network. Data of all types moves through the network. Virtual machines also could be in transit through the network when Hyper-V Replica-based replication is performed. Modern datacenters that host the compute and storage components of the cloud infrastructure depend on network connectivity to connect the tiers. These are only a few examples of how the network exposes vital corporate information to attack when the information is in flight.

There are a number of methods you can use to protect information from network-based attacks in your cloud infrastructure. Let’s take a look at a few options that are available to you by the platform capabilities in Windows Server 2012.

Scenario 1: Protecting against eavesdropping attack

As described in Leveraging Windows Server 2012 Capabilities to Address Private Cloud Security Concerns – Part 2, you need to be concerned about protecting information that moves through the cloud infrastructure network. This is especially important when you deploy a private cloud infrastructure and you host the compute component separately from the storage component. In this design, the private cloud infrastructure contains a compute Hyper-V failover cluster and a storage cluster. The virtual machines run on the Hyper-V cluster and the virtual machine disk and configuration files are stored in the storage cluster. The virtual machine disk and configuration files are exposed to the compute cluster as file-based storage.

In this scenario, all information contained in the virtual machines moves over the network between the storage and compute clusters. If an attacker gains access to the network that provides the file-based storage to the compute cluster, that attacker can potentially have access to all information contained in all virtual machines. For this reason, it’s critical that the information on this storage network be encrypted.

There are several options available when it comes to enabling network encryption. One is IPsec. However, not all scenarios lend themselves to the overhead and complexity of IPsec protection. For example, a file server might contain dozens or even hundreds of shares. However, there might be only three or four shares that contain information that requires network encryption. All the other shares can be accessed and transmitted over the network in the clear. IPsec doesn’t support this scenario. To get this level of granularity, you need to use something other than IPsec.

The good news is that “something” is Windows Server 2012 SMB 3.0 encryption. SMB 3.0 is the file sharing protocol used by Windows Server 2012, and it includes functionality that wasn’t available in previous versions. One of these features is transparent SMB encryption, which enables you to enforce network encryption on a per-folder or per-server basis.

To see how SMB encryption works, let’s take a look at the network topology described in the following image.

Image of topology

Scenario definition: Contoso has a cloud infrastructure, and one of the tenants (in this example, the finance department) has a folder that contains PII data. This data is already encrypted when at rest, but they also require that the content is encrypted while in transit.

Scenario constraint: The finance department tenant has one workstation running Windows 7. This workstation won’t be able to access this folder because Windows 7 does not support SMB encryption. This is by design—only Windows Server 2012 and Windows 8 support SMB 3.0 encryption.

The following Windows PowerShell command can be used to enable encryption on a specific file share on the Windows Server 2012 file server:

Image of command output
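The original screenshot isn’t reproduced here, but the command it shows is something along these lines (the share name and path are made up for illustration):

New-SmbShare -Name FinanceData -Path E:\Shares\FinanceData -EncryptData $true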

You can use the New-SmbShare cmdlet to create the share if it’s not already in place. Notice that EncryptData $true is the parameter you use to set the encryption attribute for this share. If the share already exists, you can use the following command:

Set-SmbShare -Name <sharedfoldername> -EncryptData $true

You can discover the current encryption state of a share by using the Get-SMBShare command:

Image of command output
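Again, the screenshot isn’t reproduced here; a sketch of the kind of command it shows (the share name is hypothetical):

Get-SmbShare -Name FinanceData | Select-Object Name, EncryptData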

Additional tips

If you want to enable network encryption for all file shares on a server, you can use the following command:

Set-SmbServerConfiguration -EncryptData $true

Keep in mind that only Windows Server 2012 and Windows 8 can access shares that require network encryption. You might want to make the shares that require network encryption available to down-level operating systems. In this scenario, unencrypted network access is available to clients running Windows 7 (and earlier). To enable this type of configuration, you can use the command:

Set-SmbServerConfiguration -RejectUnencryptedAccess $false

This example focused on a file server for users in the finance department to simplify the scenario for demonstration purposes. When thinking about how this feature is used in a cloud infrastructure, the best use is when you provide a compute failover cluster access to file-share based storage over SMB 3.0. This design pattern enables you to separately scale compute and storage, and it provides performance and security similar to or better than that found in a traditional iSCSI or Fibre Channel SAN environment.

In this first blog of our three-part series, we defined the essential characteristics of cloud computing, briefly discussed some cloud security challenges, and started exploring network protection by using platform capabilities in Windows Server 2012. The next blog in this series will discuss protection against rogue DHCP servers. See you next time!

~Yuri
~Tom

Thanks, Yuri and Tom. I cannot wait for your next blog, which will go live on Friday April 12, 2013. Make a note of it—it is a posting that you will not want to miss.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Find Non-Inherited Access to a Folder


Summary: Learn how to use Windows PowerShell to find non-inherited access rights to a folder.

Hey, Scripting Guy! Question How can I find non-inherited access rights to a folder by using Windows PowerShell?

Hey, Scripting Guy! Answer Use the Get-Item cmdlet to select the folder, and pipe it to the Get-Acl cmdlet. Choose the Access property, and filter the results based on an inheritance flag equal to none. This technique is shown here for the C:\fso folder (? is an alias for Where-Object).

(Get-Item c:\fso | get-acl).Access | ? inheritanceflags -eq 'none'

 

Weekend Scripter: Run C# Code from Within PowerShell


Summary: Learn how to execute C# programs from source without a compiled binary by using Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. Today guest blogger, Ingo Karstein, is back with us to share his knowledge. You can also read previous guest blogs by Ingo. Take it away Ingo…

Some time ago, I created a script called PS2EXE that creates EXE files out of Windows PowerShell script files. It is posted on the Hey, Scripting Guy! Blog: Learn About Two CodePlex Projects: PS2EXE and RoboPowerCopy.

Now I have created another script, rather the opposite of PS2EXE: C#Script. This script is able to execute C# programs from source code without a compiled binary.

I’ve done this because I have several C# tools that I always use for my daily business. Some of them are not easily convertible to Windows PowerShell. But I like the fact that script files are always readable because you only need the source code, no compiler. There is no need for binaries with separate source projects somewhere on the hard disk, so I decided to create a script with the purpose of running C# code inside Windows PowerShell.

The idea is simple:

  1. Take a C# program file and compile it into the memory.
  2. Search for the Main method and call it by using .NET reflection.
  3. Add some basic .NET console support to write output from the C# program to the Windows PowerShell environment.

The C# program will be executed in a real .NET thread that is created in the Windows PowerShell script by using a helper class that is compiled in memory at runtime too. This helper class provides some synchronous .NET events that can be subscribed in Windows PowerShell to handle the console output.

Note   You can download C#Script from the Microsoft TechNet Gallery: C#Script: Execute source code C# programs from PowerShell. You should be aware of the following limitations:

  • This project is in the alpha state! There will be errors in it. So please be careful, especially in a production environment.
  • Console input is not implemented. That would require a custom class derived from System.IO.TextReader.
  • There is no resource file support! It is just plain C#.

To demonstrate the script, I created two C# demo projects: “Test” and “TestWin.” The first one is a console application, the second is a Windows Forms application.

The following screenshot shows my “TestWin” demo project. My csscript.ps1 file is in the folder “C:\source2\csscript,” and “testwin” is in the subfolder. (The Windows 8 operating system is in German, but you get the point, I’m sure.) 

Image of menu

Let’s have a look into “TestWin.” The following screenshot is from the Visual Studio 2012 project.

Image of menu

Of course it can be run in Visual Studio 2012 or as a standalone .NET assembly (EXE file). The list box is empty. It would show the program's arguments if there were any.

Image of dialog box

At the beginning of the Program.cs file, there is an XML configuration section for C#Script:

//<csscript>

//  <nodebug/>

//  <references>

//    <reference>System</reference>

//    <reference>System.Core</reference>

//    <reference>System.Data</reference>

//    <reference>System.Data.DataSetExtensions</reference>

//    <reference>System.Xml</reference>

//    <reference>System.Xml.Linq</reference>

//    <reference>System.Windows.Forms</reference>

//    <reference>System.Drawing</reference>

//  </references>

//  <mode>winexe</mode>

//  <files>

//      <file>Form1.cs</file>

//      <file>Form1.Designer.cs</file>

////      <file>Test</file>

//  </files>

//</csscript>

This governs how the program is compiled when using C#Script. Here, you specify the .NET assembly references, the execution mode, and the source files. Lines that start with four slash characters are ignored. In the “files” section, you specify all of the necessary C# files if there is more than one. The additional files do not need XML configuration.

In my demo, the project needs three C# files to run: Program.cs, Form1.cs, and Form1.Designer.cs. The configuration XML is stored only at the beginning of Program.cs. By using <debug/>, it’s possible to debug the C# program file. I will show that later.

Now let’s go to the Windows PowerShell command line and use C#Script:

Image of command

At the command line, I type:

.\csscript.ps1 .\testwin\testwin\Program.cs "Greetings" "from" "germany" "!"

That’s it.

Let’s have a look at the console application demo.

Image of menu

At the Windows PowerShell command line, I type:

.\csscript.ps1 .\test\test\Program.cs "Greetings" "from" "germany" "!"

Here I use the <debug/> configuration to be able to debug the program. This gives me the Debugger Attach dialog from Visual Studio 2012 when I run the previous command-line statement:

Image of command output

You will automatically get the source file of the C#Script internal helper class with hard coded breakpoints:

Image of command output

Here you can see how it works inside: Internally, it creates a thread and executes the original program by reflection. The Main method of the C# program is given in the Method parameter, and the command-line arguments are in the prms parameter.

The next hard coded breakpoint is specified in the Program.cs file. The file will have a new name. (Here the name is n_dspgoj.0.cs.)

Image of command output

This is the output in the console:

Image of command output

In the C#Script package, I’ve included a file named “csscript.bat” that helps you execute csscript from the traditional Windows shell:   

Image of script

You can use it like this:

Image of command output

I’ve tested this C#Script with one of my favorite tools: the SharePoint Feature Administration and Clean Up Tool (it’s a Windows Forms application).

1. Download the code from CodePlex: SharePoint Feature Administration and Clean Up Tool
2. Unzip it to a folder.
3. Create a batch file named “run.bat” that contains the following:

@echo off

call csscript.bat "FeatureAdmin2013-VisualStudio2012\Program.cs"

 4. Copy “csscript.ps1” and “csscript.bat” into the folder.

 Image of folder

5. Modify the file “Program.cs” to contain the config XML structure.

 Image of script

Note   “requiredframework” and “requiredplatform” are set because SharePoint 2013 needs .NET Framework 4.0 and 64-bit processes.

6. Execute the program with “run.bat” without compilation in VS2012.

Image of command output

For this program, I’ll not need a compiled EXE anymore! Now I’d like to get your response! If you find any errors, please report them on the TechNet Gallery page: C#Script: Execute source code C# programs from PowerShell. Please feel free to modify C#Script and send me your changes.

~Ingo

Ingo, thank you for sharing this with us today. I love it.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

PowerTip: Use PowerShell to Find Calling Assembly


Summary: Learn how to use Windows PowerShell to find the calling assembly.

Hey, Scripting Guy! Question How can I find the name of a calling assembly from within Windows PowerShell?

Hey, Scripting Guy! Answer Use the GetCallingAssembly static method from the [system.Reflection.Assembly] class:

[system.reflection.assembly]::GetCallingAssembly()

Weekend Scripter: Managing Dell AppAssure with Windows PowerShell

$
0
0

Summary: Guest blogger, Mike Robbins, talks about using Windows PowerShell to manage Dell AppAssure.

Microsoft Scripting Guy, Ed Wilson, is here. Guest blogger, Mike Robbins, returns today to share his experience. You can also read previous blogs by Mike.

Photo of Mike Robbins

Mike Robbins is a senior systems engineer with almost 20 years of professional experience as an IT pro who currently works for a healthcare company located in Meridian, MS. During his career, Mike has provided enterprise computing solutions for educational, financial, healthcare, and manufacturing customers. He’s a Windows PowerShell enthusiast who uses Windows PowerShell on a daily basis to administer Windows Server, Hyper-V, SQL Server, Exchange Server, SharePoint, Active Directory, Terminal Services, EqualLogic Storage Area Networks, AppAssure, and Backup Exec. Mike is an author of a chapter in the book PowerShell Deep Dives, he has presented sessions at PowerShell Saturday 003 in Atlanta, for the Mississippi PowerShell User Group, and for the Florida PowerShell User Group. Mike is also one of the cofounders of the Mississippi PowerShell User Group.

Blog: Mike F Robbins Computing Solutions
Twitter: @mikefrobbins
Mississippi PowerShell User Group

Here’s Mike…

For those of you not familiar with Dell AppAssure, it’s a backup, replication, and recovery solution. More information about this product can be found on the AppAssure website.

It’s no secret that I’m a big fan of Windows PowerShell so when a third-party vendor, such as Dell, adds PowerShell support to their products, I also become very interested in those products. (I’m not affiliated with Microsoft, Dell, or AppAssure in any way other than being a customer.)

AppAssure added Windows PowerShell support via a Windows PowerShell module in version 5.3.1 of their product. This module, named AppAssurePowerShellModule, includes a total of 31 cmdlets, which are listed in the following image:

Image of command output

Usually most vendors are slow to support the latest and greatest version of Windows PowerShell, but not AppAssure. According to AppAssure’s support documentation, How to Import the AppAssure 5 PowerShell Module, they prefer Windows PowerShell 3.0, although Windows PowerShell 2.0 is also supported. The examples found in this blog use Windows PowerShell 3.0 syntax, which also means that the AppAssure PowerShell module doesn’t have to be explicitly imported before using the cmdlets. If you were running Windows PowerShell 2.0, you would need to import the AppAssure PowerShell module by using the following command prior to attempting to use any of the cmdlets:
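Import-Module -Name AppAssurePowerShellModule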

In the previous example, Name is a positional parameter which can be omitted if the first item specified after Import-Module is the name of the Windows PowerShell module.

From what I’ve seen in the industry, the average IT pro isn’t using Windows PowerShell yet. Hopefully that will soon change with more third-party vendors adding Windows PowerShell integration to their products and with cmdlets (such as those in the AppAssure PowerShell module) being so easy to use. Most of the cmdlets in the AppAssure PowerShell module require only opening Windows PowerShell on the AppAssure core server and running the cmdlet with no parameters or with very few mandatory parameters being required.

The Get-ProtectedServers cmdlet is a perfect example of this. It returns all of the servers that are protected by the AppAssure core server that you’re currently logged in to without specifying anything other than the cmdlet name. The information provided includes the server names, status, AppAssure agent version, and a few other properties:

Image of command output

Wasn’t that much easier and more efficient than using the GUI to retrieve this information? Grouping the protected servers to determine how many are listed for each version, or returning a list that is grouped by version (including the names of the protected servers for each version) are also easy tasks when using Windows PowerShell:

Image of command output
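The screenshot isn’t reproduced here, but the grouping command is probably along these lines (the Version property name is an assumption based on the output described earlier):

Get-ProtectedServers | Group-Object -Property Version -NoElement | Sort-Object Count -Descending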

I’ve found something that is difficult to determine in the AppAssure GUI. When I start replicating protected servers to another AppAssure core server, and I choose to initially seed the transfer via SneakerNet, how do I know when the seeding of the protected server to the portable media device is complete?

This is something that’s very important. Disconnecting the portable media device from the source AppAssure core server and shipping it to the site where the destination AppAssure core server resides before the seeding process has completed could cause the protected servers to not be replicated, not have a replicated base image, or have an orphaned chain of recovery points. Determining the seed drive progress for each protected server that you’ve chosen to replicate to a remote AppAssure core server is also easy with Windows PowerShell, as shown in the following example:

Image of command output

I’m not really a big fan of logging on to a server via a remote desktop to manage it. In the following scenario, I use the New-PSSession cmdlet to create a persistent connection to three AppAssure core servers. Two of the servers are in different Active Directory forests than the Windows 8 workstation where I am running the commands. Each of these AppAssure core servers requires different credentials from the ones I’m currently using on my workstation to run Windows PowerShell, and each also requires credentials that are different from the others:

Image of command output
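The screenshot isn’t reproduced here; a hedged sketch of that setup (the server names are made up, and each server gets its own credential):

$cred1 = Get-Credential -Message 'Credentials for CORE01'
$cred2 = Get-Credential -Message 'Credentials for CORE02'
$cred3 = Get-Credential -Message 'Credentials for CORE03'
$sessions = @(
    New-PSSession -ComputerName CORE01 -Credential $cred1
    New-PSSession -ComputerName CORE02 -Credential $cred2
    New-PSSession -ComputerName CORE03 -Credential $cred3
)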

According to the previously referenced AppAssure article, How to Import the AppAssure 5 PowerShell Module, the module shouldn’t be imported on a non-AppAssure core server. This is why I don’t have the AppAssure Windows PowerShell module installed on my local workstation. Using the Windows PowerShell Invoke-Command cmdlet allows us to remotely manage multiple AppAssure core servers while staying within the recommended supported configuration for using their Windows PowerShell module.

Now I can use a single Windows PowerShell command to check the status of all the protected servers in the three datacenters, which have an AppAssure core server. I want to know if any of the servers that are supposed to be protected by AppAssure aren’t being protected:

Image of command output
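The exact command isn’t shown in text form, but conceptually it is something like the following, reusing the sessions from the earlier sketch (the Status property name and its values are assumptions based on the description):

Invoke-Command -Session $sessions -ScriptBlock { Get-ProtectedServers } |
    Where-Object { $_.Status -ne 'Protected' }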

Based on the results in the previous example, I have one server out of all the protected servers in all three datacenters that has a status of unreachable. This doesn’t seem too impressive until you factor in that the three datacenters have a total of 53 protected servers:

Image of command output

With that many servers, it’s easy to see how efficient it is to use this Windows PowerShell command rather than using the AppAssure GUI (web) interface. It took a total of 1 minute and 25 seconds to query the three AppAssure core servers for the status on all 53 protected servers:

Image of command output

Two of the AppAssure core servers and twenty-two of the protected servers in the previous example reside at remote datacenters that are connected via a VPN across the Internet (not on the LAN).

We’ve only looked at one of the cmdlets provided in the AppAssure PowerShell module in this blog, but there’s much more that can be done with Windows PowerShell when it comes to managing AppAssure. Want to protect a new server? That’s what the Start-Protect cmdlet is for. Want to pause the AppAssure replication between AppAssure core servers during business hours and resume it after business hours due to bandwidth constraints?

That’s what the Suspend-Replication and Resume-Replication cmdlets are for. This is also where Windows PowerShell 3.0 comes in handy. You could use Windows PowerShell to set up scheduled tasks that run Windows PowerShell commands to pause and resume the replication. I wrote a blog about that very subject last month: Use PowerShell to Create a Scheduled Task that Uses PowerShell to Pause and Resume AppAssure Core Replication. It demonstrates this process, if it’s something of interest.

There is also an AppAssure event log added to the AppAssure core server during installation of the product that contains an abundance of information that can be queried by using Windows PowerShell—as you would any other event log.

Where can we find more information about Managing Dell AppAssure with Windows PowerShell? In the AppAssure 5 Technical Documentation section of the AppAssure website, there’s an AppAssure 5 PowerShell Reference Guide. You could also view the Help for the cmdlets contained in the AppAssure PowerShell module. If you’re interested in more blog posts about using Windows PowerShell to manage Dell AppAssure, see the AppAssure category on my site, Mike F Robbins Computing Solutions.

I would also like to invite you to join us on the second Tuesday of each month at 8:30 PM Central Time for the Mississippi PowerShell User Group meetings. These meetings are held online (virtual) via Microsoft Lync. Anyone from anywhere can join in and learn more about Windows PowerShell from our awesome line-up of speakers that we have scheduled throughout 2013. Each of our guest speakers for the remainder of this year is an author of at least one chapter in the book PowerShell Deep Dives.

~Mike

Awesome blog post, Mike. Thank you so much for taking the time to share your experience with us.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Use PowerShell to Create System Restore Point


Summary: Learn how to use Windows PowerShell to create a system restore point.

Hey, Scripting Guy! Question How can I use Windows PowerShell to create a system restore point for my computer?

Hey, Scripting Guy! Answer Open the Windows PowerShell console with Admin rights in Windows 8 or Windows 7 (use Windows PowerShell 3.0 or Windows PowerShell 2.0). Use Checkpoint-Computer and supply a description:

Checkpoint-Computer -Description adhoc


Working with XML


Summary: Windows PowerShell MVPs, Don Jones, Richard Siddaway, and Jeffrey Hicks share excerpts from their new book.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from PowerShell in Depth
     By Don Jones, Richard Siddaway, and Jeffery Hicks

Photo of book cover

If you use Windows PowerShell, the Help, format, and type files are XML. The new “cmdlet over objects” functionality in Windows PowerShell 3.0 is based on XML. The HTML- and XML-related functionality hasn’t had any major changes in Windows PowerShell 3.0. In this excerpt from PowerShell in Depth, the authors cover several capabilities and provide some concise examples of how you might want to use them.

Windows PowerShell includes some great capabilities for working with two common forms of structured data: HTML and XML. Why is this important? Because HTML is a great way to produce professional-looking reports and you can use XML in so many places within your environment. The XML abilities in Windows PowerShell are no less amazing than its HTML abilities. We’ll cover a couple of specific use cases and the commands and techniques that help you accomplish each in this tip.

Using XML to persist data

One common use of XML is to preserve complex, hierarchical data in a simple, text-based format that’s easily transmitted across networks, copied as files, and so forth. XML’s other advantage is that it can be read by humans if required. Objects (PowerShell’s main form of command output) are one common kind of complex hierarchical data, and a pair of Windows PowerShell cmdlets can help convert objects to and from XML. This process is called serializing (converting objects to XML) and deserializing (converting XML back into objects). It’s almost exactly what happens in Windows PowerShell remoting when objects need to be transmitted over the network. Here’s a quick example:

PS C:\> Get-Process | Export-Clixml proc_baseline.xml

This code creates a static, text-based representation of the processes currently running on the computer. The Export-Clixml cmdlet produces XML that’s specifically designed to be read back in by PowerShell.

Note   The Export verb, unlike the ConvertTo verb, combines the acts of converting the objects into another data format and writing them to a file.

PS C:\> Import-Clixml .\proc_baseline.xml | sort -property pm -Descending |

 select -first 10

 

Handles  NPM(K)    PM(K)     WS(K) VM(M)   CPU(s)     Id ProcessName

-------  ------    -----     ----- -----   ------     -- -----------

    783      77   336420    285772   819    43.69   2204 powershell

    544      41   196500    166980   652    13.41   2660 powershell

    348      24    91156     39032   600     1.28     92 wsmprovhost

    186      18    52024     35472   170     5.56    716 dwm

    329      28    24628     24844   213     0.30   2316 iexplore

    311      26    24276     22308   213     0.30    108 iexplore

    210      14    20628     26228    69     5.95   1828 WmiPrvSE

   1327      41    19608     33164   126    49.45    764 svchost

    398      15    19164     21120    56     3.95    728 svchost

    722      47    17992     23080  1394    13.45    924 svchost

The previous example demonstrates that the objects are imported from XML and placed, as objects, into the pipeline, where they can again be sorted, selected, filtered, and so forth. These deserialized objects are static, and their methods have been removed because they’re no longer “live” objects against which actions can be taken.

But because XML captures a hierarchy of object data, it’s an excellent tool for capturing complex objects. We recommend using the CliXML format as an intermediary rather than JSON.

Reading arbitrary XML data

You might also have a need to work with XML that comes from other sources. For example, the following is a brief XML file that contains information about two computers. You’ll use it in a running example:

<computers>

 <computer name='WIN8' />

 <computer name='LOCALHOST' />

</computers>

Warning   Unlike most of Windows PowerShell, XML tags are case sensitive. Using <computers> and </Computers> won’t work. Be careful if you’re retyping this example to get it exactly as shown here.

The Get-Content cmdlet can read the plain-text content of this XML file, but it will treat it as plain text. By casting the result as the special [xml] data type, you can force Windows PowerShell to parse the XML into a data structure. You’ll store the result in a variable to allow you to work with it and display the contents of the variable:

PS C:\> [xml]$xml = Get-Content .\inventory.xml

PS C:\> $xml 

computers

---------

computers 

You can see that the variable contains the top-level XML element, the <computers> tag. That top-level element has become a property. Now you can start exploring the object hierarchy: 

PS C:\> $xml.computers

computer

--------

{WIN8, LOCALHOST}

PS C:\> $xml.computers.computer[0] 

name

----

WIN8

PS C:\> $xml.computers.computer[1]

name

----

LOCALHOST

You can see how it’s easy to explore the object model from this point.

Creating XML data and files

But reading XML data is only half the fun. Windows PowerShell also lets you create XML files that can be used outside of Windows PowerShell with the ConvertTo-XML cmdlet:

PS C:\> $xml=Get-WmiObject Win32_Volume | ConvertTo-Xml

PS C:\> $xml

 

xml                                     Objects

---                                     -------

version="1.0"                           Objects

PS C:\> $xml.objects.object[0]

 

Type                                    Property

----                                    --------

System.Management.ManagementObject      {PSComputerName, __GENUS, __CLAS...

PS C:\> $xml.GetType().Name

XmlDocument

You haven’t created an XML file, but only an XML representation of the Get-WmiObject result. But assuming you have XML experience and knowledge, you could explore or modify the XML document.

When you’re ready to save the XML document to a file, invoke the Save() method. You should specify the full path and filename:

PS C:\> $xml.Save("c:\work\volume.xml")

Unlike ConvertTo-HTML, you can’t pipe to Out-File and expect an XML document. You need to take this manual step, but you can use a shortcut like the following to do it all in one step:

(Get-WmiObject Win32_Volume | ConvertTo-Xml).Save("c:\work\volume.xml")

With so much of the world’s data in XML and HTML, being able to work with those formats can be handy. Windows PowerShell provides a variety of capabilities that should be able to address most common situations. Obviously, the more knowledge you have of those formats and how they’re used, the more effective you’ll be with Windows PowerShell’s ability to handle them.

Here is the code for the discount offer today at www.manning.com: scriptw1
Valid for 50% off PowerShell in Depth and SQL Server DMVs in Action
Offer valid from April 1, 2013 12:01 AM until April 2 midnight  (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Working with HTML Fragments and Files


Summary: Windows PowerShell MVPs, Don Jones, Richard Siddaway, and Jeffrey Hicks share excerpts from their new book.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from PowerShell in Depth
     By Don Jones, Richard Siddaway, and Jeffery Hicks

Photo of book cover

There’s definitely a trick to creating reports with Windows PowerShell. Windows PowerShell isn’t at its best when it’s forced to work with text—objects are where it excels. This blog, based on Chapter 33 from PowerShell in Depth, focuses on a technique that can produce a nicely formatted HTML report, suitable for emailing to a boss or colleague.

Let’s begin this blog with an example of what we think is a poor report-generating technique. Sadly, we see code like this more often than we would like. Most of the time, the IT pro doesn’t know any better, and is simply perpetuating techniques from other languages, such as VBScript. Listing 1, which we devoutly hope you will never run, is a very common approach that you’ll see less informed administrators use.

Listing 1: A poorly designed inventory report

param ($computername)

Write-Host '------- COMPUTER INFORMATION -------'

Write-Host "Computer Name: $computername"

 

$os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computername

Write-Host "   OS Version: $($os.version)"

Write-Host "     OS Build: $($os.buildnumber)"

Write-Host " Service Pack: $($os.servicepackmajorversion)"

 

$cs = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $computername

Write-Host "          RAM: $($cs.totalphysicalmemory)"

Write-Host " Manufacturer: $($cs.manufacturer)"

Write-Host "        Model: $($cs.model)"

Write-Host "   Processors: $($cs.numberofprocessors)"

 

$bios = Get-WmiObject -Class Win32_BIOS -ComputerName $computername

Write-Host "BIOS Serial: $($bios.serialnumber)"

 

Write-Host ''

Write-Host '------- DISK INFORMATION -------'

Get-WmiObject -Class Win32_LogicalDisk -Comp $computername -Filt 'drivetype=3' |

Select-Object @{n='Drive';e={$_.DeviceID}},

              @{n='Size(GB)';e={$_.Size / 1GB -as [int]}},

              @{n='FreeSpace(GB)';e={$_.freespace / 1GB -as [int]}} |

Format-Table -AutoSize

This produces a report something like the one shown here.

Image of command output

It does the job, we suppose, but Don has a saying that involves angry deities and puppies which he utters whenever he sees a script that outputs pure text like this. First of all, this script can only ever produce output on the screen because it’s using Write-Host. In most cases, if you find yourself using only Write-Host, you’re probably doing it wrong. Wouldn’t it be nice to have the option of putting this information into a file or creating an HTML page? Of course, you could achieve that by just changing all of the Write-Host commands to Write-Output—but you still wouldn’t be doing things the right way.

There are a lot of better ways that you could produce such a report and that’s what this blog is all about. First, we’d suggest building a function for each block of output that you want to produce, and having that function produce a single object that contains all of the information you need. The more you can modularize, the more you can reuse those blocks of code. Doing so would make that data available for other purposes, not only your report.

In our example of a poorly written report, the first section, Computer Information, would be implemented by some function you’d write. The Disk Information section is only sharing information from one source, so it’s actually not that bad—but all of those Write commands just have to go.

The trick to our technique lays in the fact that Windows PowerShell’s ConvertTo-HTML cmdlet can be used in two ways, which you’ll see if you examine its Help file. The first way produces a complete HTML page, and the second produces only an HTML fragment. That fragment is a table with whatever data you’ve fed the cmdlet. We’re going to produce each section of our report as a fragment, and then use the cmdlet to produce a complete HTML page that contains all of those fragments.
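To see the difference, here is a minimal sketch (Get-Service is used purely as sample data, and the output paths are illustrative):

# Default: a complete HTML page (<html>, <head>, <body>, and a table)
Get-Service | Select-Object Name, Status |
    ConvertTo-Html |
    Out-File C:\work\FullPage.html

# -Fragment: just the <table> markup, ready to be combined with other fragments
Get-Service | Select-Object Name, Status |
    ConvertTo-Html -Fragment |
    Out-File C:\work\Fragment.html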

Getting the information

We’ll start by ensuring that we can get whatever data we need formed into an object. We’ll need one type of object for each section of our report, so if we’re sticking with Computer Information and Disk Information, that’s two objects.

Note   For brevity and clarity, we’re going to omit error handling and other niceties in this example. We would add those in a real-world environment.

Get-WmiObject by itself is capable of producing a single object that has all of the disk information we want, so we simply need to create a function to assemble the computer information. Here it is:

function Get-CSInfo {

  param($computername)

  $os = Get-WmiObject -Class Win32_OperatingSystem `

  -ComputerName $computername

 

  $cs = Get-WmiObject -Class Win32_ComputerSystem `

  -ComputerName $computername

 

  $bios = Get-WmiObject -Class Win32_BIOS `

  -ComputerName $computername

 

  $props = @{'ComputerName'=$computername

             'OS Version'=$os.version

                     'OS Build'=$os.buildnumber

                     'Service Pack'=$os.servicepackmajorversion

                     'RAM'=$cs.totalphysicalmemory

                     'Processors'=$cs.numberofprocessors

                     'BIOS Serial'=$bios.serialnumber}

 

  $obj = New-Object -TypeName PSObject -Property $props

  Write-Output $obj

}

The function uses the Get-WMIObject cmdlet to retrieve information from three WMI classes on the specified computer. We always want to write objects to the pipeline, so we’re using New-Object to write a custom object to the pipeline, and using a hash table of properties culled from the three WMI classes. Normally, we prefer that property names do not have any spaces; but, because we’re going to be using this in a larger reporting context, we’ll bend the rules a bit.

Producing an HTML fragment

Now we can use our newly created Get-CSInfo function to create an HTML fragment:

$frag1 = Get-CSInfo -computername SERVER2 |

ConvertTo-Html -As LIST -Fragment -PreContent '<h2>Computer Info</h2>' |

Out-String

This little trick took us a while to figure out, so it’s worth examining.

  1. We’re saving the final HTML fragment into a variable named $frag1. That’ll let us capture the HTML content and later insert it into the final file.
  2. We’re running Get-CSInfo and giving it the computer name we want to inventory. For right now, we’re hardcoding the SERVER2 computer name. We’ll change that to a parameter a bit later.
  3. We’re asking ConvertTo-HTML to display this information in a vertical list, rather than in a horizontal table (which is what it would do by default). The list will mimic the layout from the old “bad way of doing things” report.
  4. We used the PreContent switch to add a heading to this section of the report. We added the <h2> HTML tags so that the heading will stand out a bit.
  5. The whole thing—and this was the tricky part—is piped to Out-String. You see, ConvertTo-HTML puts strings, collections of strings…all kinds of wacky stuff into the pipeline. All of that will cause problems later when we try to assemble the final HTML page. So we’re getting Out-String to resolve everything into plain old strings.

We can also produce the second fragment. This is a bit easier because we don’t need to write our own function first, but the HTML part will look substantially the same. In fact, the only real difference is that we’re letting our data be assembled into a table, rather than as a list.

$frag2 = Get-WmiObject -Class Win32_LogicalDisk -Filter 'DriveType=3' `

         -ComputerName SERVER2 |

         Select-Object @{name='Drive';expression={$_.DeviceID}},

              @{name='Size(GB)';expression={$_.Size / 1GB -as [int]}},

              @{name='FreeSpace(GB)';expression={

              $_.freespace / 1GB -as [int]}} |

ConvertTo-Html -Fragment -PreContent '<h2>Disk Info</h2>' |

Out-String

We now have two HTML fragments, $frag1 and $frag2, so we’re ready to assemble the final page.

Assembling the final HTML page

Assembling the final page simply involves adding our two existing fragments—although, we’re also going to embed a style sheet. Using cascading style sheet (CSS) language is a bit beyond the scope of this blog, but this example will give you a basic idea of what it can do. This embedded style sheet lets us control the formatting of the HTML page, so that it looks a little nicer. If you’d like a good tutorial and reference for CSS, check out CSS Tutorial at w3schools.com.

$head = @'

<style>

body { background-color:#dddddd;

       font-family:Tahoma;

       font-size:12pt; }

td, th { border:1px solid black;

         border-collapse:collapse; }

th { color:white;

     background-color:black; }

table, tr, td, th { padding: 2px; margin: 0px }

table { margin-left:50px; }

</style>

'@

 

ConvertTo-HTML -head $head -PostContent $frag1,$frag2 `

-PreContent "<h1>Hardware Inventory for SERVER2</h1>"

We’ve put that style sheet into the variable $head, using a Here-String to type the entire CSS syntax we wanted. That gets passed to the Head parameter, our HTML fragments to the PostContent parameter, and we couldn’t resist adding a header for the whole page, where we’ve again hardcoded a computer name (SERVER2).

We saved the entire script as C:\Good.ps1, and ran it like this:

./good > Report.htm

That directs the output HTML to Report.htm, which is incredibly beautiful and shown here.

Image of command output

Okay, maybe it’s no work of art, but it’s highly functional; and frankly, it looks better than the on-screen-only report we started with in this blog. Listing 2 shows the completed script, where we’ve swapped out the hardcoded computer name for a script-wide parameter that defaults to the local host. Notice that we’ve also included the [CmdletBinding()] declaration at the top of the script, enabling the Verbose parameter. We’ve used Write-Verbose to document what each step of the script is doing.

Listing 2: An HTML inventory report script

<#

.DESCRIPTION

Retrieves inventory information and produces HTML

.EXAMPLE

./Good > Report.htm

.PARAMETER computername

The name of a computer to query. The default is the local computer.

#>

 

[CmdletBinding()]

param([string]$computername=$env:computername)

 

# function to get computer system info

function Get-CSInfo {

  param($computername)

  $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computername

  $cs = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $computername

  $bios = Get-WmiObject -Class Win32_BIOS -ComputerName $computername

  $props = @{'ComputerName'=$computername

             'OS Version'=$os.version

             'OS Build'=$os.buildnumber

             'Service Pack'=$os.servicepackmajorversion

             'RAM'=$cs.totalphysicalmemory

             'Processors'=$cs.numberofprocessors

             'BIOS Serial'=$bios.serialnumber}

 

  $obj = New-Object -TypeName PSObject -Property $props

  Write-Output $obj

}

 

Write-Verbose 'Producing computer system info fragment'

$frag1 = Get-CSInfo -computername $computername |

ConvertTo-Html -As LIST -Fragment -PreContent '<h2>Computer Info</h2>' |

Out-String

 

Write-Verbose 'Producing disk info fragment'

$frag2 = Get-WmiObject -Class Win32_LogicalDisk -Filter 'DriveType=3' `

         -ComputerName $computername |

Select-Object @{name='Drive';expression={$_.DeviceID}},

              @{name='Size(GB)';expression={$_.Size / 1GB -as [int]}},

        @{name='FreeSpace(GB)';expression={$_.freespace / 1GB -as [int]}} |

ConvertTo-Html -Fragment -PreContent '<h2>Disk Info</h2>' |

Out-String

 

Write-Verbose 'Defining CSS'

$head = @'

<style>

body { background-color:#dddddd;

       font-family:Tahoma;

       font-size:12pt; }

td, th { border:1px solid black;

         border-collapse:collapse; }

th { color:white;

     background-color:black; }

table, tr, td, th { padding: 2px; margin: 0px }

table { margin-left:50px; }

</style>

'@

 

Write-Verbose 'Producing final HTML'

Write-Verbose 'Pipe this output to a file to save it'

ConvertTo-HTML -head $head -PostContent $frag1,$frag2 `

-PreContent "<h1>Hardware Inventory for $ComputerName</h1>"

 

Now that’s a script you can build upon! And the script is very easy to use.

PS C:\> $computer = 'SERVER01'

PS C:\> C:\Scripts\good.ps1 -computername $computer |

>> Out-File "$computer.html"

>> 

PS C:\> Invoke-Item "$computer.html"

The script runs, produces an output file for future reference, and displays the report. Keep in mind that our work in building the Get-CSInfo function is reusable. Because that function outputs an object and not only pure text, you could repurpose it in a variety of places where you might need the same information.
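For example (a hedged sketch that assumes Get-CSInfo has been defined in your session, perhaps by dot-sourcing the script), the same object can feed a CSV export or a quick on-screen summary:

# Reuse the object for a CSV export...
Get-CSInfo -computername SERVER2 |
    Export-Csv -Path C:\work\SERVER2-inventory.csv -NoTypeInformation

# ...or for an on-screen table of selected properties
Get-CSInfo -computername SERVER2 |
    Format-Table ComputerName,'OS Version','Service Pack' -AutoSize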

To add to this report, you'd simply do the following (a brief sketch follows the list):

  1. Write a command or function that generates a single object that contains all the information you need for a new report section.
  2. Use that object to produce an HTML fragment, and store it in a variable.
  3. Add that new variable to the list of variables in the script’s last command, thus adding the new HTML fragment to the final report.
  4. Sit back and relax.
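For example, a hypothetical Running Services section could be added like this (a sketch only; the WMI class, properties, and heading are illustrative):

# Steps 1 and 2: produce objects for the new section and turn them into a fragment
$frag3 = Get-WmiObject -Class Win32_Service -Filter "State='Running'" `
         -ComputerName $computername |
         Select-Object Name, StartMode, StartName |
         ConvertTo-Html -Fragment -PreContent '<h2>Running Services</h2>' |
         Out-String

# Step 3: add the new fragment to the final ConvertTo-HTML call
ConvertTo-Html -Head $head -PostContent $frag1,$frag2,$frag3 `
    -PreContent "<h1>Hardware Inventory for $ComputerName</h1>"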

Yes, this report is text. Ultimately, every report will be, because text is what we humans read. The point of this one is that everything stays as Windows PowerShell-friendly objects until the last possible instance. We let Windows PowerShell, rather than our own fingers, format everything for us. The actual working bits of this script, which retrieve the information we need, could easily be copied and pasted and used elsewhere for other purposes. That wasn’t as easy to do with our original pure-text report, because the actual working code was so embedded with all of that formatted text.

Building reports is certainly a common need for administrators, and Windows PowerShell is well suited to the task. The trick, we feel, is to produce reports in a way that makes the reports’ functional code (the bits that retrieve information and so forth) somewhat distinct from the formatting- and output-creation code. In fact, Windows PowerShell is generally capable of delivering great formatting with very little work on your part, as long as you work the way it needs you to.

Here is the code for the discount offer today at www.manning.com: scriptw1
Valid for 50% off PowerShell in Depth and SQL Server DMVs in Action
Offer valid from April 1, 2013 12:01 AM until April 2 midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Pipelined Expressions


Summary: Windows PowerShell enthusiast, Jeff Wouters, shares an excerpt from his chapter in the book, PowerShell Deep Dives.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from PowerShell Deep Dives
     Edited by Jeffery Hicks, Richard Siddaway, Oisin Grehan, and Aleksandar Nikolic.

Photo of book cover

Whether you just started using Windows PowerShell, or you are at a more advanced level, there are two things you should always look at while writing a script: performance and execution time. With the introduction of Windows PowerShell 3.0, there are a lot of new modules and cmdlets available to you. What a lot of people don’t realize is that Microsoft also improves and expands already existing modules and their cmdlets. This is especially the case in Windows PowerShell 3.0. In this blog, based on Chapter 10 in PowerShell Deep Dives, author Jeff Wouters discusses one of the most powerful features of Windows PowerShell—the ability to utilize the pipeline.

And now, here’s Jeff…

Finding objects, filtering them down to the ones you want, and performing an action on them can be done very easily by using pipelined expressions, which I refer to as “the pipeline.” Every step is one pipe in the pipeline.  In general, the fewer pipes you use, the shorter the execution time will be, and the fewer resources are used. I’ll illustrate this later.

My problem is that I tend to put everything into a one-liner. Although one-liners are fairly easy to learn and write, there are some best practices to follow if you want the best performance from them. If you don't follow those practices, your script may still work, but you will likely see poor performance and long execution times. Especially with Windows PowerShell 3.0, where lots of new modules, cmdlets, parameters, methods, and member enumeration are introduced, it becomes more important to use parameters and the pipeline in the most efficient way.

When writing scripts, I always keep my goal in mind: to complete the task at hand in the most efficient way. You are able to combine parameters so you won’t have long commands where objects are piped from one cmdlet to another. This will result in better performance and lower execution times for your scripts. As a secondary result, many times it will also result in less code.

Requirements

To use pipelined expressions, you need the ability to execute Windows PowerShell code. There are a few ways to accomplish this:

  • At a Windows PowerShell prompt
  • Through a scripting editor that supports Windows PowerShell and allows for code execution to test your code, including the ability to view the output of your code
  • By executing Windows PowerShell scripts manually

To measure the execution time for each command, I’ve used the Measure-Command cmdlet, like so:

PS D:\> Measure-Command {Get-WmiObject -Class win32_bios -Property manufacturer | Where-Object {$_.Manufacturer -eq "Hewlett-Packard"}}

 

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 0

Milliseconds      : 131

Ticks             : 1315776

TotalDays         : 1,52288888888889E-06

TotalHours        : 3,65493333333333E-05

TotalMinutes      : 0,00219296

TotalSeconds      : 0,1315776

TotalMilliseconds : 131,5776

I will give you the execution times in my environment of the commands provided to show you the benefits of doing it another way.

Pipeline—Rules of Engagement

When I began using PowerShell, I was introduced to the pipeline immediately. I saw how easy it was and I started to pipe everything together, never looking at the execution times or performance impact. Over the last few years, I’ve learned some basic best practices that enabled me to end up with a fraction of the execution time compared to my previous scripts.

Here is an example of what you can accomplish with this:

I wrote a script to provision 1500 user objects in Active Directory by using a CSV file with more than 25 properties defined per user and to make them members of the appropriate groups. This script used to take about 12 minutes to execute, and now it takes somewhere between 55 and 60 seconds. Of course, this depends on the Active Directory server, but you get the idea.
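That script is not part of this excerpt, but its basic shape looks something like the following sketch (the CSV column names, OU path, and group column are illustrative assumptions; it requires the ActiveDirectory module):

Import-Module ActiveDirectory

# Stream the CSV rows straight down the pipeline instead of collecting them first
Import-Csv -Path .\users.csv | ForEach-Object {
    New-ADUser -Name $_.Name `
               -SamAccountName $_.SamAccountName `
               -Department $_.Department `
               -Path 'OU=Staff,DC=contoso,DC=com' `
               -Enabled $true

    # Group membership is driven by a column in the CSV
    Add-ADGroupMember -Identity $_.Group -Members $_.SamAccountName
}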

I’ll cover these best practices one by one and elaborate on them.

What is the pipeline?

Before going into the pipeline rules I’ve mentioned, it can be useful to take a look at the pipeline itself. What is the pipeline? A pipeline uses a technique called piping. In simple terms, it is the ability to pass objects from one command to the next. One way of doing this is as follows (in order): get all processes, filter based on the name of the process, and then stop the process.

Get-Process | Where-Object {$_.Name -eq "notepad"} | Stop-Process

Execution time: 61 milliseconds.

What happens here? First, all objects (in this case processes) are received by the Get-Process cmdlet. Those objects are piped to the Where-Object cmdlet where the objects are filtered based on their name. Only the processes with the name “notepad” are piped to the Stop-Process, which in turn actually stops the processes.

Filtering objects

Rule: Filter as early as possible.

You may encounter situations where your code must handle large numbers of objects. In these cases, you will need to filter that list of objects to gain the best performance. In other words, when you put a filter on a list of objects, only the ones that comply with your filter will be shown.

The Get-Process cmdlet has a Name parameter that you can use. This allows you to filter based on the name, but without having to use the Where-Object cmdlet:

Get-Process -Name notepad | Stop-Process

Execution time: 61 milliseconds.

So, all processes are received and filtered by the Get-Process cmdlet. Only then are they piped to the Stop-Process cmdlet. Doing it this way means that the number of objects (processes) passed from the first to the second pipe is significantly less compared to the first example. It also reduces the pipeline to one pipe. This allows for shorter execution times and less resource utilization.

So I’ve shown you how you can filter on object properties already, but let’s take a deeper look at this.

Where-Object

There are two ways to filter down a list of objects to end up with the ones you need. The first way is to use the Where-Object cmdlet in the pipeline. Let’s take an example where you would need to get all files with the .docx or .txt extensions and with “PowerShell” in their names:

PS D:\> Get-ChildItem -Recurse | Where-Object {(($_.Extension -eq ".docx") -or ($_.Extension -eq ".txt")) -and ($_.Name -like "*PowerShell*")}

 

    Directory: D:\

 

Mode           LastWriteTime   Length  Name

----           -------------   ------  ----

-a---     12-9-2012    10:36   510229  PowerShell ft Hyper-V.docx

-a---     12-9-2012    10:36   8233    PowerShell ft Hyper-V Notes.txt

-a---     2-9-2012     16:24   433672  PowerShell Deep Dives.docx

-a---     2-9-2012     16:24   1285    PowerShell Deep Dives Notes.txt

-a---     21-6-2012    00:52   306913  Practical PowerShell.docx

-a---     21-6-2012    00:52   9835    Practical PowerShell Notes.txt

Execution time: 162 milliseconds.

As you can see, this is done by using a pipelined expression. However, in this case there is a more efficient way to accomplish this: by using the parameters attached to the Get-ChildItem cmdlet. When you take a look at the parameters offered by this cmdlet, you’ll find the Include and Filter parameters. So let’s use those instead of the pipeline:

PS D:\> Get-ChildItem -Recurse -Include *.docx, *.txt -Filter *PowerShell*

 

    Directory: D:\

 

Mode           LastWriteTime   Length  Name

----           -------------   ------  ----

-a---     12-9-2012    10:36   510229  PowerShell ft Hyper-V.docx

-a---     12-9-2012    10:36   8233    PowerShell ft Hyper-V Notes.txt

-a---     2-9-2012     16:24   433672  PowerShell ft Windows.docx

-a---     2-9-2012     16:24   1285    PowerShell ft Windows Notes.txt

-a---     21-6-2012    00:52   306913  Practical PowerShell.docx

-a---     21-6-2012    00:52   9835    Practical PowerShell Notes.txt

Execution time: 82 milliseconds.

As you can see, it is possible to get the same output without using the pipeline.

In Windows PowerShell 3.0, the Get-ChildItem cmdlet also comes with File and Directory parameters, which allow you to filter for only files or directories. So, if you’re only looking for files, using the File parameter would decrease the execution time of the command because directories are skipped entirely.
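For example (a quick sketch; execution times will vary by system), here is the same search restricted to files only:

# -File skips directories entirely, so there is less to enumerate and filter
Measure-Command {
    Get-ChildItem -Recurse -File -Include *.docx, *.txt -Filter *PowerShell*
}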

This is why I always find it useful to know what parameters are offered, and if I don’t know, the Get-Help cmdlet saves the day.

Parameters vs. Where-Object

Sometimes cmdlets have parameters that can filter the objects, and therefore, completely avoid the pipeline. The following is how you could filter a list of objects based on a condition—in this case, the value of the Manufacturer property:

PS D:\> Get-WmiObject -Class win32_bios -Property manufacturer | Where-Object {$_.Manufacturer -eq "Hewlett-Packard"}

 

__GENUS          : 2

__CLASS          : Win32_BIOS

__SUPERCLASS     :

__DYNASTY        :

__RELPATH        :

__PROPERTY_COUNT : 1

__DERIVATION     : {}

__SERVER         :

__NAMESPACE      :

__PATH           :

Manufacturer     : Hewlett-Packard

PSComputerName   :

 

Execution time: 82 milliseconds.

There is, however, a more efficient way of doing this. The Get-WmiObject cmdlet offers you the Query parameter. You can use this parameter to search for the object and show it, based on a condition set for the value of the Manufacturer property:

PS D:\> Get-WMIObject -Query "SELECT Manufacturer FROM Win32_BIOS WHERE Manufacturer='Hewlett-Packard'"

 

__GENUS          : 2

__CLASS          : Win32_BIOS

__SUPERCLASS     :

__DYNASTY        :

__RELPATH        :

__PROPERTY_COUNT : 1

__DERIVATION     : {}

__SERVER         :

__NAMESPACE      :

__PATH           :

Manufacturer     : Hewlett-Packard

PSComputerName   :

Execution time: 27 milliseconds.

Filtering this way is faster and uses fewer resources. More importantly, it uses the Windows PowerShell System Provider for WMI.

Properties

When you're done filtering the objects, you still have all of their properties attached to them. That is a lot of information that you may not even need, and it consumes resources. It can slow down your script and your system, which is not what you want. So how can you clean this up?

This is where the Select-Object cmdlet and the pipeline come into play: 

PS D:\> Get-ChildItem -Recurse -Include *.docx, *.txt -Filter *PowerShell* | Select-Object LastWriteTime, Name

 

    Directory: D:\

 

     LastWriteTime  Name

     -------------  ----

12-9-2012    10:36  PowerShell ft Hyper-V.docx

12-9-2012    10:36  PowerShell ft Hyper-V Notes.txt

2-9-2012     16:24  PowerShell ft Windows.docx

2-9-2012     16:24  PowerShell ft Windows Notes.txt

21-6-2012    00:52  Practical PowerShell.docx

21-6-2012    00:52  Practical PowerShell Notes.txt

There isn't another way to trim the objects down to only the properties you need; Select-Object is the way to go here.

Piping is one of the best and most powerful features in Windows PowerShell. This blog showed you how to utilize commands, parameters, and the pipeline in the most efficient way.

Here is the code for the discount offer today at www.manning.com: scriptw2
Valid for 50% off PowerShell Deep Dives and SQL Server MVP Deep Dives

Offer valid from April 2, 2013 12:01 AM until April 3 midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Windows Software Update Services


Summary: Honorary Scripting Guy, Boe Prox, shares an excerpt from his contribution to the book, PowerShell Deep Dives.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from PowerShell Deep Dives
     Edited by Jeffery Hicks, Richard Siddaway, Oisin Grehan, and Aleksandar Nikolic.

Photo of book cover


Here’s Boe…

Although the UI can be clunky and slow, WSUS has an API, there is a new module available in Windows Server 2012, and there is even an open source WSUS module that I wrote called PoshWSUS. All of these allow you to quickly manage WSUS and generate reports by using Windows PowerShell.

Instead of looking at the existing cmdlets available in the 2012 Windows Server Update Services module, I will dive into the API and show you some tricks to further extend the reach of Windows PowerShell into WSUS. I’ll show how to:

  • Look at the WSUS configuration and events.
  • Provide reporting on various client and patch statuses.
  • Start and view synchronization progress and history.
  • View and create automatic installation rules to simplify patch management by approving the common updates that your clients require.

WSUS server configuration and events

In WSUS, some of the most basic administration concepts are client management and patch management. Before Windows Server 2012, the only two solutions to this were working with the UI or digging into the API via scripts or the open source module, PoshWSUS. With Server 2012, we now have a WSUS module called UpdateServices that makes managing clients easier. The UpdateServices module is only available to use on the WSUS server, which isn’t bad if you plan to use Windows PowerShell remoting to manage the server. If you are not running Windows Server 2012, the module will not be available, and you need to use the API to manage a remote WSUS server.
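Where the Windows Server 2012 module is available, a minimal sketch of that remoting approach looks like this (it assumes a WSUS server named WSUS01 with Windows PowerShell remoting enabled; Get-WsusServer comes from the UpdateServices module installed on that server):

# Run the UpdateServices cmdlets where they live: on the WSUS server itself
Invoke-Command -ComputerName WSUS01 -ScriptBlock {
    Get-WsusServer
}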

Initial connection

To make a connection to the WSUS server locally or remotely with the API, you need the WSUS Administration console installed on the system that you will be making the connection from. After the console has been installed, you will have access to the required assemblies that you can then load and use for the WSUS connection. So with that, let's load the assembly, and then we can make the initial connection to the WSUS server.

[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null

For the connection attempt, I will be using the Microsoft.UpdateServices.Administration.AdminProxy class along with the GetUpdateServer() method. This method accepts 1 of 3 parameter sets based on your WSUS configuration and whether it is a remote or local connection. For the remote connection that I will be making, I need only supply the remote system name, a Boolean value that says whether the connection is secure, and the remote port that I need to connect to on the WSUS server. (Acceptable ports for WSUS are 80 and 8530 for non-secure connections, and 443 and 8531 for SSL.)

$Wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer(

    "Boe-PC",

    $False,

    "8530"

)

$Wsus

 

WebServiceUrl                    : http://BOE-PC:8530/ApiRemoting30/WebService.asmx

BypassApiRemoting                : False

IsServerLocal                    : True

Name                             : BOE-PC

Version                          : 6.2.9200.16384

IsConnectionSecureForApiRemoting : True

PortNumber                       : 8530

PreferredCulture                 : en

ServerName                       : BOE-PC

UseSecureConnection              : False

ServerProtocolVersion            : 1.8

From here, you can see what version of the WSUS software you are running, among other things. The most important thing here is that you can now see that we have successfully connected to the WSUS server.

Viewing WSUS configuration

After the initial connection has been made, you can easily take a look at the internal configuration settings of the WSUS server by using the GetConfiguration() method of the Microsoft.UpdateServices.Internal.BaseApi.UpdateServer object.

$wsus.GetConfiguration()

 

UpdateServer                                 : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

LastConfigChange                             : 9/17/2012 2:22:43 AM

ServerId                                     : 64ad0f03-e81d-4539-883d-0c08066d1e82

SupportedUpdateLanguages                     : {he, cs, fr, es...}

TargetingMode                                : Server

SyncFromMicrosoftUpdate                      : True

IsReplicaServer                              : False

HostBinariesOnMicrosoftUpdate                : False

UpstreamWsusServerName                       :

UpstreamWsusServerPortNumber                 : 8530

UpstreamWsusServerUseSsl                     : False

UseProxy                                     : False

ProxyName                                    :

ProxyServerPort                              : 80

UseSeparateProxyForSsl                       : False

SslProxyName                                 :

SslProxyServerPort                           : 443

AnonymousProxyAccess                         : True

ProxyUserName                                :

ProxyUserDomain                              :

HasProxyPassword                             : False

AllowProxyCredentialsOverNonSsl              : False

This is just a small sampling of the 121 properties that are returned by this method. The majority of these properties are settable, meaning that you can easily update them from Windows PowerShell. Use caution when making any changes to the properties because it could leave your server in an unusable state!
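For example, here is a hedged sketch of changing one of those settings (TargetingMode appears in the output above; Save() commits the change to the server, so treat this as illustrative only):

$config = $wsus.GetConfiguration()
$config.TargetingMode = 'Client'   # switch from server-side to client-side targeting
$config.Save()                     # persist the change back to the WSUS server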

Viewing the WSUS database connection

You can take a look at the database connection and the database properties from your WSUS server by using the GetDatabaseConfiguration() method and the CreateConnection() method from the created Microsoft.UpdateServices.Internal.DatabaseConfiguration object.

$wsus.GetDatabaseConfiguration()

 

UpdateServer                   : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

ServerName                     : MICROSOFT##WID

DatabaseName                   : SUSDB

IsUsingWindowsInternalDatabase : True

AuthenticationMode             : WindowsAuthentication

UserName                       :

Password                       :

 

$wsus.GetDatabaseConfiguration().CreateConnection()

 

QueryTimeOut      : 150

LoginTimeOut      : 60

ConnectionPooling : True

ApplicationName   : WSUS:powershell:1824

UserLoginName     :

UseIntegrated     : True

ConnectionString  :

MaxPoolSize       : 100

DoRetry           : False

DefaultRetryTimes : 3

ServerName        : MICROSOFT##WID

DatabaseName      : SUSDB

Password          :

IsConnected       : False

InTransaction     : False

The amount of detail that you can get regarding the database is pretty nice. In fact, you could dive even deeper into the database if you wanted, but that is beyond the scope of this blog.

Viewing WSUS event history

If you are interested in viewing the event history of the WSUS server, you can reach it by calling the GetUpdateEventHistory(StartDate,EndDate) method and supplying a start date and an end date. In this case, I just want to look at the events that have occurred during the past hour.

$wsus.GetUpdateEventHistory("$((Get-Date).AddHours(-1))","$(Get-Date)")

 

UpdateServer          : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

HasAssociatedUpdate   : False

UpdateId              : Microsoft.UpdateServices.Administration.UpdateRevisionId

HasAssociatedComputer : False

ComputerId            :

Status                : Unknown

WsusEventId           : ContentSynchronizationSucceeded

WsusEventSource       : Server

Id                    : f01cb84f-9a0b-4da8-a12a-39a6866c5787

CreationDate          : 9/23/2012 7:08:20 PM

Message               : Content synchronization succeeded.

IsError               : False

ErrorCode             : 0

Row                   : Microsoft.UpdateServices.Internal.DatabaseAccess.EventHistoryTableRow

 

UpdateServer          : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

HasAssociatedUpdate   : True

UpdateId              : Microsoft.UpdateServices.Administration.UpdateRevisionId

HasAssociatedComputer : False

ComputerId            :

Status                : Unknown

WsusEventId           : ContentSynchronizationFileDownloadSucceeded

WsusEventSource       : Server

Id                    : 0c7ade08-87d6-4019-b676-0f50ce486591

CreationDate          : 9/23/2012 7:08:20 PM

Message               : Content file download succeeded. Digest:  Source File: /msdownload/update/v3-19990518/cabpool/w

                        indowsinstaller-kb893803-v2-x86_830994754ba721add8a13bd0266d2e092f21cab0.exe Destination File:

                        F:\WsusContent\B0\830994754BA721ADD8A13BD0266D2E092F21CAB0.exe.

IsError               : False

ErrorCode             : 0

Row                   : Microsoft.UpdateServices.Internal.DatabaseAccess.EventHistoryTableRow

With this information, you could audit for any failures that occurred during a recent synchronization, or for any other condition that might point to a problem with the WSUS server.
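For example, a quick error filter over the last day of events might look like this (a sketch that reuses the same $wsus connection):

# Only the events flagged as errors from the last 24 hours
$wsus.GetUpdateEventHistory("$((Get-Date).AddDays(-1))","$(Get-Date)") |
    Where-Object { $_.IsError } |
    Select-Object CreationDate, WsusEventId, Message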

Automatic Approval Rules

With WSUS, you can automate your patch approvals simply by creating and configuring Automatic Approval Rules. You can specify categories, target groups, and other criteria to use for the rules.

Locating approval rules

To find out what approval rules are currently on the WSUS server, use the GetInstallApprovalRules() method from the Microsoft.UpdateServices.Internal.BaseApi.UpdateServer object created from the initial connection.

$wsus.GetInstallApprovalRules()

 

 UpdateServer   : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

Id             : 2

Name           : Default Automatic Approval Rule

Enabled        : False

Action         : Install

Deadline       :

CanSetDeadline : True

This is not actually all of the information for the approval rules. To find out what Target Groups, Classifications, and Categories are contained in the Microsoft.UpdateServices.Internal.BaseApi.AutomaticUpdateApprovalRule object, you need to use the GetComputerTargetGroups(), GetUpdateClassifications(), and GetCategories() methods, respectively.

$approvalRules = $wsus.GetInstallApprovalRules()

 

#Get the Update Classifications

$wsus.GetInstallApprovalRules()[0].GetUpdateClassifications()

UpdateServer              : Microsoft.UpdateServices.Internal.BaseApi.UpdateSer

                            ver

Id                        : e6cf1350-c01b-414d-a61f-263d14d133b4

Title                     : Critical Updates

Description               : A broadly released fix for a specific problem

                            addressing a critical, non-security related bug.

ReleaseNotes              :

DefaultPropertiesLanguage :

DisplayOrder              : 2147483647

ArrivalDate               : 9/23/2012 6:51:37 PM

 

UpdateServer              : Microsoft.UpdateServices.Internal.BaseApi.UpdateSer

                            ver

Id                        : 0fa1201d-4330-4fa8-8ae9-b877473b6441

Title                     : Security Updates

Description               : A broadly released fix for a product-specific

                            security-related vulnerability. Security

                            vulnerabilities are rated based on their severity

                            which is indicated in the Microsoft® security

                            bulletin as critical, important, moderate, or low.

ReleaseNotes              :

DefaultPropertiesLanguage :

DisplayOrder              : 2147483647

ArrivalDate               : 9/23/2012 6:40:34 PM

 

#Get the Computer Target Groups

$wsus.GetInstallApprovalRules()[0].GetComputerTargetGroups()

UpdateServer               Id                         Name                    

------------               --                         ----                    

Microsoft.UpdateService... a0a08746-4dbe-4a37-9adf... All Computers  

 

#Get the Categories

$wsus.GetInstallApprovalRules()[0].GetCategories()

Type                      : Product

ProhibitsSubcategories    : True

ProhibitsUpdates          : False

UpdateSource              : MicrosoftUpdate

UpdateServer              : Microsoft.UpdateServices.Internal.BaseApi.UpdateSer

                            ver

Id                        : a105a108-7c9b-4518-bbbe-73f0fe30012b

Title                     : Windows Server 2012

Description               : Windows Server 2012

ReleaseNotes              :

DefaultPropertiesLanguage :

DisplayOrder              : 2147483647

ArrivalDate               : 9/23/2012 6:47:20 PM

Creating approval rules

Creating an approval rule is a simple process that involves first creating the approval object with a name, and then filling in the blanks for the rest of the configuration (Target Groups, Categories, Classifications, and so on) on the object before deploying it on the server. First, I verify that the rule I am going to create (named "2012Servers") doesn't already exist:

#Look at current rules

$wsus.GetInstallApprovalRules()

 

UpdateServer   : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

Id             : 2

Name           : Default Automatic Approval Rule

Enabled        : False

Action         : Install

Deadline       :

CanSetDeadline : True

No Rules exist with the name I plan to use, so I can continue with the creation of the new Approval Rule.

Listing 1: Creating an Approval Rule

[cmdletbinding()]

Param (

  [parameter(ValueFromPipeline=$True,Mandatory=$True,

  HelpMessage="Name of WSUS server to connect to.")]

  [Alias('WSUSServer')]

  [string]$Computername,

  [parameter()]

  [Switch]$UseSSL,

  [parameter()]

  [int]$Port = 8530   #Non-SSL WSUS port; $Port is used in the GetUpdateServer call below

)

[reflection.assembly]::LoadWithPartialName(

  "Microsoft.UpdateServices.Administration"

) | out-null

$Wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer(

  $Computername,$UseSSL,$Port

)

 

#Create New Rule Object

$newRule = $wsus.CreateInstallApprovalRule("2012Servers")

 

##Categories

#Get Categories for Windows Server

$updateCategories = $wsus.GetUpdateCategories() | Where {

  $_.Title -LIKE "Windows Server 2012*"

}

 

#Create collection for Categories

$categoryCollection = New-Object Microsoft.UpdateServices.Administration.UpdateCategoryCollection

$categoryCollection.AddRange($updateCategories)

 

#Add the Categories to the Rule

$newRule.SetCategories($categoryCollection)

 

##Classifications

#Get all Classifications for specific Classifications

$updateClassifications = $wsus.GetUpdateClassifications() | Where {

  $_.Title -Match "Critical Updates|Service Packs|Updates|Security Updates"

}

 

#Create collection for Categories

$classificationCollection = New-Object Microsoft.UpdateServices.Administration.UpdateClassificationCollection

$classificationCollection.AddRange($updateClassifications )

 

#Add the Classifications to the Rule

$newRule.SetUpdateClassifications($classificationCollection)

 

##Target Groups

#Get Target Groups required for Rule

$targetGroups = $wsus.GetComputerTargetGroups() | Where {

  $_.Name -Match "All Computers"

}

 

#Create collection for TargetGroups

$targetgroupCollection = New-Object Microsoft.UpdateServices.Administration.ComputerTargetGroupCollection

$targetgroupCollection.AddRange($targetGroups)

 

#Add the Target Groups to the Rule

$newRule.SetComputerTargetGroups($targetgroupCollection)

 

#Finalize the creation of the rule object

$newRule.Enabled = $True

$newRule.Save()

 

#Run the rule

$newRule.ApplyRule()

 

Let’s make sure that the rule is now created.

 

$wsus.GetInstallApprovalRules()

 

UpdateServer   : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

Id             : 2

Name           : Default Automatic Approval Rule

Enabled        : False

Action         : Install

Deadline       :

CanSetDeadline : True

 

UpdateServer   : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer

Id             : 6

Name           : 2012Servers

Enabled        : True

Action         : Install

Deadline       :

CanSetDeadline : True

Now we have a new Approval Rule that will approve only the Windows Server 2012 updates and classifications I specified. Keep in mind that automatic approval rules only run after WSUS synchronizes, and only synched updates will be eligible for the rule unless you run the rule manually.
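If you'd rather not wait for the next scheduled synchronization, you can start one from Windows PowerShell and then apply the rule yourself. Here is a hedged sketch using the subscription object exposed by the same $wsus connection:

# Trigger a synchronization and poll until it finishes
$subscription = $wsus.GetSubscription()
$subscription.StartSynchronization()

do {
    Start-Sleep -Seconds 30
    $status = $subscription.GetSynchronizationStatus()
    "Synchronization status: $status"
} while ($status -eq 'Running')

# Now the rule can be applied against the newly synched updates
$newRule.ApplyRule()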

In today's blog, I showed a number of things that you can do with Windows PowerShell to manage your WSUS server by using the available APIs, such as looking at the configuration settings of the WSUS server and auditing events. Building Automatic Approval Rules and providing more detailed reporting are also easily accomplished by using the APIs.

With Windows Server 2012, you do have the option of using the Update Services module to perform some basic WSUS administration such as patch approvals, but for more advanced configurations and reporting, the APIs are definitely the way to go.

There is also a module that I wrote called PoshWSUS that provides cmdlets that allow for a more advanced administration. With multiple options for automating your WSUS server, you can’t go wrong. If you write some scripts for your own WSUS server, I hope that you will share those with the rest of the community. 

Here is the code for the discount offer today at www.manning.com: scriptw2
Valid for 50% off PowerShell Deep Dives and SQL Server MVP Deep Dives

Offer valid from April 2, 2013 12:01 AM until April 3 midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy  

Excel Spreadsheets


Summary: Microsoft MVP, Richard Siddaway, shares an excerpt from his book, PowerShell in Practice.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from PowerShell in Practice
  By Richard Siddaway

Photo of book cover

It's a fair assumption to say that the Microsoft Office applications will be found on almost every desktop computer in work environments. It's possible to work with most of the Office applications by using Windows PowerShell. There are COM objects representing most of them. In this technique from PowerShell in Practice, author Richard Siddaway shows how to create an Excel spreadsheet and add data to it, and how to open a CSV file in Excel, from wherever in the world you happen to be, without triggering a bug that affects Excel 2007 and earlier.

In this set of tips, we'll concentrate on using Excel because this is one of the applications we're most likely to use as administrators. The Microsoft Script Center has a lot of VBScript examples of using Excel that can be converted to Windows PowerShell. The first thing we need to do is to create an Excel spreadsheet, and spreadsheets seem much more useful when they have data in them.

Creating a spreadsheet

Creating an Excel spreadsheet should be a simple act, in theory. But if you don’t happen to be in the U.S., there’s a slight issue in the shape of a bug in versions of Excel 2007 and earlier that can prevent this from working. After reading this, it won’t matter where you live. If you’re using Excel 2010, the first version in Listing 1 can be used wherever you live and work.

Problem

We need to create an Excel spreadsheet from within a Windows PowerShell script.

Solution

The Excel.application COM object can be used to create a spreadsheet.

Listing 1: Create Excel spreadsheet

$xl = New-Object -ComObject "Excel.Application"    1

$xl.visible = $true

$xlbooks =$xl.workbooks.Add()

 

$xl = New-Object -ComObject "Excel.Application"    2

$xl.visible = $true

$xlbooks =$xl.workbooks

$newci = [System.Globalization.CultureInfo]"en-US"

$xlbooks.PSBase.GetType().InvokeMember("Add",

[Reflection.BindingFlags]::

InvokeMethod, $null, $xlbooks, $null, $newci)

1 U.S. version

2 International version

Discussion

If you live in the U.S. or are using a computer that’s configured to the U.S. locale, you can use the first option in Listing 1. Otherwise, you have to use the second, international option. (See the Regional and Language settings in Control Panel, as shown in the following screenshot.)

Image of menu

If you want to remain with Windows PowerShell rather than succumbing to the GUI, you can check the culture by typing $psculture (in Windows PowerShell 2.0). If en-US isn’t returned, you need to use the second option in Listing 1.

The simple way to create a spreadsheet starts by creating the COM object by using New-Object. We make it visible. Administrators are clever people, but working on an invisible spreadsheet may be a step too far…especially on a Monday morning. At this point, we have only the Excel application open. We need to add a workbook to enable us to use the spreadsheet.

If the computer isn’t using the U.S. culture (I live in England so $psculture returns en-GB), we have two options. The first option is to change the culture on the computer to en-US, which isn’t convenient. Otherwise, we have to use the second option given in the listing.

We start in the same way by creating the COM object and making the spreadsheet visible. A variable, $xlbooks, which represents the workbooks in the spreadsheet, is created. A second variable, $newci, which represents the culture, is created. Note that we're forcing the culture used to create the workbook to be U.S. English. The last line is a bit complicated, but we're dropping down into the base workbook object and invoking the Add method using the U.S. English culture. If you don't want to see the long list of data on screen when this last line is run, add | Out-Null to the end of the line. This is awkward, but it does get us past the bug. The good news is that, once we've created our workbook, we can add data into it.
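If you'd rather not remember which branch applies to a given computer, a small sketch like this (based on the same code as Listing 1) can check $PSCulture and pick the right option automatically:

$xl = New-Object -ComObject "Excel.Application"
$xl.Visible = $true
$xlbooks = $xl.Workbooks

if ($PSCulture -eq 'en-US') {
    # U.S. locale: the simple Add() call is safe
    $wkbk = $xlbooks.Add()
}
else {
    # Other locales on Excel 2007 and earlier: invoke Add under the en-US culture
    $newci = [System.Globalization.CultureInfo]"en-US"
    $wkbk = $xlbooks.PSBase.GetType().InvokeMember("Add",
        [Reflection.BindingFlags]::InvokeMethod, $null, $xlbooks, $null, $newci)
}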

Adding data to a spreadsheet

A spreadsheet without data isn’t much use to us, so we need to investigate how we can add data into the spreadsheet and perform calculations on that data.

Problem

We need to populate our spreadsheet with some data.

Solution

Expanding on the previous script, we can create a worksheet to hold the data. The starting point is to remove any previous version of the spreadsheet (#1), as shown in Listing 2. We use Test-Path to determine whether the file exists and Remove-Item to delete it. The Confirm parameter could be used with Remove-Item as an additional check if required, which is useful when working with important data.

Listing 2: Add data to Excel spreadsheet 

$sfile = "C:\test\test.xlsx"

if(Test-Path $sfile){Remove-Item $sfile}             1

 

$xl = New-Object -comobject "Excel.Application"

$xl.visible = $true

$xlbooks =$xl.workbooks

$newci = [System.Globalization.CultureInfo]"en-US"

$wkbk = $xlbooks.PSBase.GetType().InvokeMember("Add",

[Reflection.BindingFlags]

::InvokeMethod, $null, $xlbooks, $null, $newci)

$sheet = $wkbk.WorkSheets.Item(1)                    2

 

$sheet.Cells.Item(1,1).FormulaLocal = "Value"        3

$sheet.Cells.Item(1,2).FormulaLocal = "Square"       3

$sheet.Cells.Item(1,3).FormulaLocal = "Cube"         3

$sheet.Cells.Item(1,4).FormulaLocal = "Delta"        3

 

$row = 2                                             4

 

for ($i=1;$i -lt 25; $i++){                          5

 

    $f = $i*$i

 

    $sheet.Cells.Item($row,1).FormulaLocal = $i

    $sheet.Cells.Item($row,2).FormulaLocal = $f

    $sheet.Cells.Item($row,3).FormulaLocal = $f*$i

    $sheet.Cells.Item($row,4).FormulaR1C1Local = "=RC[-1]-RC[-2]"

 

    $row++

}

 

 [void]$wkbk.PSBase.GetType().InvokeMember("SaveAs",

[Reflection.BindingFlags]

::InvokeMethod, $null, $wkbk, $sfile, $newci)                  6

 

[void]$wkbk.PSBase.GetType().InvokeMember("Close",

[Reflection.BindingFlags]

::InvokeMethod, $null, $wkbk, 0, $newci)       7

$xl.Quit()                                              8

1 Delete previous files

2 Create spreadsheet

3 Set headers

4 Row counter

5 Create data

6 Save

7 Close

8 Quit

The next step is to create the spreadsheet. In this case, I’ve used the international method. After the workbook is created, we can create a worksheet (#2). Worksheet cells are referred to by the row and column, as shown by creating the column headers (#3).

A counter is created (#4) for the rows. A for loop (#5) is used to calculate the square and the cube of the loop index. This is a simple example to illustrate the point. In reality, the data could be something like the number of rows exported compared to the number of rows imported for each table involved in a database migration. Note that the difference between the square and the cube is calculated by counting back from the current column.

We save the spreadsheet when all of the data has been written to it (#6), and close the workbook (#7). Note that in Excel 2007 and earlier we have to use a construction similar to the one used for adding a workbook to get around the culture issue. If we were using the en-US culture, those lines would become:

$wkbk.SaveAs($sfile)

$wkbk.Close()

The last action is to quit the application (#8).

Discussion

There are numerous reasons why you would want to record data into a spreadsheet, but the performance implications must be understood.

Note   Adding data into an Excel spreadsheet in this manner can be extremely slow; in fact, it can be painfully slow if a lot of data needs to be written into the spreadsheet. I strongly recommend creating a CSV file with the data and manually importing it into Excel instead of working directly with the spreadsheet.

This technique could be used to create reports, for instance from some of the WMI-based scripts we saw earlier. The computer name and relevant information could be written into the spreadsheet. Alternatively, we can write the data to a CSV file and then open it in Excel.
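For example, here is a minimal sketch that produces the same kind of Value/Square/Cube data as Listing 2 and exports it to a CSV file (PowerShell 3.0 syntax for the custom object; the path is illustrative):

# Build the data as objects and let Export-Csv do the file work
1..24 | ForEach-Object {
    [pscustomobject]@{
        Value  = $_
        Square = $_ * $_
        Cube   = $_ * $_ * $_
    }
} | Export-Csv -Path C:\test\data.csv -NoTypeInformation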

Opening a CSV file in Excel

We have seen how writing data directly into a spreadsheet is slow. Slow tends to get frustrating, so we need another way to get the data into a spreadsheet. If we can write the data to a CSV file, we can open that file in Excel. It’s much faster and more satisfying.

Problem

Having decided that we need to speed up creating our spreadsheet, we need to open a CSV file in Excel.

Solution

The Open method will perform this action, as shown in Listing 3. 

Listing 3: Open a CSV file

$xl = New-Object -comobject "excel.application"

$xl.WorkBooks.Open("C:\Scripts\Office\data.csv")

$xl.visible = $true

Discussion

As in the previous examples, we start by creating an object to represent the Excel application. We can then use the Open method of the workbooks to open the CSV file. The only parameter required is the path to the file. The full path has to be given. We then make the spreadsheet visible so we can work with it. Alternatively we could use:

Invoke-Item data.csv

This depends on the default action in the file associations to open the file in Excel. Hal Rottenberg graciously reminded me of this one.

The Microsoft Office applications are extremely widespread in the Windows environment. We can create and access documents using these applications in Windows PowerShell. This enables us to produce a reporting and documentation system for our computers based on using Windows PowerShell with WMI and COM.

~Richard

Here is the code for the discount offer today at www.manning.com: scriptw3
Valid for 50% off PowerShell in Practice and Learn Windows PowerShell in a Month of Lunches
Offer valid from April 3, 2013 12:01 AM until April 4 midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Technique: Determining Replication Schedules


Summary: Windows PowerShell MVP, Richard Siddaway, shares another excerpt from his book PowerShell in Practice.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from PowerShell in Practice
  By Richard Siddaway

Photo of book cover

In this technique from PowerShell in Practice, author Richard Siddaway explains how to display the replication schedule of an Active Directory site link in more detail than the GUI tools provide.

When a site link is created, a replication interval (default 180 minutes) and a schedule (default 24 × 7) are created. The schedule controls when replication can start, not when replication can happen. If the schedule is set for only 1:00 to 2:00 A.M., replication can start during that period; but once started, it will continue until finished even if that goes beyond 2:00 A.M.

Accessing the schedule in the GUI is awkward in that Active Directory Sites and Services has to be opened, then we have to drill down into the transport mechanisms to find the site link, open its properties, and finally click the Schedule button. This will show the schedule on an hourly basis for each day of the week.

Additionally, we can’t use the InterSiteReplicationSchedule property, because if a schedule is set as 24 × 7, nothing shows when you list the InterSiteReplicationSchedule property. If it’s set to anything else, we get System.DirectoryServices.ActiveDirectory.ActiveDirectorySchedule returned instead of the actual schedule. Let’s write a script that will sort this out for us.

Problem

We want an easy way to see the replication schedules of our site link. Ideally we want the display to show more detail than the GUI tools.

Solution

We need to unravel the way Active Directory stores the schedule information to get to that display. The script to do so is shown in Listing 1.

Listing 1: Display replication schedule

$sched = @()                                               #1
$days = "Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"                   #1

$hours = " " * 11
for ($j=0; $j -le 23; $j++){$hours += "{0,-4}" -f $j}
$sched += $hours                                           #2

$for = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()                       #3
$fortyp = [System.DirectoryServices.ActiveDirectory.DirectoryContextType]"forest"                  #3
$forcntxt = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext($fortyp, $for)    #3

$link = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySiteLink]::FindByName($forcntxt, "MyNewSite3-MyNewSite4")    #4

for ($i=0; $i -le 6; $i++) { #days                         #5
    $out = ""
    $out += $days[$i].PadRight(11)
    for ($j=0; $j -le 23; $j++) { #hours                   #6
        for ($k=0; $k -le 3; $k++) { #15 minutes           #7
            if ($link.InterSiteReplicationSchedule.RawSchedule.psbase.GetValue($i,$j,$k)){$out += "Y"}    #8
            else {$out += "n"}                             #9
        }
    }
    $sched += $out                                         #10
}
$sched                                                     #11

Discussion

I like this script because it gives me more information than the GUI and makes that information easier to access. The following display shows the replication schedule for 15-minute intervals through the whole week.

Image of command output

The numbers across the top row are the hours of the day (24-hour clock). I chose to show when replication is allowed with a capital Y; when it isn’t, I use a lowercase n. This makes the replication schedule easier to understand.

It’s time to see how we get to this display. We start by creating a couple of arrays (#1). The first is empty and will hold the schedule data, whereas the second holds the days of the week. If you don’t want to type the days of the week into a script like this, you can generate them this way:

$days = 0..6 | foreach{([System.DayofWeek]$_).ToString()}

Use the range operator to pipe the numbers 0 through 6 into ForEach-Object. The System.DayOfWeek enumeration is used to generate the name of each weekday.

The next job is to create the top row of the table that holds the hours. Our starting point is the $hours variable, which has 11 spaces. This is padding to allow for the column of day names in the table. The values are simply numbers, so we can use a loop to put each integer value into a four-character field by using the -f operator and the .NET string formatting functionality. Each value is then appended to the $hours variable. Once the loop completes, the $hours variable is appended to the array holding the schedule (#2). We then need to generate a forest context (#3) by going through the usual steps to create it.
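
To illustrate the -f formatting used for the header row, here is a quick standalone snippet:

"{0,-4}" -f 7          # "7   " - the value is left-aligned in a four-character field
$hours = " " * 11      # padding to clear the day-name column
0..23 | ForEach-Object { $hours += "{0,-4}" -f $_ }
$hours                 # 11 spaces followed by the hours 0 to 23 in four-character columns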

The ActiveDirectorySiteLink class has a FindByName() method that uses the forest context and the name of the link (#4). A site link has an InterSiteReplicationSchedule.RawSchedule property consisting of 672 Boolean entries in a three-dimensional array. Each value represents a period of 15 minutes counted sequentially from 00:00 on Sunday. We can use a set of nested loops to unravel these entries.
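
If you want to convince yourself of the shape of that array, here is a hedged sketch that assumes $link was obtained as in Listing 1 and that its schedule is not the default 24 × 7 (otherwise InterSiteReplicationSchedule is empty):

$raw = $link.InterSiteReplicationSchedule.RawSchedule
$raw.Rank                # 3 dimensions: day, hour, 15-minute block
$raw.GetLength(0)        # 7 days
$raw.GetLength(1)        # 24 hours
$raw.GetLength(2)        # 4 blocks -> 7 * 24 * 4 = 672 Boolean values
$raw.GetValue(0, 1, 0)   # can replication start on Sunday between 01:00 and 01:15?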

The outer loop (#5) counts through the seven days of the week. The processing for each day initializes an empty string as an output variable and adds the day of the week name to it. We pad the name to 11 characters to make everything line up; it's much easier to read that way.

The middle loop counts through the hours of the day (#6), and the inner loop counts the 15-minute blocks of each hour (#7). The correct value is retrieved from the schedule by using the loop counters as indices (#8). If set to True, the output variable has a Y appended; if it’s False, an n is appended (#9).

At the end of the loop representing the days, the output variable is appended to our schedule array (#10).

When all 672 values have been processed, we can display the schedule (#11) to produce the display shown previously.

~Richard 

Here is the code for the discount offer today at www.manning.com: scriptw3
Valid for 50% off PowerShell in Practice and Learn Windows PowerShell in a Month of Lunches
Offer valid from April 3, 2013 12:01 AM until April 4 midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Why PowerShell Uses Objects


Summary: Microsoft PowerShell MVPs, Don Jones and Jeffery Hicks, show the flexibility and the power of Windows PowerShell objects.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from Learn Windows PowerShell 3 in a Month of Lunches, Second Edition
  By Don Jones and Jeffery Hicks

Photo of book cover

The use of objects in Windows PowerShell can be one of its most confusing elements, but at the same time it’s one of the critical concepts, affecting everything you do in “the shell.” We’ve tried different explanations over the years, and we’ve settled on a couple that work well for distinctly different audiences. This blog, based on Chapter 8 of Learn Windows PowerShell 3 in a Month of Lunches, shows you how Windows PowerShell uses objects.

One of the reasons why Windows PowerShell uses objects to represent data is, well, you have to represent data somehow, right? Windows PowerShell could have stored that data in a format like XML, or perhaps its creators could have decided to use plain-text tables. But they had some specific reasons why they didn’t take that route.

The first reason is that Windows itself is an object-oriented operating system—or at least, most of the software that runs on Windows is object oriented. Choosing to structure data as a set of objects is easy because most of the operating system lends itself to those structures.

Another reason to use objects is because they ultimately make things easier on you and give you more power and flexibility. For the moment, let’s pretend that Windows PowerShell doesn’t produce objects as the output of its commands. Instead, it produces simple text tables, which is what you probably thought it was doing in the first place. When you run a command like Get-Process, you’re getting formatted text as the output: 

PS C:\> get-process

 

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName

-------  ------    -----      ----- -----   ------     -- -----------

     39       5     1876       4340    52    11.33   1920 conhost

     31       4      792       2260    22     0.00   2460 conhost

     29       4      828       2284    41     0.25   3192 conhost

    574      12     1864       3896    43     1.30    316 csrss

    181      13     5892       6348    59     9.14    356 csrss

    306      29    13936      18312   139     4.36   1300 dfsrs

    125      15     2528       6048    37     0.17   1756 dfssvc

   5159    7329    85052      86436   118     1.80   1356 dns

What if you wanted to do something else with this information? Perhaps you want to make a change to all of the processes running conhost. To do this, you’d have to filter the list a bit. In a UNIX or Linux shell, you’d use a command like Grep, telling it, “Look at this text list for me. Keep only those rows where columns 58–64 contain the characters ‘conhost.’ Delete all of the other rows.” The resulting list would contain only those processes you specified:

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName

-------  ------    -----      ----- -----   ------     -- -----------

     39       5     1876       4340    52    11.33   1920 conhost

     31       4      792       2260    22     0.00   2460 conhost

     29       4      828       2284    41     0.25   3192 conhost 

You’d then pipe that text to another command, perhaps telling it to extract the process ID from the list. “Go through this and get the characters from columns 52–56, but drop the first two (header) rows.” The result might be this:

1920

2460

3192

Finally, you’d pipe that text to yet another command, asking it to kill the processes (or whatever else you were trying to do) represented by those ID numbers.

This is, in fact, exactly how UNIX and Linux administrators work. They spend a lot of time learning how to get better at parsing text, using tools like Grep, Awk, and Sed, and becoming proficient in the use of regular expressions. Going through this learning process makes it easier for them to define the text patterns they want their computer to look for. UNIX and Linux folks like programming languages like Perl because those languages contain rich text-parsing and text-manipulation functions. But this text-based approach does present some problems:

  • You can spend more time messing around with text than doing your real job.
  • If the output of a command changes (say, moving the ProcessName column to the start of the table), you have to rewrite all of your commands because they’re all dependent on things like column positions.
  • You have to become proficient in languages and tools that parse text—not because your job involves parsing text, but because parsing text is a means to an end.

The use of objects in Windows PowerShell helps remove all of that text manipulation overhead. Because objects work like tables in memory, you don’t have to tell Windows PowerShell in which text column a piece of information is located. Instead, you tell it the column name, and Windows PowerShell knows exactly where to go to get that data. Regardless of how you arrange the final output on the screen or in a file, the in-memory table is always the same, so you never have to rewrite your commands because a column moved. You spend a lot less time on overhead tasks, and more time focusing on what you want to accomplish.
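To make the contrast concrete, here is a minimal sketch of the object-based approach described above; Stop-Process with -WhatIf is used purely as an illustration of acting on the filtered objects:

# Filter by the Name property instead of by column position
Get-Process -Name conhost |
    Select-Object -ExpandProperty Id     # pull out the Id property directly

# Or pass the objects straight to another command (use -WhatIf to preview the action)
Get-Process -Name conhost | Stop-Process -WhatIf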

Here is the code for the discount offer today at www.manning.com: scriptw4
Valid for 50% off Learn Windows PowerShell 3 in a Month of Lunches, Second Edition and Learn Windows IIS in a Month of Lunches
Offer valid from April 4, 2013 12:01 AM until April 5 midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


Enabling Multihop Remoting


Summary: Microsoft PowerShell MVPs, Don Jones and Jeffery Hicks, discuss how to enable multihop remoting in Windows PowerShell 3.0.

Microsoft Scripting Guy, Ed Wilson, is here. Tonight is our Windows PowerShell User Group meeting in Charlotte, NC. I will be making a presentation about using Windows 8 to perform remote management, and we will also be holding a Lync meeting with the Philadelphia User Group at the same time. Click the following link to join us online from 7:00 – 8:00 P.M. Eastern Standard Time: Charlotte Windows PowerShell User Group meeting.

This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

Today, the excerpt is from Learn Windows PowerShell 3 in a Month of Lunches, Second Edition
  By Don Jones and Jeffery Hicks

Photo of book cover

When you’re remoting into a computer, don’t run Enter-PSSession from that computer unless you fully understand what you’re doing. Let’s say you work on Computer A, which runs Windows 7, and you remote into Server-R2. At the Windows PowerShell prompt, you run this:

[server-r2] PS C:\>enter-pssession server-dc4

Server-R2 is maintaining an open connection to Server-DC4, which can start to create a “remoting chain” that’s hard to keep track of, and which imposes unnecessary overhead on your servers. There may be times when you have to do this, mainly when a computer like Server-DC4 sits behind a firewall and you can’t access it directly, so you use Server-R2 as a middleman to hop over to it. But as a general rule, try to avoid remote chaining.

Some people refer to “remote chaining” as “the second hop,” and it’s a major Windows PowerShell “gotcha.” We offer a hint: if the Windows PowerShell prompt is displaying a computer name, you’re done. You can’t issue any more remote control commands until you exit that session and “come back” to your computer.

The following drawing depicts the second hop or “multihop” problem: You start on Computer A, and you create a PSSession connection to Computer B. That’s the first hop, and it’ll probably work fine. But, then you try to ask Computer B to create a second hop (or connection) to Computer C—and the operation fails.

Image of setup

The problem is related to the way Windows PowerShell delegates your credentials from Computer A to Computer B. Delegation is the process of enabling Computer B to execute tasks as if it were you, thus ensuring that you can do anything you’d normally have permissions to do—but nothing more. By default, delegation can only traverse one such “hop”—Computer B doesn’t have permission to delegate your credentials to a third computer, Computer C.

In Windows Vista, Windows 7, and Windows 8, you can enable this multihop delegation. Two steps are needed:

  1. On your computer (Computer A in the example), run Enable-WSManCredSSP –Role Client –DelegateComputer x. You’ll replace x with the name of the computer where your credentials may be delegated. You could specify an individual computer name, but you might also use wildcard characters. We don’t recommend using * because that presents some real security concerns, but you might authorize an entire domain, for example: *.company.com.
  2. On the server that you’re connecting to first (Computer B in the example), run Enable-WSManCredSSP –Role Server.

The changes made by the command will be applied to the computers’ local security policies; you could also manually make these changes via a Group Policy Object, which might make more sense in a large domain environment. Managing this via Group Policy is beyond the scope of this blog, but you can find more information in the Help for Enable-WSManCredSSP. Don also authored a Secrets of PowerShell Remoting Guide that covers the policy-related elements in more detail.
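
To tie the two steps together, here is a hedged sketch of what a CredSSP-enabled second hop might look like. ServerB and ServerC are placeholder names, COMPANY\Administrator is a hypothetical account, and the commands assume the Enable-WSManCredSSP steps above have already been run on the appropriate machines:

# Prompt for the credential that will be delegated
$cred = Get-Credential COMPANY\Administrator

# Ask for CredSSP explicitly on the first hop so the credential can be delegated onward
Invoke-Command -ComputerName ServerB -Authentication CredSSP -Credential $cred -ScriptBlock {
    # Because ServerB may now delegate the credential, this second hop succeeds
    Invoke-Command -ComputerName ServerC -ScriptBlock { hostname }
}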

Here is the code for the discount offer today at www.manning.com: scriptw4
Valid for 50% off Learn Windows PowerShell 3 in a Month of Lunches, Second Edition and Learn Windows IIS in a Month of Lunches
Offer valid from April 4, 2013 12:01 AM until April 5 midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

How to Remove a Loaded Module


Summary: Bruce Payette talks about how to remove a module that has been loaded into your Windows PowerShell environment.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

This excerpt is from Windows PowerShell in Action
  By Bruce Payette

Photo of book cover

Now you know how to avoid creating clutter in your session. But what if it’s too late and you already have too much stuff loaded? You’ll learn how to fix that in this blog, which is based on Chapter 9 of Windows PowerShell in Action, Second Edition.

One of the unique features that Windows PowerShell modules offer is the idea of a composite management application. This is conceptually similar to the idea of a web mashup, which takes an existing service and tweaks it or layers on top of it to achieve a more specific purpose. The notion of management mashups is important as we move into the era of “software+services” (or “clients+clouds,” if you prefer).

Low operating costs make hosted services attractive. The issue is how you manage all these services, especially when you need to delegate administrative responsibilities to slices of the organization.

For example, you might have each department manage its user resources such as mailboxes, customer lists, and web portals. To do this, you need to slice the management interfaces and republish them as a single coherent management experience.

Sounds like magic, doesn’t it? Well, much of it still is, but Windows PowerShell modules can help because they allow you to merge the interfaces of several modules and republish only those parts of the interfaces that need to be exposed.

Removing a loaded module

Modules are loaded by using the Import-Module cmdlet. The syntax for this cmdlet is shown in the following screenshot. As you can see, this cmdlet has a lot of parameters, allowing it to address a wide variety of scenarios.

Image of syntax

Because your Windows PowerShell session can be long running, there may be times when you want to remove a module. You do this with the Remove-Module cmdlet.

Note  Typically, the only people who remove modules are those who are developing the module in question or those who are working in an application environment that’s encapsulating various stages in the process as modules. A typical user rarely needs to remove a module. The Windows PowerShell team almost cut this feature because it turns out to be quite hard to do in a sensible way.

Here is the syntax for Remove-Module:

Image of syntax

When a module is removed, all the modules it loaded as nested modules are also removed from the global module table. This happens even if the module was explicitly loaded at the global level. To illustrate how this works, let’s take a look at how the module tables are organized in the environment. This organization is shown here.

Image of module arrangement

First let’s talk about the global module table. This is the master table that has references to all the modules that have been loaded explicitly or implicitly by another module. Any time a module is loaded, this table is updated. An entry is also made in the environment of the caller.

In the following image, Module1 and Module3 are loaded from the global module environment, so there are references to them from the top-level module table. Module1 loads Module2, causing a reference to be added to the global module table and to the private module table for Module1. Module2 loads Module3 as a nested module. Because Module3 has already been loaded from the global environment, no new entry is added to the global module table, but a private reference is added to the module table for Module2. You’ll remove Module3 from the global environment. The updated arrangement of modules is shown here.

Image of module arrangement

Next, you’ll update Module3 and reload it at the top level. The final arrangement of modules is shown here. 

Image of module arrangement

In the final arrangement of modules in the previous image, there are two versions of Module3 loaded into the same session. Although this is extremely complicated, it permits multiple versions of a module to be loaded at the same time in the same session, allowing different modules that depend on different versions of a module to work at the same time. This is a pretty pathological scenario, but the real world isn’t always tidy. Eventually you do have to deal with things you’d rather ignore, so it’s good to know how.

How exported elements are removed

With an understanding of how modules are removed, you also need to know how the imported members are removed. There are two flavors of member removal behavior depending on the type of member you’re removing. Functions, aliases, and variables have one behavior. Cmdlets imported from binary modules have a slightly different behavior.

This is an artifact of the way the members are implemented. Functions, aliases, and variables are data structures that are dynamically allocated and can be replaced. Cmdlets are backed by .NET classes, which can’t be unloaded from a session because .NET doesn’t allow the assemblies that contain these classes to be unloaded. Because of this, the implementation of the cmdlet table depends on hiding or shadowing a command when there’s a name collision when importing a name from a module.

For the other member types, the current definition of the member is replaced. So why does this matter? It doesn’t matter at all until you try to remove a module. If you remove a module whose imported cmdlets shadowed existing cmdlets, the previously shadowed cmdlets become visible again. But when you remove a module that has imported colliding functions, aliases, or variables (because the old definitions were overridden instead of shadowed), the definitions are removed.

Modules are manipulated, managed, and imported by using cmdlets in Windows PowerShell. Unlike many languages, no special syntax is needed. Modules are discovered, in memory and on disk, by using the Get-Module cmdlet. They’re loaded with Import-Module and removed from memory with Remove-Module. These three cmdlets are all you need to know if you only want to use modules on your system. In this blog, we zeroed in on removing a loaded module.
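
As a quick recap of that lifecycle, here is a minimal sketch; BitsTransfer is used only as an example module name, so substitute any module available on your system:

Get-Module -ListAvailable          # discover modules on disk
Import-Module BitsTransfer         # load one into the session
Get-Module                         # discover modules in memory
Remove-Module BitsTransfer         # unload it again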

~Bruce

Here is the code for the discount offer today at www.manning.com: scriptw5
Valid for 50% off PowerShell in Action and SharePoint Web Parts in Action
Offer valid from April 5, 2013 12:01 AM until April 6, midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Closures in PowerShell


Summary: Bruce Payette talks about Windows PowerShell closures and how to call the GetNewClosure method.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

This excerpt is from Windows PowerShell in Action
  By Bruce Payette

Photo of book cover

Windows PowerShell uses dynamic modules to create dynamic closures. A closure in computer science terms (at least as defined in Wikipedia) is “a function that is evaluated in an environment containing one or more bound variables.” A bound variable is, for our purposes, a variable that exists and has a value. The environment in our case is the dynamic module. Finally, the function is simply a script block. In effect, a closure is the inverse of an object. An object is data with methods (functions) attached to that data. A closure is a function with data attached to it.

The best way to understand what all this means is to look at an example. You’ll use closures to create a set of counter functions. The advantage closures give you over plain functions is that you can change what increment to use after the counter function has been defined. Here’s the basic function: 

function New-Counter ($increment=1)

{

    $count=0;

    {

        $script:count += $increment

        $count

    }.GetNewClosure()

}

There’s nothing you haven’t seen so far—you create a variable and then a script block that increments that variable—except for returning the result of the call to the GetNewClosure() method. Let’s try this function to see what it does. First, create a counter: 

PS (1) > $c1 = New-Counter

PS (2) > $c1.GetType().FullName

System.Management.Automation.ScriptBlock 

Looking at the type of the object returned, you see that it’s a script block, so you use the & operator to invoke it: 

PS (3) > & $c1

1

PS (4) > & $c1

2

The script block works as you’d expect a counter to work. Each invocation returns the next number in the sequence. Now, create a second counter, but this time set the increment to 2

PS (5) > $c2 = New-Counter 2

Invoke the second counter scriptblock: 

PS (6) > & $c2

2

PS (7) > & $c2

4

PS (8) > & $c2

6

It counts up by 2. But what about the first counter? 

PS (9) > & $c1

3

PS (10) > & $c1

4

The first counter continues to increment by 1, unaffected by the second counter. So the key thing to notice is that each counter instance has its own copies of the $count and $increment variables. When a new closure is created, a new dynamic module is created, and then all the variables in the caller’s scope are copied into this new module.

Here are more examples of working with closures to give you an idea of how flexible the mechanism is. First, you’ll create a new closure by using a param block to set the bound variable $x. This is essentially the same as the previous example, except that you’re using a script block to establish the environment for the closure instead of a named function: 

PS (11) > $c = & {param ($x) {$x+$x}.GetNewClosure()} 3.1415

Now evaluate the newly created closed script block: 

PS (12) > & $c

6.283

This evaluation returns the value of the parameter added to itself. Because closures are implemented by using dynamic modules, you can use the same mechanisms you use with modules to manipulate the state of a closure. You can do this by accessing the module object attached to the script block. You’ll use this object to reset the module variable $x by evaluating Set-Variable (sv) in the closure’s module context:

PS (13) > & $c.Module Set-Variable x "Abc"

Now evaluate the script block to verify that it’s been changed: 

PS (14) > & $c

AbcAbc

Next, create another script block closed over the same module as the first one. You can do this by using the NewBoundScriptBlock() method on the module to create a new script block that is attached to the module associated with the original script block:

PS (15) > $c2 = $c.Module.NewBoundScriptBlock({"x is $x"})

Execute the new script block to verify that it’s using the same $x

PS (16) > & $c2

x is Abc

Now use $c2.module to update the shared variable: 

PS (17) > & $c2.module sv x 123

PS (18) > & $c2

x is 123

And verify that it’s also changed for the original closed script block: 

PS (19) > & $c

246

Finally, create a named function from the script block by using the function provider: 

PS (20) > $function:myfunc = $c

And verify that calling the function by name works: 

PS (21) > myfunc

246 

Set the closed variable yet again, but use $c2 to access the module this time: 

PS (22) > & $c2.Module sv x 3

Verify that it’s changed when you call the named function: 

PS (23) > myfunc

6

These examples should give you an idea about how all of these pieces—script blocks, modules, closures, and functions—are related. This is how modules work. When a module is loaded, the exported functions are closures bound to the module object that was created. These closures are assigned to the names for the functions to import. A fairly small set of types and concepts allow you to achieve advanced programming scenarios.
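
As a small illustration of that last point, here is a hedged sketch that uses New-Module to create a dynamic module whose exported function is a closure over a module-scoped variable; the module and function names are made up for the example:

# Create a dynamic module; its exported function is bound to the module's $count
$m = New-Module -Name CounterDemo {
    $count = 0
    function Step { $script:count++; $count }
}
Step                                    # 1
Step                                    # 2
& $m.NewBoundScriptBlock({ $count })    # read the bound variable directly: 2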

~Bruce

Here is the code for the discount offer today at www.manning.com: scriptw5
Valid for 50% off PowerShell in Action and SharePoint Web Parts in Action
Offer valid from April 5, 2013 12:01 AM until April 6, midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Testing License State and Setting the License Key


Summary: Microsoft Windows PowerShell MVP, Richard Siddaway, talks about using Windows PowerShell and WMI to work with operating system licensing.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

This excerpt is from PowerShell and WMI
  By Richard Siddaway

Photo of book cover

Product activation for Windows servers may seem to be a pain, but it’s a fact of life. You have to do it for two reasons:

  • To ensure the software is properly licensed and you remain legal
  • To keep the servers working

In this blog, which is based on Chapter 13 of PowerShell and WMI, I will explain how to test license state and set the license key. How can you ensure software is properly licensed in the most efficient manner? My friend James O’Neill answered this in a blog post. In it, he references two WMI classes:

  • SoftwareLicensingProduct
  • SoftwareLicensingService

Note These classes are new in Windows 7 and Windows Server 2008 R2. They’re not available on earlier versions of Windows.

These tips are derived from James’ post. You can test the license status of Windows like this:

Get-WmiObject SoftwareLicensingProduct |

select Name, LicenseStatus

LicenseStatus returns an integer value, where 0 = Unlicensed and 1 = Licensed. The query returns a number of results that represent the various ways Windows can be licensed or activated. The important result is the one with a partial product key:

Get-WmiObject SoftwareLicensingProduct |

where {$_.PartialProductKey} |

ft Name, ApplicationId, LicenseStatus -a

This indicates the licensing situation you’re dealing with. It would be nice, though, if you could get a little bit more information about the licensing state of your system.

Testing license state

Has your IT environment ever been audited? Can you prove that all of your servers are properly activated? This section will help you answer the second question. In addition to being a useful test while you’re building a new server, you can use it to test the setup of your whole estate.

Problem

You need to test the activation and license state of your servers for auditing purposes. Some of the servers are in remote locations and you don’t have the time or resources to physically visit them all.

Solution

You’ve seen that the license status information is available through the SoftwareLicensingProduct class. The following listing shows how you can use that class to generate a meaningful statement about the license status of your server. 

Listing 1: Test license status 

$lstat = DATA {

ConvertFrom-StringData -StringData @'

0 = Unlicensed

1 = Licensed

2 = OOB Grace

3 = OOT Grace

4 = Non-Genuine Grace

5 = Notification

6 = Extended Grace

'@

}

function get-licensestatus {

param (

[parameter(ValueFromPipeline=$true,

   ValueFromPipelineByPropertyName=$true)]

  [string]$computername="$env:COMPUTERNAME"

)

PROCESS {

 Get-WmiObject SoftwareLicensingProduct -ComputerName $computername |

 where {$_.PartialProductKey} |

 select Name, ApplicationId,

 @{N="LicenseStatus"; E={$lstat["$($_.LicenseStatus)"]} }

}}

A hash table, $lstat, is defined at the beginning of the script. You can then query the SoftwareLicensingProduct class on the computer passed as a parameter to the function. The results are filtered on the PartialProductKey property to ensure you only get the results you need. There are three pieces of data you need:

  • The name of the product
  • The ApplicationId, which is a GUID
  • The decoded license status

The decoding of the license status is managed by the calculated field in the Select-Object statement.
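
Before moving on to the discussion, here is a hedged usage sketch for the function; the server names are placeholders and assume WMI access to the remote machines:

get-licensestatus                        # query the local machine
"server01", "server02" | get-licensestatus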

Discussion

The following image shows the results of running the function. The ApplicationId is fixed for versions of Windows. You should get the same result returned on all versions.

Image of command output

The results in the previous image show that you’re still in the grace period after installation of the operating system. You need to set the license key before you can activate the server.

Setting the license key

A Windows license key consists of five groups of five alphanumeric characters. A valid license key is required for each instance of Windows. The key is usually found with the media. Keys are specific to the version of Windows and the source of the media. For instance, you can’t use an MSDN key on a commercial version of Windows.

Problem

The license key needs to be set before you can activate the system. You need to perform this act remotely and ensure that the license key is in the correct format.

Solution

Windows 7 and Windows Server 2008 R2 have a WMI class, SoftwareLicensingService, which you can use to solve this issue. This is shown in the following listing. The license key and computer name are mandatory parameters. This removes the need for default values. The license key pattern is evaluated by using a regular expression in the ValidatePattern attribute. This won’t guarantee that the key is correct, but it will ensure that it’s in the right format.

Listing 2: Set license key 

function set-licensekey {

param (

[parameter(Mandatory=$true)]

[string]

[ValidatePattern("^\S{5}-\S{5}-\S{5}-\S{5}-\S{5}")]

$Productkey,

 

[parameter(Mandatory=$true)]

[string]$computername="$env:COMPUTERNAME"

)

 

 $product = Get-WmiObject -Class SoftwareLicensingService `

-computername $computername

 $product.InstallProductKey($ProductKey)

 $product.RefreshLicenseStatus()

}

You use the SoftwareLicensingService class to create a WMI object. You can use the InstallProductKey method with the license key as an argument. The last line of the function refreshes the license status information.

Discussion

The function is used as follows:

set-licensekey -Productkey "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"  `

-computername "10.10.54.118"

The "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" represents the license key. You didn’t really think I’d use my real key? The computer on which you’re installing the key can be designated by IP address as here or by its name.

Configuring a new server is a task that occurs on a regular basis in most organizations. There are a number of steps to be completed after the operating system is installed:

  • Rename the server to something more meaningful.
  • Stop and restart the server as required.
  • Set the IP address and DNS servers.
  • Rename the network connection.
  • Join the server to the domain.
  • Install the license key.
  • Activate the server.
  • Set the power plan.

All of these activities take time. You can use Windows PowerShell functions to perform these tasks remotely so you don’t need to spend time accessing the server directly.

~Richard

Here is the code for the discount offer today at www.manning.com: scriptw6
Valid for 50% off PowerShell and WMI and SharePoint 2010 Workflows in Action
Offer valid from April 6, 2013 12:01 AM until April 7, midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

Performance Counters and Windows System Assessment Report


Summary: Microsoft PowerShell MVP, Richard Siddaway, talks about using Windows PowerShell and WMI to work with performance counters and Windows assessment.

Microsoft Scripting Guy, Ed Wilson, is here. This week we will not have our usual PowerTip. Instead we have excerpts from seven books from Manning Press. In addition, each blog will have a special code for 50% off the book being excerpted that day. Remember that the code is valid only for the day the excerpt is posted. The coupon code is also valid for a second book from the Manning collection.

This excerpt is from PowerShell and WMI
  By Richard Siddaway 

Photo of book cover

Measuring system performance has traditionally involved looking at the performance counters. These can be accessed through the performance monitor (SYSMON for those who remember earlier versions of Windows), and they can be saved as required. You can also use the Get-Counter cmdlet (which works against remote machines), or you can use the WMI Win32_Perf* classes.

Windows Vista introduced the system assessment report. This rates a number of system hardware components (including memory, CPU, disk, and graphics) to produce an overall rating for the system. The higher the score, the better the system should perform.

I’m often asked about system stability. The number of unscheduled restarts is one way to measure stability. Later versions of Windows calculate a stability index on an hourly basis. This is calculated based on failures and changes, with recent events being more heavily weighted. The maximum possible score is 10.

Performance counters are still required to dig into individual aspects of the system. In this set of tips, we’ll cover performance counters and Windows system assessment reports.

Reading performance counters

If you’ve spent any time investigating system performance, you know that there’s a huge list of available Windows performance counters. The problem of finding the correct counter to use is increased when you consider that applications such as SQL Server, IIS, and Exchange Server add their own raft of counters. WMI enables you to access some, but not all, of the counters.

You can see which counters are available on a specific system like this: 

Get-WmiObject -List Win32_PerfFormattedData* | select name 

Here’s an extract from the results:

Win32_PerfFormattedData_PerfDisk_LogicalDisk

Win32_PerfFormattedData_PerfDisk_PhysicalDisk

Win32_PerfFormattedData_PerfOS_PagingFile

Win32_PerfFormattedData_PerfOS_Processor

Win32_PerfFormattedData_PerfOS_Memory

You should use the Recurse parameter when searching for these classes because they won’t necessarily be added to the default WMI namespace.

Tip  The Win32_PerfFormattedData class is a superclass that will call the other performance formatted data classes. There will be a lot of data to wade through.

There are also related classes that return the raw performance counter data. These classes are difficult to use, because each value has to be processed through a calculation to derive a meaningful result. It’s easier to use the formatted WMI classes or Get-Counter.
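
If you prefer the cmdlet route, a hedged Get-Counter sketch is shown below; counter paths are locale-dependent, so the path shown assumes an English system, and Get-Counter also accepts a ComputerName parameter for remote machines:

# Five one-second samples of processor time for every processor instance
Get-Counter -Counter '\Processor(*)\% Processor Time' -SampleInterval 1 -MaxSamples 5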

Problem

You need to monitor the processor performance of one of your systems. The server has multiple processors (or cores), and you need to display the information for each processor core and the total to ensure that the application is using the processor resources in an optimum manner.

Solution

The following listing presents a function that takes a computer name and a number as parameters. The number determines how many times you’ll sample the processor information.

Listing 1 Accessing performance counters

function get-cpucounter{

[CmdletBinding()]

param (

[parameter(ValueFromPipeline=$true,

   ValueFromPipelineByPropertyName=$true)]

   [string]$computername="$env:COMPUTERNAME",

   [int]$number=1

)

BEGIN{                                                              #1

$source=@"

public class CPUcounter

{

    public  string  Timestamp  {get; set;}

    public  string Name         {get; set;}

    public  ulong PercProcTime  {get; set;}

}

"@

Add-Type -TypeDefinition $source -Language CSharpVersion3

}#begin

PROCESS{

1..$number | foreach {

 

$date = (Get-Date).ToString()

 

Get-WmiObject -Class Win32_PerfFormattedData_PerfOS_Processor `

 -ComputerName $computername | foreach {

    $value = New-Object -TypeName CPUCounter -Property @{

       TimeStamp = $date

       Name = $_.Name                                                 #2

       PercProcTime  = $_.PercentProcessorTime

    }

    $value

}

 

Start-Sleep -Seconds 1                                               #3

}

}#process

}

#1 Create class

#2 Create object and set properties

#3 Pause execution

Some inline C# code is used to create a new .NET class to store your results (#1). The class defines three properties—a timestamp, the name of the processor, and the percentage processor time (how much it was used during the measurement period). This is compiled by using Add-Type. Creating a class in this manner enables you to strongly type the properties, which supplies another level of error checking.

The range operator (..) is used to put the required series of numbers into the pipeline. Windows PowerShell will process each value, and for each of them retrieve the processor performance data by using Win32_PerfFormattedData_PerfOS_Processor. One object per processor, plus one for the total, will be returned. You create an object by using your specially created .NET class, populate its properties (#2), and output it. A one-second pause is activated before you start again (#3).
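
Here is a hedged usage sketch for the function; the server names are placeholders:

# Sample the local machine five times and format the output
get-cpucounter -number 5 | Format-Table TimeStamp, Name, PercProcTime -AutoSize

# Or sample a set of remote servers once each by piping their names in
"server01", "server02" | get-cpucounter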

On my development system, I use this code: 

1..10 | foreach {Measure-Command -Expression {
    Get-WmiObject -Class Win32_PerfFormattedData_PerfOS_Processor }}

It shows that the Get-WmiObject command takes about 300 milliseconds to retrieve the data. The function could be altered to change the delay, or you could even make it a parameter.

Discussion

The following image displays the results from using this function. The results show that processing is relatively equally distributed across the two cores. I wouldn’t expect to see the values being identical across all processors or cores all of the time.

Image of command output

Tip  In case you’re wondering how I managed to drive processor performance so high, I set a few continuously looping recursive directory listings. They’re a good way to tax the system without spending a lot of money on simulation tools.

Each of the WMI performance counter classes will need to be investigated to determine the properties that you need to record. For example, the class used here also returns information regarding interrupts.

One common scenario you’ll encounter is users claiming a difference in performance between two systems. You can use the Windows system assessment report to provide a high-level comparison between the hardware of the two systems.

Windows system assessment report

The assessment report was introduced in Windows Vista. It examines a number of hardware components to determine an overall score for the system.

Tip  The overall score is determined by the lowest of the individual component scores. Always examine the full report to determine whether a single component is adversely affecting performance.

Accessing this information for the local computer through the GUI is acceptable, but you also need a way to perform this action remotely.

Problem

You need to create Windows system assessment reports for a number of remote computers. This will enable you to determine which computers should be refreshed and which are worth reusing.

Solution

The following listing utilizes the Win32_WinSat class to solve this issue. A hash table lookup is created to decode the assessment state property. 

Listing 2: System assessment information 

$satstate = DATA {

ConvertFrom-StringData -StringData @'

0 = StateUnknown

1 = Valid

2 = IncoherentWithHardware

3 = NoAssessmentAvailable

4 = Invalid

'@

}

 

function get-systemassessment{

[CmdletBinding()]

param (

[parameter(ValueFromPipeline=$true,

   ValueFromPipelineByPropertyName=$true )]

   [string]$computername="$env:COMPUTERNAME"

)

PROCESS{

 Get-WmiObject -Class Win32_WinSat -ComputerName $computername |

 select CPUScore, D3DScore, DiskScore, GraphicsScore,

 MemoryScore, TimeTaken,

 @{N="AssessmentState"; E={$satstate["$($_.WinSATAssessmentState)"]}},

 @{N="BaseScore"; E={$_.WinSPRLevel}}

 

}#process

}

 The function returns the data from the WMI class and uses Select-Object to output the properties and two calculated fields. One calculated field decodes the assessment state and the other renames the overall score.
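
Here is a hedged usage sketch for this function as well; again, the server names are placeholders:

get-systemassessment                     # local machine
"server01", "server02" | get-systemassessment |
    Format-Table BaseScore, CPUScore, MemoryScore, DiskScore, AssessmentState -AutoSize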

Discussion

This report shouldn’t be taken in isolation when looking at system performance. The age of the system and any remaining warranty should also be considered.

Working with event logs, scheduled jobs, and performance indicators is an essential part of the administrator’s role. Windows PowerShell and WMI provide a number of tools to help you in these tasks:

  • Event log discovery and configuration
  • Backup and clearing of event logs
  • Lifecycle management for scheduled jobs, including creation, discovery, and deletion
  • Retrieval of data from performance counters
  • Production of system assessment reports and stability index data

These techniques enable you to gather data for possible forensic investigations, perform out-of-hours tasks through scheduling jobs, and determine how your systems are performing in real time and with a historic perspective. We discussed two of those techniques: performance counters and Windows system assessment reports.

~Richard

Here is the code for the discount offer today at www.manning.com: scriptw6
Valid for 50% off PowerShell and WMI and SharePoint 2010 Workflows in Action
Offer valid from April 6, 2013 12:01 AM until April 7, midnight (EST)

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy
