Channel: Hey, Scripting Guy! Blog

PowerTip: Find Formatting Information for List Views


Summary: Find Windows PowerShell formatting information for list views.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find information about format data for types for list views?

Hey, Scripting Guy! Answer Use the Get-FormatData cmdlet and pipe the output to Where-Object, for example:

Get-FormatData | ? FormatViewDefinition -Match 'list view'


PowerShell and BitLocker: Part 1


Summary: Guest blogger, Stephane van Gulick, presents a practical hands-on post that shows how to use Windows PowerShell and BitLocker together.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have a new guest blogger, Stephane van Gulick. Stephane was introduced to me by The Scripting Wife; she was browsing the Internet and found his blog. She thought he would be an excellent guest for the Hey, Scripting Guy! Blog, and I totally agreed. This will be a two-part blog series because there is a lot of information to share. Here is where you can find Stephane:

Website: PowerShell District
Twitter: @Stephanevg
LinkedIn: Stéphane van Gulick

Howdy everybody,

Today is a perfect day to give you a good introduction to BitLocker. BitLocker is a Microsoft technology that allows you to encrypt a hard drive on a system. In today’s business world, many users are traveling and taking their laptops with them on their journeys. That laptop could potentially carry sensitive corporate data from clients or from their company. Having this data accessed by someone with bad intentions could be a real issue for the business if the laptop is stolen or forgotten somewhere (that can happen, yes!).

This is a risk that cannot be ignored, and it needs to be tackled by the IT department. This risk mitigation is handled by using BitLocker to encrypt the system drive of the computer.

A security mechanism can be implemented that will limit access to the computer with a PIN code that needs to be given each time the device is booted. Without this PIN, the data on the hard drive is encrypted and cannot be accessed.

Thus a thief cannot simply steal the laptop and plug the hard drive into his computer to access confidential information about the company.

The encryption mechanism can be done by using the GUI, but at the Hey, Scripting Guy! Blog, we are more interested in the scripting side of things.

In this post, I will guide you through the scripting steps to automate the encryption of drive C, which is very commonly the system drive. That is the drive that you need to be sure the thief does not have access to.

When you start to script BitLocker encryption, you might think, “Cool. I will use Windows PowerShell cmdlets.”

Well, that is true. But they only became available in systems with Windows PowerShell 4.0 (thus in Windows 8.1 and Windows Server 2012 R2). Luckily, there is WMI to help us!
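If you want your script to work in both worlds, a quick run-time capability check can decide which route to take. This is a sketch of my own (Enable-BitLocker is one of the native cmdlets that ship with the BitLocker module in Windows 8.1 and Windows Server 2012 R2):

```powershell
# Fall back to WMI when the native BitLocker cmdlets are not available
# (they only ship with Windows PowerShell 4.0 on Windows 8.1 / Server 2012 R2)
if (Get-Command -Name Enable-BitLocker -ErrorAction SilentlyContinue) {
    $useNativeCmdlets = $true    # native BitLocker module is present
} else {
    $useNativeCmdlets = $false   # older system: use the WMI classes shown below
}
$useNativeCmdlets
```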

The second difficulty you might bump into is the logic. Indeed, to encrypt a volume, you do not only work with the hard drive, but also with the Trusted Platform Module (TPM). And a certain order needs to be respected before any encryption operation can be done.

The TPM is actually the microchip located on your motherboard that will encrypt your hard drive. So before you start to encrypt the hard drive, you need to do some specific TPM operations, which we will discuss in detail a bit later.

We need to take care of things such as taking ownership, clearing the TPM, and then launching the encryption of the drive. What? Does it sound confusing? Yes! I agree. But do not worry—we will go through the logic, and in a very short moment everything will become crystal clear.

The fun stuff

A lot of the following script examples come from a function I wrote called BitLockerSAK. It is a tool written in Windows PowerShell that makes BitLocker tasks easier to automate.

When we wanted to automate encryption prior to Windows PowerShell 4.0, we had to dig in to that good old WMI technology. WMI has indeed been here with us for a while, and it will most certainly be here longer. (The fact that the new “Nano Server” will only be administrable through WMI and Desired State Configuration proves it.)

The two main WMI classes you need to know about are:

  • Win32_TPM
  • Win32_EncryptableVolume

Win32_TPM contains methods and properties that we can use to automate TPM tasks on the local machine.

Win32_EncryptableVolume contains the methods and properties we can use to automate encryption tasks, such as the encryption of the drive and returning the percentage of the encryption. (Don’t worry—we will get there.) But first things first...

Let’s talk about the TPM and everything around it.

TPM ownership prerequisites

The TPM logic that needs to be respected has been simplified to the basics in this Visio flow chart:

Image of flow chart

Before starting to manipulate the encryption mechanisms, we need to handle the TPM. The TPM must meet three conditions before the encryption operations can start:

  • The TPM must be enabled.
  • The TPM must be activated.
  • The TPM must be owned.

If all three conditions are met, we can go further and run the encryption operations on the desired disk.

Now that we have our flow, we can script it when we have answered these questions:

  1. How do we identify if the TPM is enabled, activated, and owned?
  2. How do we remediate each step if the condition is not met?

The most difficult part of our job (in my opinion) is not really the scripting part (the Windows PowerShell scripting language really helps simplify things)—it is more about how and where to find the information we need. For this particular case (most likely, in all cases), you can find more information online and directly in Windows PowerShell by using Get-Member.

We already know that the TPM scripting-related activities are done through the Win32_TPM class, so we start our scripting operations by getting the TPM class:

$Tpm = Get-CimClass -Namespace ROOT\CIMV2\Security\MicrosoftTpm -ClassName Win32_Tpm

If we add Get-Member to the TPM variable, we get the following list, where we can easily identify methods that can do the job (highlighted in red):

Image of command output

The other way to get that information is to go directly to the MSDN documentation for the Win32_TPM WMI class. (This is also a good way to get more information about methods that we have found by using Get-Member.)

We can very easily find the three methods that perfectly fit our tasks:

The Windows PowerShell code for each of them would be easy...

First, we have to load the WMI class into a variable:

$Tpm = Get-WmiObject -Namespace ROOT\CIMV2\Security\MicrosoftTpm -Class Win32_Tpm

Then we call the different methods like this:

#TPM enabled
$Tpm.IsEnabled().IsEnabled

#TPM activation
$Tpm.IsActivated().IsActivated

#TPM owned
$Tpm.IsOwned().IsOwned

Each of these methods will return $true if the TPM is enabled, activated, or owned, or $false if not.
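The three checks can be bundled into one small helper. This is a sketch of my own (the function name Test-TpmReady is mine, not part of the WMI class); the -Tpm parameter takes the Win32_Tpm object loaded above:

```powershell
# Sketch: bundle the three TPM prerequisite checks into one test.
# -Tpm takes the Win32_Tpm object retrieved with Get-WmiObject above.
function Test-TpmReady {
    param($Tpm)
    # Enabled, activated, AND owned must all be $true before encryption can start
    $Tpm.IsEnabled().IsEnabled -and
    $Tpm.IsActivated().IsActivated -and
    $Tpm.IsOwned().IsOwned
}
```

Calling `Test-TpmReady -Tpm $Tpm` returns $true only when all three conditions are met and the encryption operations can proceed.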

And what do we do to remediate if one of them returns $false? For the Enable method, it is a piece of cake. By following the same searching method as described earlier, we can simply use the following:

#TPM enable
$Tpm.Enable()

Or to disable it:

#TPM disable
$Tpm.Disable()

The activation is automatically done when the TPM setting is activated in the BIOS. 

Let’s take a minute and sum up all of that in our workflow:

Image of flow chart

There we go! We have the global logic, the methods that we need to verify the logic, and the remediation step methods.

Taking TPM ownership

This looks simple, right? But actually, taking ownership is a little less straightforward than what is described in the previous graphic. (That’s why I highlighted it in orange.) Plus, depending on the hardware you are using, you might encounter a different behavior. Let me explain...

If you are not the owner of the TPM module, you have to clear the TPM module and then attempt to take ownership to finally have all the cards necessary for the encryption actions to start. But if there is already a TPM owner, you do not have to take the ownership, per se. The TPM ownership operations can succeed here without explicitly taking the ownership.

Let’s go further with the wish to take the ownership. Taking the ownership requires several steps for the process to complete successfully. I have summarized it for you in the following flow chart:

Image of flow chart

Now that we have the basic workflow diagram, let’s try to find how we could automate this. Let’s again look at the methods we have available in Win32_TPM with $Tpm | Get-Member, and identify the methods for this operation:

Image of command output

To clear the TPM, we can see that a method named Clear is available. Perfect. This is exactly what we need!

To clear the TPM, we simply use the following command:

#TPM Clearing TPM owner

$Tpm.Clear()

It is important to know that you need to communicate with your domain controller to clear the TPM. Without a connection, this operation will fail and return a value of 2147942402.

We can also see a method named TakeOwnership that takes a parameter of the OwnerAuth type.

When taking the TPM ownership, you actually have the possibility to provide a password so that the owner can be identified. This password is optional, and if used, it must be of the OwnerAuth type. Luckily, we can also see a method called ConvertToOwnerAuth in the screenshot. We can use the following code to convert the password to the required format:

#TPM converting password
$Password = "MyNameIsStephane"
$TPMPassword = $Tpm.ConvertToOwnerAuth($Password).OwnerAuth

And now to take ownership, we call the TakeOwnership() method with the password we previously generated:

$Tpm.TakeOwnership($TPMPassword)

If you have a return value of 2150105108, it means that the TPM already has an owner. If you want to change the owner, you need to clear it. (Remember, if there is already an owner, you do not have to change the owner, but I recommend that you do it.)

Depending on the hardware that you are using, there could be an extra built-in security layer that obliges you to have a physical presence at the computer when the TPM ownership is changed. This means that somebody needs to be physically present during the next boot, and confirm the change by clicking Allow when asked for the BIOS change confirmation.

As scripters, I bet you see the issue that we could be facing here. However, this is important to know so that we can communicate it efficiently or simply avoid this situation. We can pretty easily identify whether a physical presence at the computer will be needed after the TPM ownership has been taken. This can be done by using the GetPhysicalPresenceConfirmationStatus(5) method from the Win32_Tpm class. The return value is an integer from 0 to 4. The following table explains the meanings.

Value  Meaning
0      Not implemented
1      BIOS only
2      Blocked for the operating system by the BIOS configuration
3      Allowed, and physically present user required
4      Allowed, and physically present user not required

If GetPhysicalPresenceConfirmationStatus(5) returns a 3, you will be obliged to be physically present at the computer to validate the TPM clear change. You might want to avoid taking ownership if you need to have a physical presence at the computer when the TPM ownership is taken.

If this is intended to be a remediation script (that you deploy through ConfigMgr, for example), the end user will be asked to validate the changes at the next boot. He most likely will not understand this screen, will end up frustrated, and will have to call the Help Desk. In most cases, it is better to avoid this type of situation.
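When scripting this check, the raw status integer can be translated into the readable values from the table. A small sketch (the function name ConvertTo-ConfirmationText is my own; the mapping follows the table above):

```powershell
# Sketch: translate GetPhysicalPresenceConfirmationStatus output into text.
# The mapping mirrors the documented Win32_Tpm confirmation status values.
function ConvertTo-ConfirmationText {
    param([int]$Status)
    switch ($Status) {
        0 { 'Not implemented' }
        1 { 'BIOS only' }
        2 { 'Blocked for the operating system by the BIOS configuration' }
        3 { 'Allowed, and physically present user required' }
        4 { 'Allowed, and physically present user not required' }
        default { 'Unknown' }
    }
}
```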

Note  Taking ownership is not necessarily needed. If there is already an owner listed for the TPM on the system, you can bypass this option and attempt to encrypt the drive immediately.

Let’s summarize all of this in our flow chart so that we have a global vision of how this works:

Image of flow chart

Another method that can be pretty useful, but is not necessarily mandatory, is GetPhysicalPresenceTransition(). This method helps identify exactly where we are in the process of taking ownership. The returned values are as follows:

Value  Meaning
0      No user action is needed to perform a TPM physical presence operation.
1      To perform a TPM physical presence operation, the user must shut down the computer and then turn it on by using the power button. The user must be physically present at the computer to accept or reject the change when prompted by the BIOS.
2      To perform a TPM physical presence operation, the user must restart the computer by using a warm reboot. The user must be physically present at the computer to accept or reject the change when prompted by the BIOS.
3      The required user action is unknown.

This method tells you when you need to reboot or shut down the computer to confirm the ownership changes.

TPM ownership: The fast way

There is one more method I would like to highlight: SetPhysicalPresenceRequest(). This method allows us to combine several of the steps I explained earlier, according to the value we provide. If we want to clear, enable, and activate the TPM (with a random owner password generated for us), we call the method like this:

#TPM clear + enable + activate
$Tpm.SetPhysicalPresenceRequest(14)

I would represent it in a graphic like this:

Image of flow chart

Easy, right? You can simply assume that all the prerequisites for taking ownership of the TPM are met (clear + activate + enable).

There is a caveat though. Depending on the hardware you are using, when you perform this operation, you might be asked for a physical presence at the computer to validate the BIOS confirmation message.

TPM ownership: Complete overview

Now, if we put all of this together in one big visual representation, the global BitLocker ownership operations look like this:

Image of flow chart

SetPhysicalPresenceRequest(14) surely reduces the number of steps, but it also reduces your control over the process.

To summarize, the graphic shows two ways to take TPM ownership:

  • The controlled and longer path (green + red parts)
  • The more direct way (yellow section)

You will have to choose according to your needs.

You can find more information about the methods I have discussed in the Win32_Tpm class documentation on MSDN.

~Stephane

Thank you, Stephane, for sharing your time and knowledge. That is all for today. Please join us tomorrow when Stephane will finish this exciting blog post.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Determine Letter a String Begins With


Summary: Use Windows PowerShell to determine the letter that a string begins with.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find if a particular string begins with the letter “s”?

Hey, Scripting Guy! Answer Use the StartsWith method of the string, for example:

PS C:\> "string".StartsWith("S")

False

PS C:\> "string".StartsWith("s")

True

Note The StartsWith method is case sensitive. Therefore “S” does not match the example, but “s” does.
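If you need a case-insensitive test instead, the same .NET method accepts a StringComparison argument:

```powershell
# Case-insensitive variant: pass a StringComparison value to StartsWith
"string".StartsWith("S", [System.StringComparison]::OrdinalIgnoreCase)
# Returns True
```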

PowerShell and BitLocker: Part 2


Summary: Guest blogger, Stephane van Gulick, continues his series about using Windows PowerShell and BitLocker together.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome back Stephane van Gulick for the final part of his two-part series. Be sure you read PowerShell and BitLocker: Part 1 first.

Encryption operations

A lot of the following script examples come from a function I wrote called BitLockerSAK. It is a tool written in Windows PowerShell that makes BitLocker tasks easier to automate.

Finally, we arrive at the interesting part: the encryption of the drive. Don’t get me wrong—the Trusted Platform Module (TPM) operations are extremely important in the process of automating the drive encryption. Without these steps, the drive encryption might not even happen. But this is where I had the most fun in the scripting process.

Are you sitting comfortably? You might want to get a refill of coffee before we hit it. Ready? All right...let’s go!

Everything that relates to the proper encryption of the drive and that needs to be automated resides in the WMI (CIM) repository. It lies in the same Root\cimv2\Security\ namespace hierarchy as the Win32_TPM. But this time we will dive into the Win32_EncryptableVolume class.

The Win32_EncryptableVolume class contains an instance for each of the volumes that are present on the computer (for example, hard drives and USB keys).

We can look into it by using the following command, and because we generally want to encrypt the system drive, we will filter on drive C.

Using Get-CimInstance will look like this (the results are shown in green in the following image):

$CIMVolumeC = Get-CimInstance -Namespace "Root\cimv2\security\MicrosoftVolumeEncryption" -ClassName "Win32_EncryptableVolume" -Filter "DriveLetter = 'C:'"

Or we can use Get-WmiObject as follows for retrocompatibility (shown in red in the following image):

$WMIVolumeC = Get-WmiObject -Namespace "Root\cimv2\security\MicrosoftVolumeEncryption" -Class "Win32_EncryptableVolume" -Filter "DriveLetter = 'C:'"

As you can see, these two commands return (almost) the same results:

Image of command output

The only difference is that Get-WmiObject returns the instance plus the system properties (they start with a double underscore, “__”).

Let’s look at the properties and methods we have access to through the two methods.

Get-CIMInstance returns the following list:

Image of command output

Get-WMIObject returns a bunch more methods—there are so many that we cannot see them all on this screenshot:

Image of command output

The CIM option returns only 18 results when piped to Get-Member:

Image of command output

But good old Get-WMIObject returns 84 results:

Image of command output

Now that we have seen the methods that are available, we can start to work with them.

Key protectors

Prior to launching the encryption of a specific volume, we need to set a key protector. A key protector protects the volume encryption key, which in turn protects the encrypted volume.

We can find all the key protectors that can be set by using the following code:

$EncryptionData = Get-WmiObject -Namespace "Root\cimv2\security\MicrosoftVolumeEncryption" -Class "Win32_EncryptableVolume" -Filter "DriveLetter = 'c:'"

We have a few methods available as shown in the following screenshot:

Image of command output

Those I have worked with the most are:

  • ProtectKeyWithTPM
  • ProtectKeyWithTPMAndPIN
  • ProtectKeyWithNumericalPassword

Theoretically, we could allow any key protector on any computer. But this is something you want to control in your environment. This can be easily achieved by using a Group Policy Object (GPO).

Each key protector delivers a different encryption experience, and it will need some custom scripting to make it work in your environment.

We will not go into the details of each because that would make this post even longer than it already is. But each of the previous methods is documented on MSDN, so you can find everything that you need there.

Protection key IDs and types

We list the key protectors that are currently on a computer by using GetKeyProtectors and GetKeyProtectorType from the Win32_EncryptableVolume class. Here is the code from my BitLockerSAK function:

$BitLocker = Get-WmiObject -Namespace "Root\cimv2\Security\MicrosoftVolumeEncryption" -Class "Win32_EncryptableVolume" -Filter "DriveLetter = '$DriveLetter'"
$ProtectorIds = $BitLocker.GetKeyProtectors("0").VolumeKeyProtectorID
$Return = @()
foreach ($ProtectorID in $ProtectorIds){
    $KeyProtectorType = $BitLocker.GetKeyProtectorType($ProtectorID).KeyProtectorType
    $KeyType = ""
    switch ($KeyProtectorType){
        "0" {$KeyType = "Unknown or other protector type";break}
        "1" {$KeyType = "Trusted Platform Module (TPM)";break}
        "2" {$KeyType = "External key";break}
        "3" {$KeyType = "Numerical password";break}
        "4" {$KeyType = "TPM And PIN";break}
        "5" {$KeyType = "TPM And Startup Key";break}
        "6" {$KeyType = "TPM And PIN And Startup Key";break}
        "7" {$KeyType = "Public Key";break}
        "8" {$KeyType = "Passphrase";break}
        "9" {$KeyType = "TPM Certificate";break}
        "10" {$KeyType = "CryptoAPI Next Generation (CNG) Protector";break}
    }#endSwitch
    $Properties = @{"KeyProtectorID"=$ProtectorID;"KeyProtectorType"=$KeyType}
    $Return += New-Object -TypeName psobject -Property $Properties
}#EndForeach
Return $Return

This enumerates all the existing key protectors. Based on their IDs, it fetches their types, puts them in a custom object, and returns the information through the $Return variable.

You will have something similar to this:

Image of command output

Those I have seen the most are:

  • Numerical Password (return value 3)
  • TPM and PIN (return value 4)

BitLocker Drive Encryption operations

Finally, we come to the part about BitLocker Drive Encryption operations...

There is one main WMI class that hosts all the encryption methods and properties of all of your drives: the Win32_EncryptableVolume. You will find this class in the Root\cimv2\security\MicrosoftVolumeEncryption namespace.

Global protection state

Prior to any encryption operations, you most likely want to verify which state the drive is in. If it is already 100% encrypted, that will save you some time. We can get that information by using the following code:

$ProtectionState = Get-WmiObject -Namespace ROOT\CIMV2\Security\MicrosoftVolumeEncryption -Class Win32_EncryptableVolume -Filter "DriveLetter = 'c:'"
switch ($ProtectionState.GetProtectionStatus().ProtectionStatus){
    ("0"){$return = "Unprotected"}
    ("1"){$return = "Protected"}
    ("2"){$return = "Unknown"}
    default {$return = "NoReturn"}
}
return $return

We get a value of either 0, which means the drive is unprotected, or 1, which means the drive is protected.

Image of command output

This is a first step. If the drive is protected, you can quit the whole script logic because this means that your drive is currently 100% encrypted, and it is ready for the wild, wild west.

Encryption state and encryption percentage

If you want the see the current encryption state of your drive, you can use the following code:

$EncryptionData = Get-WmiObject -Namespace ROOT\CIMV2\Security\MicrosoftVolumeEncryption -Class Win32_EncryptableVolume -Filter "DriveLetter = 'c:'"
$ProtectionState = $EncryptionData.GetConversionStatus()
$CurrentEncryptionProgress = $ProtectionState.EncryptionPercentage
switch ($ProtectionState.ConversionStatus){
    "0" {
        $Properties = @{'EncryptionState'='FullyDecrypted';'CurrentEncryptionProgress'=$CurrentEncryptionProgress}
        $Return = New-Object psobject -Property $Properties
    }
    "1" {
        $Properties = @{'EncryptionState'='FullyEncrypted';'CurrentEncryptionProgress'=$CurrentEncryptionProgress}
        $Return = New-Object psobject -Property $Properties
    }
    "2" {
        $Properties = @{'EncryptionState'='EncryptionInProgress';'CurrentEncryptionProgress'=$CurrentEncryptionProgress}
        $Return = New-Object psobject -Property $Properties
    }
    "3" {
        $Properties = @{'EncryptionState'='DecryptionInProgress';'CurrentEncryptionProgress'=$CurrentEncryptionProgress}
        $Return = New-Object psobject -Property $Properties
    }
    "4" {
        $Properties = @{'EncryptionState'='EncryptionPaused';'CurrentEncryptionProgress'=$CurrentEncryptionProgress}
        $Return = New-Object psobject -Property $Properties
    }
    "5" {
        $Properties = @{'EncryptionState'='DecryptionPaused';'CurrentEncryptionProgress'=$CurrentEncryptionProgress}
        $Return = New-Object psobject -Property $Properties
    }
    default {
        Write-Verbose "Couldn't retrieve an encryption state."
        $Properties = @{'EncryptionState'=$false;'CurrentEncryptionProgress'=$false}
        $Return = New-Object psobject -Property $Properties
    }
}
return $Return

The current encryption state and the current percentage of encryption of the drive will be returned. If I launch this part of the code on my computer with elevated rights, the following results are returned:

Image of command output

Note  In the case of decryption, the percentage represents the amount of encrypted space.

The following Visio flow chart helps us see a global overview. It shows the action and the methods that are related to these actions.

Image of flow chart

Encryption

Now that we have identified the current state of the drive, we want to start the encryption. At this state, you should already have a protection key.

If we take a peek at the MSDN documentation for ProtectKeyWithNumericalPassword, we see that this method has two input parameters [in] and one output parameter [out]. But both input parameters are marked [optional]. This means that we can actually call this method without passing any parameters.

Note  The following code will only work if you have set a GPO that allows drive protection by using TPM and PIN.

$pin = "123456"
$ProtectionState = Get-WmiObject -Namespace ROOT\CIMV2\Security\MicrosoftVolumeEncryption -Class Win32_EncryptableVolume -Filter "DriveLetter = '$DriveLetter'"
Write-Verbose "Launching drive encryption."
$ProtectorKey = $ProtectionState.ProtectKeyWithTPMAndPIN("ProtectKeyWithTPMAndPIN","",$pin)
Start-Sleep -Seconds 3
$NumericalPasswordReturn = $ProtectionState.ProtectKeyWithNumericalPassword()
$Return = $ProtectionState.Encrypt()
$ReturnCode = $Return.ReturnValue
switch ($ReturnCode) {
    ("0"){$message = "Operation successfully started."}
    ("2147942487"){$message = "The EncryptionMethod parameter is provided but is not within the known range or does not match the current Group Policy setting."}
    ("2150694958"){$message = "No encryption key exists for the volume."}
    ("2150694957"){$message = "The provided encryption method does not match that of the partially or fully encrypted volume."}
    ("2150694942"){$message = "The volume cannot be encrypted because this computer is configured to be part of a server cluster."}
    ("2150694956"){$message = "No key protectors of the type Numerical Password are specified. The Group Policy requires a backup of recovery information to Active Directory Domain Services."}
    default {
        $message = "An unknown status was returned by the Encryption action."
    }
}
$Properties = @{'ReturnCode'=$ReturnCode;'ErrorMessage'=$message}
$Return = New-Object psobject -Property $Properties
return $Return

As you can see, we use the following two methods to encrypt our drive:

  • ProtectKeyWithTPMandPIN
  • ProtectKeyWithNumericalPassword

To protect our volume, we will use the ProtectKeyWithTPMAndPIN method. For this method, there are several parameters that we could pass, but only PIN is a required parameter.

According to the documentation, PIN accepts a user-specified personal identification string as input. This string must consist of a sequence of 4 to 20 digits or, if the "Allow enhanced PINs for startup" Group Policy is enabled, 4 to 20 letters, symbols, spaces, or numbers.
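It can be handy to validate the PIN against these rules up front, before calling ProtectKeyWithTPMAndPIN. Here is a minimal sketch for standard (digits-only) PINs; the function name Test-BitLockerPin is my own, and enhanced PINs would need a looser pattern:

```powershell
# Hypothetical pre-check for standard BitLocker PINs: 4 to 20 digits.
# (Enhanced PINs, if allowed by Group Policy, also permit letters and symbols.)
function Test-BitLockerPin {
    param([string]$Pin)
    $Pin -match '^\d{4,20}$'
}
```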

If a 0 is returned (operation successfully started), you can run the previous status code and see how the encryption percentage progresses over time.
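For example, the progress could be polled in a loop until the conversion status reports FullyEncrypted. This is a sketch of my own (the function name Wait-DriveEncryption is mine; -Volume takes the Win32_EncryptableVolume object retrieved earlier):

```powershell
# Sketch: poll GetConversionStatus() until ConversionStatus is 1 (FullyEncrypted).
# -Volume takes a Win32_EncryptableVolume object; run elevated on a real system.
function Wait-DriveEncryption {
    param($Volume, [int]$DelaySeconds = 30)
    do {
        $status = $Volume.GetConversionStatus()
        Write-Verbose ("Encryption progress: {0}%" -f $status.EncryptionPercentage)
        if ($status.ConversionStatus -ne 1) { Start-Sleep -Seconds $DelaySeconds }
    } until ($status.ConversionStatus -eq 1)
    $status.EncryptionPercentage   # 100 when fully encrypted
}
```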

Pause the encryption

If at any time you want to pause the encryption, you can use the following code:

$BitLocker = Get-WmiObject -Namespace "Root\cimv2\Security\MicrosoftVolumeEncryption" -Class "Win32_EncryptableVolume" -Filter "DriveLetter = '$DriveLetter'"
$ReturnCode = $BitLocker.PauseConversion()
switch ($ReturnCode.ReturnValue){
    "0"{$Return = "Paused successfully.";break}
    "2150694912"{$Return = "The volume is locked.";break}
    default {$Return = "Unknown return code.";break}
}
return $Return

Note  To continue the encryption from where it was paused, simply use the previous encryption code to call the Encrypt() method again.

The drive encryption logic is summarized in the following Visio flow chart. It shows the actions and the methods that are related to these actions.

Image of flow chart

Decryption

In some cases, you might want or need to decrypt a drive. Again, this can be done through the Win32_EncryptableVolume WMI class with the following code:

$BitLocker = Get-WmiObject -Namespace "Root\cimv2\Security\MicrosoftVolumeEncryption" -Class "Win32_EncryptableVolume" -Filter "DriveLetter = 'c:'"
$ReturnCode = $BitLocker.Decrypt()
switch ($ReturnCode.ReturnValue){
    "0"{$Return = "Decryption started successfully.";break}
    "2150694912"{$Return = "The volume is locked.";break}
    "2150694953"{$Return = "This volume cannot be decrypted because keys used to automatically unlock data volumes are available.";break}
    default {$Return = "Unknown return code.";break}
}
return $Return

If the code is launched, it will start the decryption of drive C.

If you run the encryption state code again, you will see that the decryption has started and the CurrentEncryptionProgress percentage gets closer to zero each time you run it.

Image of command output

The methodology should be familiar to most of you by now. If we combine the previous code examples, we can quite easily build similar logic around the Decrypt() method.

Image of flow chart

Global encryption logic

I have presented a lot of code, and all of these single tasks need to be done in a specific order. I have summarized all the BitLocker encryption logic in the following Visio flow chart:

Image of flow chart

If the encryption involves a TPM, the TPM also needs to be activated; therefore, some specific TPM actions need to be performed. (Those details are discussed in the first post of this series.)

BitLockerSAK

The BitLocker Swiss Army Knife (BitLockerSAK) is a project I started a while ago. It started with the need to automate TPM and BitLocker encryption for one of my clients. This client didn’t have Windows PowerShell 3.0 deployed—thus no BitLocker or CIM cmdlets.

After repetitively executing Get-WMIObject calls, I thought I would simplify the complete process and combine all of this in one unique tool that would have the look and feel of the well-known Manage-bde.exe. I wrote version 1.0 in a weekend and posted it shortly after.

BitLockerSAK makes TPM and drive encryption operations through Windows PowerShell much easier than calling the different WMI methods directly. It has additional logic that saves a lot of time for those who need to script BitLocker or TPM tasks. I have used it in complex encryption scripts and in Configuration Manager configuration items to identify non-encrypted computers and remediate the non-compliant ones.

The following tables might look similar, but I have simplified them (especially the WMI Method section) to help you identify how to execute which encryption or TPM task according to which tool you are using.

TPM operations equivalence

The following table lists the most common TPM WMI methods (based on Win32_TPM) and their BitLockerSAK equivalents.

 

Operation            WMI method (Win32_TPM)        BitLockerSAK
TPM Enabled          .IsEnabled().isenabled        BitLockerSAK -isTPMEnabled
TPM Activated        .IsActivated().isactivated    BitLockerSAK -isTPMActivated
TPM Owned            .IsOwned().Isowned            BitLockerSAK -isTPMOwned
Take TPM ownership   .ClearTpm + .TakeOwnerShip    BitLockerSAK -TakeTPMOwnership

Encryption operations equivalences

The following table lists the most common encryption WMI methods (based on Win32_EncryptableVolume) and their BitLockerSAK equivalents.

 

Operation                WMI method (Win32_EncryptableVolume)             BitLockerSAK
Get protection status    .protectionStatus + return-code conversion       BitLockerSAK -GetProtectionStatus
Get encryption state     .GetConversionStatus() + encryptionpercentage    BitLockerSAK -GetEncryptionState
Get key protector type   .GetKeyProtectorType("ID")                       BitLockerSAK -GetKeyProtectorTypeAndID
Get key protector ID     .GetKeyProtectors().volumekeyprotector           BitLockerSAK -GetKeyProtectorTypeAndID
Delete key protector     .DeleteKeyProtectors()                           BitLockerSAK -DeleteKeyProtector -protectorID "ID"
Encrypt drive            Specify the protector type + .Encrypt()          BitLockerSAK -encrypt -pin "123456"
Pause encryption         .PauseConversion()                               BitLockerSAK -PauseEncryption
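To illustrate the difference in verbosity, here is the same "get encryption state" task done both ways; the BitLockerSAK switch is taken from the table above, and the output formatting is my own:

```powershell
# Raw WMI: get the volume, call the method, and interpret the result yourself
$vol = Get-WmiObject -Namespace "Root\cimv2\Security\MicrosoftVolumeEncryption" `
    -Class "Win32_EncryptableVolume" -Filter "DriveLetter = 'c:'"
$status = $vol.GetConversionStatus()
'{0}% encrypted (conversion status: {1})' -f $status.EncryptionPercentage, $status.ConversionStatus

# BitLockerSAK: one call with the interpretation built in
BitLockerSAK -GetEncryptionState
```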

Windows PowerShell cmdlets in Windows 8.1

Windows 8.1 brought a lot of new features, but one thing that had been missing for some time was official Windows PowerShell cmdlets for TPM and encryption management. Luckily, Windows 8.1 came with Windows PowerShell 4.0 and a new set of cmdlets for managing BitLocker operations.

BitLocker cmdlets

The following cmdlets are provided in Windows 8.1 for BitLocker operations:

Image of command output

TPM cmdlets

There are 11 cmdlets for the TPM operations, and they are available in a module called TrustedPlatformModule.

Image of command output
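For those already on Windows 8.1, here is a quick sketch of the cmdlet equivalents of the tasks covered in this series (run from an elevated prompt; the property selection is my own):

```powershell
# Requires Windows 8.1 / Windows PowerShell 4.0 with the BitLocker
# and TrustedPlatformModule modules present.
Import-Module BitLocker, TrustedPlatformModule

# Encryption state and key protectors of the system drive in one object
Get-BitLockerVolume -MountPoint 'C:' |
    Select-Object MountPoint, VolumeStatus, EncryptionPercentage, KeyProtector

# TPM state in one object (replaces the separate IsEnabled/IsActivated/IsOwned calls)
Get-Tpm
```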

I have updated the equivalence tables with these new cmdlets to make the information easier to find.

BitLocker equivalences

 

Operation                WMI method                                       BitLockerSAK                                         Windows 8.1 cmdlets
Get protection status    .protectionStatus + return-code conversion       BitLockerSAK                                         Get-BitLockerVolume
Get encryption state     .GetConversionStatus() + encryptionpercentage    BitLockerSAK                                         (Get-BitLockerVolume).EncryptionPercentage
Get key protector type   .GetKeyProtectorType("ID")                       BitLockerSAK                                         (Get-BitLockerVolume).KeyProtector
Get key protector ID     .GetKeyProtectors().volumekeyprotector           BitLockerSAK                                         (Get-BitLockerVolume).KeyProtector[0].KeyProtectorId
Delete key protector     .DeleteKeyProtectors()                           BitLockerSAK -DeleteKeyProtector -protectorID "ID"   Remove-BitLockerKeyProtector
Encrypt drive            Specify the protector type + .Encrypt()          BitLockerSAK -encrypt -pin "123456"                  Enable-BitLocker
Pause encryption         .PauseConversion()                               BitLockerSAK -PauseEncryption                        Suspend-BitLocker

TPM sheet

 

Operation            WMI method                    BitLockerSAK                      Windows 8.1 cmdlets
TPM Enabled          .IsEnabled().isenabled        BitLockerSAK                      Get-Tpm
TPM Activated        .IsActivated().isactivated    BitLockerSAK                      Get-Tpm
TPM Owned            .IsOwned().Isowned            BitLockerSAK                      Get-Tpm
Take TPM ownership   .ClearTpm + .TakeOwnerShip    BitLockerSAK -TakeTPMOwnership    Initialize-Tpm -AllowClear

Here is my contact information:

Website: PowerShell District
Twitter: @Stephanevg
Linked-In: Stéphane van Gulick

~Stephane

Thank you again, Stephane, for sharing your time and knowledge. This has been an awesome series, and one that is timely and important.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Pad String to Left with PowerShell


Summary: Use Windows PowerShell to pad a string to the left.

Hey, Scripting Guy! Question How can I use a specific Unicode character with Windows PowerShell to pad a string to the left so that the entire string is a certain length?

Hey, Scripting Guy! Answer Use the PadLeft method from the String class. Pass the total length of the new string as the first parameter and the pad character as the second parameter. Here is an example that uses [char]4, which the console renders as a diamond, as the pad character. The original string is 6 characters long and the new string is 15 characters long, so the padding is 9 diamonds.

PS C:\> $a = "string"

PS C:\> $a.PadLeft(15,[char]4)

♦♦♦♦♦♦♦♦♦string

PowerShell Time Sync: Get and Evaluate Synchronization State


Summary: Guest blogger, Rudolf Vesely, shows how to evaluate time synchronization.

Microsoft Scripting Guy, Ed Wilson, is here. Today is day 1 of a 3-part series by Rudolf Vesely. Here, Rudolf tells us about himself:

Photo of Rudolf Vesely

I work as a lead cloud architect in Tieto Productivity Cloud (TPC) at Tieto Corporation. I am a blogger on the Technology Stronghold blog, and I am the author of a large series of articles called How to Build Microsoft Private Cloud. I started my career in IT as a developer (assembler, object-oriented programming, C++, C#, .NET, and ASP.NET), but I moved to operations many years ago. Programming and scripting remain my hobbies, and this is probably the reason why I am a big proponent of DevOps in my company. I believe Windows PowerShell is a fantastic tool that facilitates the process of joining the dev and ops worlds together, and I am very glad that I started learning with Windows PowerShell 1.0.

Contact information:

Introducing the problem

In enterprise environments, you should never use standalone scripts for monitoring, because such scripts are themselves neither monitored nor highly available. You should also avoid situations where scripts developed by multiple people run on multiple servers in an uncontrolled way.

In enterprise environments, you need to have a unified platform and use enterprise-grade services for your automation scripts—and Microsoft has an answer for that. The answer is Service Management Automation (SMA) and System Center Operations Manager (SCOM) to monitor SMA.

SMA ensures high availability, and SCOM ensures that you are notified about failures. In some cases, you do not have to be notified about individual failures that could be caused, for example, by transient network failures; but you definitely should be notified about repetitive failures, and this is a perfect place for SCOM monitoring. SMA requires workflows, so this is another reason why you should learn about Windows PowerShell workflows.

For testing and learning purposes, I have Hyper-V clusters and a few independent Hyper-V hosts at home. Several months ago I found that I had an issue with time synchronization. The issue was not technical because the time was synchronized across my whole domain. But it was very confusing to have the wrong time on all my devices.

I decided that I needed a monitoring system that is able to autocorrect, so I wrote a Windows PowerShell module with five cmdlets. This module is able to do these tasks and much more. I chose to use workflows over advanced functions because I wanted to be able to monitor all servers in parallel and to run monitoring tasks as an SMA runbook.

There are two ways to get my free code:

The module contains the following cmdlets (workflows):

  • Get-VDateTimeInternetUtc
  • Get-VSystemTimeSynchronization
  • Start-VSystemTimeSynchronization
  • Wait-VSystemTimeSynchronization
  • Test-VSystemTimeSynchronization

Wait-VSystemTimeSynchronization depends on Get-VSystemTimeSynchronization and Start-VSystemTimeSynchronization, and Test-VSystemTimeSynchronization depends on all the other workflows.

Getting started

Let’s begin…

Most of the logic is done in Get-VSystemTimeSynchronization, which gathers information about the current state of time synchronization from the local computer or from multiple servers in parallel. I always initialize my functions with:

$ErrorActionPreference = 'Stop'

if ($PSBoundParameters['Debug']) { $DebugPreference = 'Continue' }

Set-PSDebug -Strict

Set-StrictMode -Version Latest

But in the case of workflows, I do not have so many possibilities, so I use:

$ErrorActionPreference = 'Stop'

$ProgressPreference = 'SilentlyContinue'

Now I continue with an inline script. You should avoid using inline scripts if possible because code in an inline script is not a workflow. In my workflow, I decided to use inline script for two reasons:

  • I do a lot of actions that cannot be done in the workflow.
  • I wanted to use InlineScript remoting: InlineScript {} -PSComputerName.

Unfortunately, there is an issue in how inline scripts reuse remote sessions when the PSComputerName parameter is used. I observed that if I run an inline script remotely multiple times in a short period and with different parameters, the old set of parameters is used instead of the new one. In my case, this is not an issue because it is unlikely that someone would change the parameters often.

The inline script starts with PSCustomObject, and during execution, I fill its nulled properties:

$outputItem = [PsCustomObject]@{

    DateTimeUtc                                = $null

    ComputerNameNetBIOS                        = $env:COMPUTERNAME

    ComputerNameFQDN                           = $null

    # For example: @('0.pool.ntp.org', '1.pool.ntp.org', '2.pool.ntp.org', '3.pool.ntp.org')

    ConfiguredNTPServerName                    = $null

    # For example: '0.pool.ntp.org,0x1 1.pool.ntp.org,0x1 2.pool.ntp.org,0x1 3.pool.ntp.org,0x1'

    ConfiguredNTPServerNameRaw                 = $null

    # True if defined by policy

    ConfiguredNTPServerByPolicy                = $null

    SourceName                                 = $null

    SourceNameRaw                              = $null

    LastTimeSynchronizationDateTime            = $null

    LastTimeSynchronizationElapsedSeconds      = $null

    ComparisonNTPServerName                    = $(if ($Using:CompareWithNTPServerName) { $Using:CompareWithNTPServerName } else { $null })

    ComparisonNTPServerTimeDifferenceSeconds   = $null

    # Null when no source is required, True / False when it is required

    StatusRequiredSourceName                   = $null

    # Null when no type is required, True / False when it is required

    StatusRequiredSourceType                   = $null

    # True when date is not unknown, False when it is unknown

    StatusDateTime                             = $null

    # Null when maximum of seconds was not specified, True / False when it was specified

    StatusLastTimeSynchronization              = $null

    # Null when no comparison or when not connection and error should be ignored, True / False when number of seconds was obtained

    StatusComparisonNTPServer                  = $null

    Status                                     = $null

    StatusEvents                               = @()

    Error                                      = $null

    ErrorEvents                                = @()

}

W32Time service

Then I start the W32Time (Windows Time) service because the w32tm command requires it. As you can see, all parts of the code that can possibly generate an exception are enclosed in a Try/Catch block because I do not want to stop the execution of the script, and I want to have information about any exception in the ErrorEvents property of the output object.

try

{

    if ((Get-Service -Name W32Time).Status -ne 'Running')

    {

        Write-Verbose -Message '[Get] [Start service] [Verbose] Start service: W32Time (Windows Time)'

        Start-Service -Name W32Time

    }

}

catch

{

    $outputItem.ErrorEvents += ('[Get] [Start service] [Exception] {0}' -f $_.Exception.Message)

}

Now I continue with gathering data. I need the FQDN (if the server has a suffix), and I need output from w32tm /query /status, which will be processed further.

try

{

    # W32tm

    $w32tmOutput = & 'w32tm' '/query', '/status'

    # FQDN

    $ipGlobalProperties = [System.Net.NetworkInformation.IPGlobalProperties]::GetIPGlobalProperties()

    if ($ipGlobalProperties.DomainName)

    {

        $outputItem.ComputerNameFQDN = '{0}.{1}' -f

            $ipGlobalProperties.HostName, $ipGlobalProperties.DomainName

    }

    else

    {

        $outputItem.ComputerNameFQDN = $null

    }

}

catch

{

    $outputItem.ErrorEvents += ('[Get] [Gather data] [Exception] {0}' -f $_.Exception.Message)

}

The NTP servers

Then I want a list of configured NTP servers. This configuration could be set by Group Policy (which takes precedence):

HKLM\SOFTWARE\Policies\Microsoft\W32Time\Parameters

Or I can get it directly from the registry location used by the Windows Time service:

HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters

It is possible to split the list of servers into an array and remove the configuration flags:

try

{

    if (Test-Path -Path HKLM:\SOFTWARE\Policies\Microsoft\W32Time\Parameters -PathType Container)

    {

        $configuredNtpServerNameRegistryPolicy = Get-ItemProperty `

            -Path HKLM:\SOFTWARE\Policies\Microsoft\W32Time\Parameters `

            -Name 'NtpServer' -ErrorAction SilentlyContinue |

            Select-Object -ExpandProperty NtpServer

    }

    else

    {

        $configuredNtpServerNameRegistryPolicy = $null

    }

    if ($configuredNtpServerNameRegistryPolicy)

    {

        $outputItem.ConfiguredNTPServerByPolicy = $true

        # Policy override

        $outputItem.ConfiguredNTPServerNameRaw = $configuredNtpServerNameRegistryPolicy.Trim()

    }

    else

    {

        $outputItem.ConfiguredNTPServerByPolicy = $false

        # Exception if not exists

        $outputItem.ConfiguredNTPServerNameRaw = ((Get-ItemProperty `

            -Path HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\Parameters -Name 'NtpServer').NtpServer).Trim()

    }

     if ($outputItem.ConfiguredNTPServerNameRaw)

    {

        $outputItem.ConfiguredNTPServerName = $outputItem.ConfiguredNTPServerNameRaw.Split(' ') -replace ',0x.*'

    }

}

catch

{

    $outputItem.ErrorEvents += ('[Get] [Configured NTP Server] [Exception] {0}' -f $_.Exception.Message)

}
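To see what that Split/-replace pair produces, here is the transformation applied to a sample raw registry value (the same shape as the example in the object comments above):

```powershell
# Raw NtpServer registry value: space-separated servers, each with a ,0x… flag
$raw = '0.pool.ntp.org,0x1 1.pool.ntp.org,0x1 2.pool.ntp.org,0x1'

# Split into an array and strip the configuration flags
$raw.Split(' ') -replace ',0x.*'
# Returns: 0.pool.ntp.org, 1.pool.ntp.org, 2.pool.ntp.org
```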

Let's continue and get the current source that the system uses. The source is specified in the output of the w32tm /query /status command that was executed at the beginning.

$sourceNameRaw =

    $sourceNameRaw.ToString().Replace('Source:', '').Trim()

$outputItem.SourceNameRaw = $sourceNameRaw

$outputItem.SourceName = $sourceNameRaw -replace ',0x.*'

I implemented a lot of switches that you can enable and specify as the desired state. Then you do not have to spend time going through the output from hundreds of servers; you can simply check those whose output object does not have a Status property of $true.

Check that the current source is equal to one of the NTP servers specified in the -RequiredSourceName parameter:

if ($Using:RequiredSourceName -contains $outputItem.SourceName)

{

    $outputItem.StatusRequiredSourceName = $true

}

else

{

    $outputItem.StatusRequiredSourceName = $false

    $outputItem.ErrorEvents += ('[Get] [Source name] [Error] Current: {0}; Required: {1}' -f

        $outputItem.SourceName, ($Using:RequiredSourceName -join ', '))

}

Checking the source

First check that the source is not the internal clock:

if (($Using:RequiredSourceTypeConfiguredInRegistry -or $Using:RequiredSourceTypeNotLocal) -and

    ($outputItem.SourceNameRaw  -eq 'Local CMOS Clock' -or

    $outputItem.SourceNameRaw  -eq 'Free-running System Clock'))

{

    $outputItem.StatusRequiredSourceType = $false

    $outputItem.ErrorEvents += ('[Get] [Source type] [Error] Time synchronization source: Local')

}

Check that the current source is not the Hyper-V service:

if (($Using:RequiredSourceTypeConfiguredInRegistry -or $Using:RequiredSourceTypeNotByHost) -and

    $outputItem.SourceNameRaw  -eq 'VM IC Time Synchronization Provider')

{

    $outputItem.StatusRequiredSourceType = $false

    $outputItem.ErrorEvents += ('[Get] [Source type] [Error] Time synchronization source: Hyper-V')

}

Check that the current source is equal to one of the configured NTP servers. For example, when the source should be an NTP server, but time synchronization does not work, the current source is “Local CMOS Clock.”

if ($Using:RequiredSourceTypeConfiguredInRegistry -and

    $outputItem.ConfiguredNTPServerName -notcontains $outputItem.SourceName)

{

    $outputItem.StatusRequiredSourceType = $false

    $outputItem.ErrorEvents += ('[Get] [Source type] [Error] Not equal to one of the NTP servers that are defined in the Windows registry')

}

Now it is time to check when the last synchronization happened. Get the data from w32tm /query /status that was executed at the beginning:

$lastTimeSynchronizationDateTimeRaw = $w32tmOutput |

    Select-String -Pattern '^Last Successful Sync Time:'

$outputItem.StatusDateTime = $false

if ($lastTimeSynchronizationDateTimeRaw)

{

    $lastTimeSynchronizationDateTimeRaw =

        $lastTimeSynchronizationDateTimeRaw.ToString().Replace('Last Successful Sync Time:', '').Trim()

Calculate time since last sync

Now calculate and evaluate how long ago the last synchronization happened. If the last time sync is unknown, or if it happened longer ago than the specified maximum, save a record about it. The record will be included in the output object.

if ($lastTimeSynchronizationDateTimeRaw -eq 'unspecified')

{

    $outputItem.ErrorEvents += '[Last time synchronization] [Error] Date and time: Unknown'

}

else

{

    $outputItem.LastTimeSynchronizationDateTime = [DateTime]$lastTimeSynchronizationDateTimeRaw

    $outputItem.LastTimeSynchronizationElapsedSeconds = [int]((Get-Date) - $outputItem.LastTimeSynchronizationDateTime).TotalSeconds

    $outputItem.StatusDateTime = $true

    <#

    Last time synchronization: Test: Maximum number of seconds

    #>

    if ($Using:LastTimeSynchronizationMaximumNumberOfSeconds -gt 0)

    {

        if ($outputItem.LastTimeSynchronizationElapsedSeconds -eq $null -or

            $outputItem.LastTimeSynchronizationElapsedSeconds -lt 0 -or

            $outputItem.LastTimeSynchronizationElapsedSeconds -gt $Using:LastTimeSynchronizationMaximumNumberOfSeconds)

        {

            $outputItem.StatusLastTimeSynchronization = $false

            $outputItem.ErrorEvents += ('[Get] [Last time synchronization] [Error] Elapsed: {0} seconds; Defined maximum: {1} seconds' -f

                $outputItem.LastTimeSynchronizationElapsedSeconds, $Using:LastTimeSynchronizationMaximumNumberOfSeconds)

        }

        else

        {

            $outputItem.StatusLastTimeSynchronization = $true

        }

    }

}

One of the additional features is to compare time between a target computer (local or a remote server) and the specified NTP server. There is no reason to compare time between a computer and an NTP server that is used for regular time synchronization, but it is possible to use this feature to compare time, for example, against a public NTP server (a firewall opening for UDP 123 is required).

First, I use w32tm to get the exact time difference between the target server and the specified NTP server:

$w32tmOutput = & 'w32tm' '/stripchart',

    ('/computer:{0}' -f $Using:CompareWithNTPServerName),

    '/dataonly', '/samples:1' |

    Select-Object -Last 1
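The module's parsing of that last line is not shown here, so as a hedged illustration only: the final /dataonly sample line has the shape "&lt;local time&gt;, &lt;signed offset&gt;s", and the offset in seconds can be extracted like this (the sample value is made up):

```powershell
# Hypothetical last line of "w32tm /stripchart /dataonly /samples:1"
$w32tmOutput = '12:34:56, +00.0034567s'

# Take the part after the comma, drop the trailing "s", and cast to a double
$offsetSeconds = [double]($w32tmOutput.Split(',')[-1].Trim().TrimEnd('s'))
$offsetSeconds   # positive means the local clock is ahead of the NTP server
```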

Now calculate and evaluate the time duration like in the previous case:

if ($outputItem.ComparisonNTPServerTimeDifferenceSeconds -eq $null -or

    $outputItem.ComparisonNTPServerTimeDifferenceSeconds -lt ($Using:CompareWithNTPServerMaximumTimeDifferenceSeconds * -1) -or

    $outputItem.ComparisonNTPServerTimeDifferenceSeconds -gt $Using:CompareWithNTPServerMaximumTimeDifferenceSeconds)

{

    $outputItem.ErrorEvents += ('[Get] [Compare with NTP] [Error] Elapsed: {0} seconds; Defined maximum: {1} seconds' -f

        $outputItem.ComparisonNTPServerTimeDifferenceSeconds, $Using:CompareWithNTPServerMaximumTimeDifferenceSeconds)

}

else

{

    $outputItem.StatusComparisonNTPServer = $true

}

Evaluate overall status

And that is all for the inline script. Now it is time to evaluate the overall status and return the object. The overall status is $false when any error or exception occurred, when any value fell outside the specified range, or when the current state did not match the specified desired state:

if ($outputItem.ErrorEvents)

{

    Write-Warning -Message ('[Get] Results: False: {0}' -f ($outputItem.ErrorEvents -join "; "))

    $outputItem.Status  = $false

    $outputItem.Error   = $true

}

else

{

    $outputItem.Status  = $true

    $outputItem.Error   = $false

}

$outputItem.DateTimeUtc = (Get-Date).ToUniversalTime()

$outputItem

The following image shows the output from the evaluation:

Image of command output

That is all for today. Tomorrow, I will focus more on workflows, and I will explain how to do error handling in parallel operations.

~Rudolf

Thanks, Rudolf. This is great stuff. I am really looking forward to Part 2.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Find Time Service Status


Summary: Use Windows PowerShell to find the status of the time service.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find the status of the time service on my local computer?

Hey, Scripting Guy! Answer If all you are looking for is the status, use the W32tm command with the /query and /status switches:

w32tm /query /status
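If you would rather work with those fields as a Windows PowerShell object than as raw text, here is a small sketch of one way to do it (field names in the output vary by operating system version):

```powershell
# Sketch: parse the "Name: Value" lines of w32tm /query /status into a hashtable
$status = @{}
w32tm /query /status | ForEach-Object {
    # Split on the first colon only, so values that contain colons stay intact
    $name, $value = $_ -split ':', 2
    if ($value) { $status[$name.Trim()] = $value.Trim() }
}
$status['Source']
```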

PowerShell Time Sync: Error Handling in Parallel in Workflows


Summary: Guest blogger, Rudolf Vesely talks about error handling in workflows and repair actions.

Microsoft Scripting Guy, Ed Wilson, is here. Today is Part 2 of a 3-part series written by guest blogger Rudolf Vesely. Read yesterday's post, PowerShell Time Sync: Get and Evaluate Synchronization State, to catch up and to learn more about Rudolf. Take it away, Rudolf...

Yesterday, I described the Time Sync module and explained how the main inline script works. Those of you who checked my source code may have noticed that I used special error handling for the whole inline script (I do not mean the error handling inside of the inline script).

Note  The operations discussed in today's post require elevated rights; otherwise, you will get an "Access is denied" error message.

Parallel operations

When you run parallel operations, for example, by using foreach -parallel {}, or in my case, by using {} -PSComputerName $multipleServers, it is not possible to simply enclose the whole inline script in a Try/Catch block because multiple exceptions may happen at the same time. You need to catch all the exceptions in a specified variable (-PSError $myVar), and then process them.

If I run Get-VSystemTimeSynchronization against a local server or against a single remote server, I do not do the operations in parallel; and therefore, I also need to use a Try/Catch block.

Later, I want to process all ErrorRecord objects, and I do not care if the exception happened in standard or in parallel execution. This is the reason I save all errors to the same variable.

try

{

    InlineScript

    {

        # Some code

    } -PSComputerName $ComputerName `

        -PSPersist:$false `

        -PSError $inlineScriptErrorParallelItems

}

catch [System.Management.Automation.Remoting.PSRemotingTransportException]

{

    $inlineScriptErrorItems = $_

}

if ($inlineScriptErrorParallelItems) { $inlineScriptErrorItems = $inlineScriptErrorParallelItems }

Now I have all the exceptions in the same variable and I can process it. Objects from parallel execution are enclosed in another object; therefore, I need to take them out of the Exception property:

foreach ($inlineScriptErrorFullItem in $inlineScriptErrorItems)

{

    if ($inlineScriptErrorFullItem.PSObject.Properties.Name -eq 'Exception')

    {

        $inlineScriptErrorItem = $inlineScriptErrorFullItem.Exception

    }

    else

    {

        $inlineScriptErrorItem = $inlineScriptErrorFullItem

    }

Then I continue with error handling. All objects are of the same type; and therefore, they have the same properties.

Ignoring certain errors

I implemented a feature that allows you to ignore wrong computer names or computers that are not accessible. If you use it, you can, for example, choose to be notified when the time synchronization does not work (the Status property of the output object is $false), but not when the computer is inaccessible (for example, because of a network failure or during a service window).

    # Ignore defined errors

    if ($IgnoreError -contains 'WrongComputerName' -and

        ($inlineScriptErrorItem.ErrorRecord.CategoryInfo | Select-Object -First 1 -ExpandProperty Category) -eq 'ResourceUnavailable' -and

        $inlineScriptErrorItem.TransportMessage -like 'The network path was not found.*')

    {

        Write-Warning -Message 'Device does not exists (not in DNS).'

    }

    elseif ($IgnoreError -contains 'DeviceIsNotAccessible' -and

        ($inlineScriptErrorItem.ErrorRecord.CategoryInfo | Select-Object -First 1 -ExpandProperty Category) -eq 'ResourceUnavailable' -and

        $inlineScriptErrorItem.TransportMessage -like '*Verify that the specified computer name is valid, that the computer is accessible over the network*')

    {

        Write-Warning -Message 'Device exists (defined in DNS) but it is not reachable (not running, FW issue, etc.).'

    }

Logging errors

I wrote the main inline script so that any exceptions are captured and logged in the ErrorEvents property of the output object. But when there is a mistake, I want to terminate the script immediately (for example, when an unhandled exception occurs). I wrote the workflow with $ErrorActionPreference = ‘Stop’ (all exceptions are terminating), so a simple Write-Error is enough to terminate the script.

    else

    {

        # Terminating error

        Write-Error -Exception $inlineScriptErrorItem.ErrorRecord.Exception

    }

}

That is all for the most important workflow, Get-VSystemTimeSynchronization. Let's continue with Start-VSystemTimeSynchronization.

Start-VSystemTimeSynchronization is a simple workflow to invoke time synchronization. As usual, I start with ErrorActionPreference and I do not want to see the progress bars:

$ErrorActionPreference = 'Stop'

$ProgressPreference = 'SilentlyContinue' 

Start the Windows Time service

I need to start the W32Time (Windows Time) service to use the w32tm command:

if ((Get-Service -Name W32Time).Status -ne 'Running')

{

    Write-Verbose -Message '[Start] [Start service] [Verbose] Start service: W32Time (Windows Time)'

    Start-Service -Name W32Time

}

Invoke time sync

Then I can invoke time synchronization. The most common command, w32tm /resync /force, triggers immediate time synchronization, and the command w32tm /resync /rediscover rediscovers sources. Source rediscovery is important if the current source does not match the defined NTP server. This can happen, for example, when a computer cannot reach the NTP server and therefore uses only its internal clock. In that case, the current source is the internal clock, which is not the desired state.

if ($Rediscover)

{

    $w32tmOutput = InlineScript { & 'w32tm' '/resync', '/rediscover' }

}

elseif ($Force)

{

    $w32tmOutput = InlineScript { & 'w32tm' '/resync', '/force' }

}

else

{

    $w32tmOutput = InlineScript { & 'w32tm' '/resync' }

}

At the end, it is handy to check whether the command ran successfully and to return a Boolean value that reports it:

if ($w32tmOutput | Select-String -Pattern 'The command completed successfully.')

{

    Write-Debug -Message ('[Start] [Synchronization] [Debug] Command completed successfully.')

    $true

}

else

{

    Write-Warning -Message ('[Start] [Synchronization] [Error] Command did not complete successfully.')

    $false

}

Wait for success

Another workflow in the module is Wait-VSystemTimeSynchronization. This workflow basically uses the Get-VSystemTimeSynchronization and Start-VSystemTimeSynchronization workflows. It starts the time sync if needed and waits for success.

It is possible to limit the number of attempts to fix the time synchronization: 0 (the default) means infinite attempts, 1 means only one attempt, and so on.

$repetitionCountCurrent = 1

$status = $false

while (!$status -and ($RepetitionCount -eq 0 -or $repetitionCountCurrent -le $RepetitionCount))

{

    if ($RepetitionCount -gt 0)

    {

        Write-Verbose -Message ('[Wait] Repetition: {0} / {1}' -f

            $repetitionCountCurrent, $RepetitionCount)

        $repetitionCountCurrent++

    }

    else

    {

        Write-Verbose -Message ('[Wait] Repetition')

    }

The current state is obtained from all the servers in parallel:

$outputItems = Get-VSystemTimeSynchronization `

            -ComputerName $ComputerName `

            -RequiredSourceName $RequiredSourceName `

            -RequiredSourceTypeConfiguredInRegistry $RequiredSourceTypeConfiguredInRegistry `

            -RequiredSourceTypeNotLocal $RequiredSourceTypeNotLocal `

            -RequiredSourceTypeNotByHost $RequiredSourceTypeNotByHost `

            -LastTimeSynchronizationMaximumNumberOfSeconds $LastTimeSynchronizationMaximumNumberOfSeconds `

            -CompareWithNTPServerName $CompareWithNTPServerName `

            -CompareWithNTPServerMaximumTimeDifferenceSeconds $CompareWithNTPServerMaximumTimeDifferenceSeconds `

            -IgnoreError $IgnoreError `

            -Verbose:$false `

            -PSPersist:$false

Produce a report

Now let’s count the number of successful and unsuccessful results (custom objects from all servers). All unsuccessful results are divided into groups. One group is servers with the wrong source and another group is servers with other issues.

$outputOKItems = $outputItems |

    Where-Object -FilterScript { $_.Status -eq $true }

$outputWrongSourceItems = $outputItems |

    Where-Object -FilterScript { $_.StatusRequiredSourceName -eq $false -or $_.StatusRequiredSourceType -eq $false }

$outputOtherErrorItems = $outputItems |

    Where-Object -FilterScript { $_.Status -ne $true -and

    ($_.StatusRequiredSourceName -ne $false -or $_.StatusRequiredSourceType -ne $false) }

Now let’s do the corrective actions. If the server is in group with the wrong source, the corrective action is to rediscover the source. Other servers are only forced to run immediate synchronization.

if ($CorrectiveActions -and ($outputWrongSourceItems -or $outputOtherErrorItems))

{

    if ($outputWrongSourceItems)

    {

        Write-Verbose -Message ('[Wait] Correction action: Rediscover ({0}): {1}' -f

            @($outputWrongSourceItems).Count, ($outputWrongSourceItems.ComputerNameNetBIOS -join ', '))

        $null = Start-VSystemTimeSynchronization `

            -Rediscover:$true `

            -PSComputerName $outputWrongSourceItems.ComputerNameNetBIOS

    }

    if ($outputOtherErrorItems)

    {

        Write-Verbose -Message ('[Wait] Correction action: Immediate synchronization ({0}): {1}' -f

            @($outputOtherErrorItems).Count, ($outputOtherErrorItems.ComputerNameNetBIOS -join ', '))

        $null = Start-VSystemTimeSynchronization `

            -Force:$true `

            -PSComputerName $outputOtherErrorItems.ComputerNameNetBIOS

    }

}

else

{

    $status = $true

}

Now we wait a specified number of seconds before another attempt. If the current attempt is the last one, there is no reason to wait anymore.

if ($RepetitionDelaySeconds -gt 0 -and

    ($RepetitionCount -eq 0 -or $repetitionCountCurrent -le $RepetitionCount))

{

    Write-Debug -Message ('[Wait] [Debug] Delay: {0} seconds' -f

            $RepetitionDelaySeconds)

    Start-Sleep -Seconds $RepetitionDelaySeconds

}

At the end, all objects from Get-VSystemTimeSynchronization are returned, regardless of the corrective action results:

if ($outputItems)

{

    if (@($outputItems).Count -eq @($computerNameItems).Count)

    {

        Write-Verbose -Message ('[Wait] [Verbose] Finish: {0} devices' -f

            @($outputItems).Count)

    }

    else

    {

        Write-Warning -Message ('[Wait] [Error] Not all devices were queried: {0} / {1}' -f

                @($outputItems).Count, @($computerNameItems).Count)

    }

    $outputItems

}

else

{

    Write-Warning -Message ('[Wait] [Error] No data')

}
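Stripped of the time-sync specifics, the workflow follows a generic retry pattern: test, correct, wait, and repeat until success or the attempt limit is reached. A minimal non-workflow sketch of that pattern (all names here are illustrative, not part of the module):

```powershell
function Wait-ForDesiredState
{
    param
    (
        [scriptblock] $Test,                     # Returns $true when the desired state is reached
        [scriptblock] $CorrectiveAction,         # Runs after each failed test
        [int]         $RepetitionCount = 3,      # 0 means repeat forever
        [int]         $RepetitionDelaySeconds = 5
    )

    $attempt = 1

    while ($true)
    {
        if (& $Test)
        {
            return $true
        }

        & $CorrectiveAction

        # Give up when the attempt limit is reached (0 means unlimited)
        if ($RepetitionCount -gt 0 -and $attempt -ge $RepetitionCount)
        {
            return $false
        }

        $attempt++
        Start-Sleep -Seconds $RepetitionDelaySeconds
    }
}
```

The real workflow returns the rich per-server objects instead of a Boolean, but the control flow is the same.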

The output from the commands is shown in the following image:

Image of command output

That is all for today. In the next and last post, I will explain the last two workflows from the Time Sync module.

~Rudolf

Thank you, Rudolf. Please join us tomorrow for the final post in this series. 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


PowerTip: Force Time Resync with PowerShell


Summary: Use Windows PowerShell to force a time resynchronization.

Hey, Scripting Guy! Question How can I use Windows PowerShell to force a time resynchronization?

Hey, Scripting Guy! Answer Use the W32tm /resync /force command.

Note: This command requires elevated rights.
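To verify the result afterward, you can query the time service status and the configured source (also from an elevated prompt):

```powershell
w32tm /query /status

w32tm /query /source
```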

PowerShell Time Sync: Orchestrate Monitoring of Remote Servers


Summary: Guest blogger, Rudolf Vesely, discusses how to monitor remote servers.

Microsoft Scripting Guy, Ed Wilson, is here. Today is Part 3 of a 3-part series by guest blogger, Rudolf Vesely. To read the first 2 posts in this series, please see:

In previous posts, I described my Time Sync module. I explained how to handle exceptions in parallel operations and how the Start-VSystemTimeSynchronization and Wait-VSystemTimeSynchronization workflows work. Those who have followed along may realize that there are two more workflows in the module.

Get current time from the Internet     

The first workflow is very simple. It gets the current time from the Internet over HTTP. I spent some time searching, and I found that it is possible to get the number of microseconds since January 1, 1970 from nist.time.gov:

$webRequest = Invoke-WebRequest -Uri 'http://nist.time.gov/actualtime.cgi?lzbc=siqm9b'

Please realize that you cannot use Invoke-WebRequest directly in SMA in Microsoft Azure or the Server Core installation option. The reason is that these servers do not have the Internet Explorer engine; and therefore, you need to simplify the web request and use basic parsing:

$webRequest = Invoke-WebRequest -UseBasicParsing -Uri 'http://nist.time.gov/actualtime.cgi?lzbc=siqm9b'

The easiest option for converting the returned value is to use a regular expression:

$milliseconds = [int64](($webRequest.Content -replace '.*time="|" delay=".*') / 1000)

Now I instantiate the object by using one of the constructors and then add milliseconds. The problem is that it cannot be done in a workflow, so I am forced to do it in an inline script. As I wrote in the first post, you should always try to avoid inline scripts, but now it is needed.

InlineScript

{

    (New-Object -TypeName DateTime -ArgumentList (1970, 1, 1)).AddMilliseconds($Using:milliseconds)

}
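Outside a workflow, where a constructor call is not a problem, the whole lookup fits into one small function. This is only a sketch that assumes the nist.time.gov response format described above:

```powershell
function Get-InternetDateTimeUtc
{
    # Basic parsing so that the function also works without the Internet Explorer engine
    $webRequest = Invoke-WebRequest -UseBasicParsing -Uri 'http://nist.time.gov/actualtime.cgi?lzbc=siqm9b'

    # The time="" attribute holds microseconds since 1970; convert to milliseconds
    $milliseconds = [int64](($webRequest.Content -replace '.*time="|" delay=".*') / 1000)

    # Unix epoch plus the reported offset gives the current UTC time
    (New-Object -TypeName DateTime -ArgumentList (1970, 1, 1)).AddMilliseconds($milliseconds)
}
```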

Let’s describe the last Test-VSystemTimeSynchronization workflow. This is the final workflow and it uses all the other workflows from the module. The workflow orchestrates all the testing, reporting, and monitoring actions from a single point (SMA server, management server), and it checks or corrects time synchronization on a large number of remote servers. Technically, it is not a problem to use Test-VSystemTimeSynchronization against a local server, but there is no reason to do that.

First, let me describe one additional feature that I implemented and that uses a previous workflow to get the current time using HTTP.

In Get-VSystemTimeSynchronization, you can compare time between a target server and any other specified NTP server. This could be handy as a second test because you can compare time, for example, against a public NTP server while you use an internal NTP server for regular time synchronization. If you run Get-VSystemTimeSynchronization against a remote server, the comparison is done between two remote servers (target and NTP) via NTP protocol (UDP 123). The problem is that in most production environments, servers cannot access the Internet.

For these reasons, I implemented another check that is done via Test-VSystemTimeSynchronization. The Test-VSystemTimeSynchronization workflow compares the time that is obtained from Get-VSystemTimeSynchronization with the current time on the nist.time.gov website. If you run tests by using Test-VSystemTimeSynchronization from a management server (with HTTP Internet access) against remote servers, the remote servers do not have to have HTTP Internet access. Only the management server has to be able to access the Internet.

Of course, it is possible to access the Internet through a proxy server (this is very common for secured enterprise environments). The only drawback to this approach is that the check is inaccurate. That means you can choose to be notified when the time difference is larger than 10 minutes. However, you cannot measure the real time difference between the nist.time.gov website and a remote server.

Test-VSystemTimeSynchronization attempts to get time from the web, and you can specify in the –IgnoreError parameter that you do not want to be informed about errors when the website is inaccessible.

if ($CompareWithWeb)

{

    try

    {

        $dateTimeInternetUtc          = Get-VDateTimeInternetUtc -Verbose:$false

        $dateTimeInternetUtcObtained  = (Get-Date).ToUniversalTime()

    }

    catch

    {

        if ($IgnoreError -contains 'CompareWithWebNoConnection')

        {

            Write-Warning -Message '[Test] [Compare with web] [Error] Cannot obtain date and time from the internet'

        }

        else

        {

            Write-Error -ErrorRecord $_

        }

    }

}

You have probably noticed that I also saved the current time on the server. I use it later to calculate the time duration between the moments when you get data from the web and when you do the comparison. This improves the precision of the check.

Coordinate time sync

Now let's run Wait-VSystemTimeSynchronization (described in the previous post) and get data from it. I will again save the time when all the data is obtained to increase the precision of the time comparison between the servers and the nist.time.gov website.

$outputItems = Wait-VSystemTimeSynchronization `

    -ComputerName $ComputerName `

    -RequiredSourceName $RequiredSourceName `

    -RequiredSourceTypeConfiguredInRegistry $RequiredSourceTypeConfiguredInRegistry `

    -RequiredSourceTypeNotLocal $RequiredSourceTypeNotLocal `

    -RequiredSourceTypeNotByHost $RequiredSourceTypeNotByHost `

    -LastTimeSynchronizationMaximumNumberOfSeconds $LastTimeSynchronizationMaximumNumberOfSeconds `

    -CompareWithNTPServerName $CompareWithNTPServerName `

    -CompareWithNTPServerMaximumTimeDifferenceSeconds $CompareWithNTPServerMaximumTimeDifferenceSeconds `

    -CorrectiveActions $CorrectiveActions `

    -RepetitionCount $RepetitionCount `

    -RepetitionDelaySeconds $RepetitionDelaySeconds `

    -IgnoreError $IgnoreError

$outputItemsObtainedDateTimeUtc = (Get-Date).ToUniversalTime()

Now it is possible to process all the custom objects that we obtained from Wait-VSystemTimeSynchronization and that were originally generated in Get-VSystemTimeSynchronization. It is possible to process them in parallel:

foreach -parallel ($outputItem in $outputItems)

{

The first thing is to get the current ErrorEvents and StatusEvents properties of the custom objects from Get-VSystemTimeSynchronization. These properties are used as a log of warnings and errors. That means these properties can contain, for example, information about failed synchronization or that the current source is not equal to the specified one in the desired state.

$errorItems  = $outputItem.ErrorEvents

$statusItems = $outputItem.StatusEvents

The next step is to compare the time between servers and the nist.time.gov website (if it was specified by parameters). The first operation is a correction. However, even this correction does not ensure a precise comparison because I did not save when the particular custom objects were obtained from Wait-VSystemTimeSynchronization. I only saved the time when the process finished.

I believe that improving this correction is not important because this check against the nist.time.gov website cannot be technically precise. It should be used only to verify that the difference between the time on the nist.time.gov website and the server time is not too large (for example, more than 10 minutes).

If you want to improve this part of code, then go ahead.

$comparisonWebTimeDifferenceSeconds = $null

$statusComparisonWeb = $null

if ($dateTimeInternetUtc)

{

    # Correct time from the internet that was obtained a couple seconds ago

    $dateTimeInternetUtcWithCorrection = $dateTimeInternetUtc + ($outputItemsObtainedDateTimeUtc - $dateTimeInternetUtcObtained)

    $comparisonWebTimeDifferenceSeconds = [int]($dateTimeInternetUtcWithCorrection - $outputItem.DateTimeUtc).TotalSeconds

    if ($comparisonWebTimeDifferenceSeconds -eq $null -or

        $comparisonWebTimeDifferenceSeconds -lt ($CompareWithWebMaximumTimeDifferenceSeconds * -1) -or

        $comparisonWebTimeDifferenceSeconds -gt $CompareWithWebMaximumTimeDifferenceSeconds)

    {

        $statusComparisonWeb = $false

        $errorItems += ('[Test] [Compare with web] [Error] Elapsed: {0} seconds; Defined maximum: {1} seconds' -f

            $comparisonWebTimeDifferenceSeconds, $CompareWithWebMaximumTimeDifferenceSeconds)

    }

    else

    {

        $statusComparisonWeb = $true

    }

}

else

{

    if ($CompareWithWeb)

    {

        $statusItems += '[Test] [Compare with web] [Error] Cannot obtain date and time from the internet'

    }

}
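To make the correction concrete, here is a small worked example with hypothetical timestamps. The web time is shifted forward by the seconds that passed while the server data was being collected, and only then compared with the server time:

```powershell
# Hypothetical values for illustration only
$dateTimeInternetUtc            = [DateTime]'2015-05-01T12:00:00'   # Time reported by the web service
$dateTimeInternetUtcObtained    = [DateTime]'2015-05-01T12:00:00'   # Local UTC when the web reply arrived
$outputItemsObtainedDateTimeUtc = [DateTime]'2015-05-01T12:00:08'   # Local UTC when all server data was collected

# Shift the web time forward by the 8 seconds spent querying the servers
$dateTimeInternetUtcWithCorrection = $dateTimeInternetUtc + ($outputItemsObtainedDateTimeUtc - $dateTimeInternetUtcObtained)

# Compare with the time reported by one server
$serverDateTimeUtc = [DateTime]'2015-05-01T12:00:05'
$comparisonWebTimeDifferenceSeconds = [int]($dateTimeInternetUtcWithCorrection - $serverDateTimeUtc).TotalSeconds   # 3
```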

Finally, I modify the object from Wait-VSystemTimeSynchronization that was generated in Get-VSystemTimeSynchronization, and I add properties related to checking the time difference between the server and the nist.time.gov website. If I did the same modification in a standard function (not in a workflow), I would have used the following projection:

# Example:

$someObjects |

    Select-Object -Property *,

    @{ Expression = { (55 * 12345) }; Label = 'SomeProperty' },

    @{ Expression = { $var }; Label = 'SomethingElse' }

The problem is that workflows do not support this approach, so I decided to create a new custom object. As you can see in the last rows, I reevaluate the Status property because if the nist.time.gov time comparison fails, I have to change the status from $true to $false.

[PsCustomObject]@{

    DateTimeUtc = $outputItem.DateTimeUtc

    DateTimeInternetUtc = $dateTimeInternetUtcWithCorrection

    ComputerNameBasic = $outputItem.ComputerNameBasic

    ComputerNameNetBIOS = $outputItem.ComputerNameNetBIOS

    ComputerNameFQDN = $outputItem.ComputerNameFQDN

    ConfiguredNTPServerName = $outputItem.ConfiguredNTPServerName

    ConfiguredNTPServerNameRaw = $outputItem.ConfiguredNTPServerNameRaw

    ConfiguredNTPServerByPolicy = $outputItem.ConfiguredNTPServerByPolicy

    SourceName = $outputItem.SourceName

    SourceNameRaw = $outputItem.SourceNameRaw

    LastTimeSynchronizationDateTime = $outputItem.LastTimeSynchronizationDateTime

    LastTimeSynchronizationElapsedSeconds = $outputItem.LastTimeSynchronizationElapsedSeconds

    ComparisonNTPServerName = $outputItem.ComparisonNTPServerName

    ComparisonNTPServerTimeDifferenceSeconds = $outputItem.ComparisonNTPServerTimeDifferenceSeconds

    ComparisonWebTimeDifferenceSeconds = $comparisonWebTimeDifferenceSeconds

    StatusRequiredSourceName = $outputItem.StatusRequiredSourceName

    StatusRequiredSourceType = $outputItem.StatusRequiredSourceType

    StatusDateTime = $outputItem.StatusDateTime

    StatusLastTimeSynchronization = $outputItem.StatusLastTimeSynchronization

    StatusComparisonNTPServer = $outputItem.StatusComparisonNTPServer

    StatusComparisonWeb = $statusComparisonWeb

    Status = ![bool]$errorItems

    StatusEvents = $statusItems

    Error = [bool]$errorItems

    ErrorEvents = $errorItems

}

The output is shown here:

Image of command output

And that is finally all.

Remember that there are two ways to get my free code:

If you have any questions, please post them in the Comments section, and I will reply as soon as possible. Have a nice day.

~Rudolf

Thank you, Rudolf, for a great module and an excellent blog series. I look forward to hearing from you again in the future.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Display Current Time


Summary: Learn how to use Windows PowerShell to display the current time.

Hey, Scripting Guy! Question How can I use Windows PowerShell to display the current time on my computer?

Hey, Scripting Guy! Answer Use the Get-Date cmdlet and tell it to show only the time, for example:

Get-Date -DisplayHint Time
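If you need the time as a plain string instead, the -Format parameter works as well:

```powershell
Get-Date -Format 'HH:mm:ss'
```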

PowerShell Spotlight: Two PowerShell Events


Summary: Windows PowerShell MVP, Teresa Wilson, talks about two upcoming Windows PowerShell community events.

Microsoft Scripting Guy, Ed Wilson, is here. Today is the last Saturday of May, and therefore, it is PowerShell Spotlight day with MVP, Teresa Wilson...

Happy scripting Saturday, everyone. Today our Windows PowerShell Spotlight is on two events coming up in the next two weeks. First is the Cincinnati PowerShell User Group meeting on June 3. Ed will be the guest speaker at this event. I will be in the audience. For signup, time, and place information (and lots more), see CincyPowerShell. The main information is:

Wednesday, June 3, 6:30 P.M. at MAX Technical Training
To register, see: Garbage in, Garbage out: Data grooming with Windows PowerShell

Everyone has heard the old adage "garbage in, garbage out" when talking about databases or other online data storage and retrieval systems. But did you know that Windows PowerShell can help you with that problem? Here is a description of Ed's talk:

In this session, Microsoft Scripting Guy, Ed Wilson, talks about using Windows PowerShell to perform data grooming. He shows how to clean up names, street addresses, cities, states, and even zip codes by using basic string manipulation techniques. By focusing directly on the data transformation, he extracts principles that can be applied to any database or other data storage system. After focusing on the individual components of the process, he puts the whole thing into a single script to transform the sample data. This session is heavy with live demonstration.

Hope to see you there.

Next up is Saturday, June 13. We will be delivering the keynote and more at the IT Pro Camp in Jacksonville, FL. For more information and to register, see Jacksonville IT Pro Camp 2015. I mistakenly told several people this event was on Friday the 13th. Make sure you know that it is Saturday, June 13 from 8:00 A.M. to 5:00 P.M. (EDT).

The location is:

Keiser University Jacksonville Campus
6430 Southpoint Pkwy
Jacksonville, FL 32216

I do not know what time Ed and I have our sessions, but the keynote is listed as 8:30-9:00 A.M. Later in the day, I will be speaking about user groups and Ed will be making two presentations. In addition to his Garbage in, Garbage out: Data grooming with Windows PowerShell talk, he will be presenting Windows PowerShell 4.0 Best Practices. Here is a description of that talk:

Learn Windows PowerShell best practices as they apply to each stage of the script development lifecycle. See the differences between working interactively from the Windows PowerShell prompt, writing an inline script, adding basic functions and advanced functions, and implementing Windows PowerShell modules. What is a local best practice for Windows PowerShell development is not the same as a global best practice, and this talk covers those differences.

Maybe we will see you there! Have a great weekend.

~Teresa

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Maintain PowerShell Objects


Summary: Maintain Windows PowerShell objects in the pipeline.

Hey, Scripting Guy! Question How can I send output to the Windows PowerShell pipeline without changing the formatting or objects, and ensure that the output goes to the Windows PowerShell console if there is nothing else in the pipeline?

Hey, Scripting Guy! Answer The Write-Output cmdlet sends output to the pipeline or to the console if it is the last command in the pipeline:

Get-Process | Write-Output

Weekend Scripter: What to Include in a PowerShell Comment Block


Summary: Microsoft Scripting Guy, Ed Wilson, talks about what to include in a Windows PowerShell comment block.

Microsoft Scripting Guy, Ed Wilson, is here. Something that tends to confuse new scripters, regardless of the language, is what to include in a comment block at the beginning of a script. I have reviewed thousands of scripts for various scripters, and I have seen lots of variation. This variation ranges from nothing at all, no comments, not a thing to nearly complete books that describe everything a script should do, might do, and even ways to improve the script should it come time for revision.

In reality, your script is, well, your script. How you choose to add a comment block at the beginning of your script is, of course, your business. However, I have found a few things that I think should be in the script header comment block:

  1. The name of the script.
    Believe it or not, this is often left off a script. The reason, it seems, is that the file name of the script is the same as the name of the script, and therefore it is redundant information. OK. I can buy that. But at the same time, I like having the name of the script there. It makes it easier for me to identify the script when I am bouncing around between tabs—especially if the name is long. When the name is long, it has a tendency to become truncated on the tab.
  2. The author of the script.
    This may seem like a no-brainer, but apparently, it is not. In fact, I have seen many scripts written where no one wants to claim authorship. It is not that the script is a bad script. Rather, they do not want to support the script. It does not matter if you say the script is not supported, people will hunt you down and ask for changes, complain that it does not work in their Windows NT 3.51 environment, or that it blew up their computer's mail implementation.
    I get it...really I do. But when talking about what should be in a script block, clearly the person who wrote the script should include their name. This is really important in a corporate environment, but that is also where I seem to run across the desire to remain anonymous the most. The saying is, “Sure, I have time to write the script, I just do not have time to support it.”
  3. Author contact information.
    Of course, if you post to the Internet, you REALLY do not want to include your real email address. But at work, including your email address, and even phone number, is a good idea. It simplifies communication. Of course, this also works in conjunction with item #2.
  4. Version of the script.
    I like including a version number. This is especially true when it comes to supporting scripts. If a person says, “The script blew up,” but provides no version information, it is not all that helpful. But if they say, “I have version 1.2 and it blew up,” you can then reply, “Well, you need version 2.0 because I fixed that bug.” Assuming that there really is a version 2.0 and that you really did fix the bug.
  5. Where you got the idea.
    If you have a script that is based on a script that you found (for example, on my blog), it is a good idea to include the link to that script. Sure, it is nice to give credit, but from a practical standpoint, it is also a good idea because you will have reference information. On my blog, I include extensive discussion about the script, so information about where I got the code would also include a link to reference information about the script. On the other hand, you may have a link to a site that has lots of other scripts, and you do not want to lose that reference.

Now for things that are nice to have:

  1. The version of Windows PowerShell required.
    It is a good idea to include the version of Windows PowerShell. Of course, you can add a #Requires statement at the top of your script, but having it in the comment block is also a good idea.
  2. If elevated permissions are required.
    You can add a #Requires statement that indicates whether elevation is required, but this is also a good thing to add to the comment block.
  3. If specific modules are required.
    Same as the comments for items #1 and #2.
  4. Comments about new types of constructions.
    If you came up with a brilliant idea that you have never used before, it is a good idea to document what you are doing and why.
  5. Comments about specific cmdlets.
    Same as the previous comment.
  6. Ideas for future improvement of the script.
    If you would like to add additional things to the script, but you did not have time when you were writing it, it makes sense to add a ToDo: section.
  7. If there are known errors.
    Most of the time the script may work fine, but occasionally, a script will blow up. If this is a known issue, document it. Of course, you should handle the error, but who knows…

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

Using Azure Automation: Part 1


Summary: Honorary Scripting Guy, Sean Kearney, discusses how to get started with Windows PowerShell and Azure Automation.

Honorary Scripting Guy, Sean Kearney, is here, and for the next five days we're going to touch on Azure Automation, which is a great way to automate control of your Azure resources without Internet access.

Azure Automation is, when you get right down to it, a way to schedule Windows PowerShell workflows in the cloud.

"But wait!" you'll say. "If I want to do that, I can just as easily spin up a virtual machine in Azure and use the scheduled tasks in Windows Server 2012 R2!"

This is true. You could do that. But Azure Automation provides all of that without the need for a virtual machine. It is scalable and (as "The Doctor" would say, "Wait for it…")

...a way to store objects, such as system credentials, in a secure manner.

In the physical world, you might have seen this in action in System Center Orchestrator 2012 R2 with Service Management Automation (SMA). This is effectively SMA in Azure.

Let's start with the first bit. We need to create an instance of Azure Automation, which is yet another cloud service offering. This will only take a moment.

After logging in to the Azure portal, click Azure Automation in the left pane, and choose +NEW to create a fresh instance of Azure Automation.

Image of menu

At this point you can choose Quick Create, which will allow you to spin up an Azure Automation instance with a blank slate and no predesigned jobs.

Image of menu

If you're curious, clicking From Gallery will show you a plethora of templates with predesigned jobs for use in Azure as shown here:

Image of menu

For now, we're going to build a blank Runbook and a new Azure Automation account at the same time. The Azure Automation account holds variables and schedules that are unique to a set of Runbooks. (You'll hear Runbook, but really think "Windows PowerShell workflows hosted in Azure.")

We need to supply some information to get this account created:

  • The name we're going to give our Runbook (Windows PowerShell workflow)
  • A description of what it's going to do (in the event that our name was written in Klingon or made no sense)
  • The Automation account (we're choosing to create a new one)
  • The name we're assigning to that Automation account (a sensible name)
  • Our subscription in Azure
  • Worldwide region to host it (this should be in the same region as the machines you're going to interact with to minimize any chance of latency)

In our case, from the following example, we're going to create a Runbook called HSG-ShutdownAzureVM, which could do ANYTHING, but you can tell that our intent is to have Azure shut down some virtual machines.

Our new Azure Automation account is called HSG-AzureAutomation. We'll click Create, and wait a few minutes as the new instance is automatically created.

Image of menu

When it's done, you'll see a new entry in Azure Automation called HSG-AzureAutomation.

Image of menu

Clicking this instance will allow us to work within it. We will click Runbooks to begin editing our Runbook/workflow for shutting down virtual machines in Azure.

Image of menu

The following image shows our Runbook called HSG-ShutdownAzureVM:

Image of menu

If we click it, we can see that a basic Windows PowerShell workflow named HSG-ShutdownAzureVM sits before us ready to be created as a draft. Why a draft? Within Azure Automation, you have the ability to have a draft copy of a production script that is being edited and tested before flipping it into production. Nice, eh?

Image of menu 

So we now have the most basic building blocks for some Azure Automation—an Azure Automation account and a Runbook.

By the way, do you want to know the fastest way to do all this? Two little Windows PowerShell cmdlets from the Azure module are all that is needed: one to create the Azure Automation account and one to create the Runbook! So this same process done in Windows PowerShell would have looked like this:

$AutomationAccountName='HSG-AzureAutomation'

$NewRunbookName='HSG-ShutdownAzureVM'

$NewRunbookDescription='ShutdownAzureVM with Azure Automation'

$Location='East US'

New-AzureAutomationAccount -name $AutomationAccountName -location $Location

New-AzureAutomationRunbook –Name $NewRunbookName -Description $NewRunbookDescription –AutomationAccountName $AutomationAccountName
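To confirm that both objects exist, the matching Get-* cmdlets from the same module can be used (this assumes the variable names from the previous lines):

```powershell
Get-AzureAutomationAccount

Get-AzureAutomationRunbook -AutomationAccountName $AutomationAccountName -Name $NewRunbookName
```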

Pop by tomorrow and we'll get into some actual scripting with Azure Automation!

I invite you to follow The Scripting Guys on Twitter and Facebook. If you have any questions, send an email to The Scripting Guys at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, remember to eat your cmdlets each and every day with a dash of creativity.

Sean Kearney, Windows PowerShell MVP and Honorary Scripting Guy 


PowerTip: Show Available Azure Automation Accounts


Summary: Use Windows PowerShell to get a list of available Azure Automation accounts.

Hey, Scripting Guy! Question How can I use Windows PowerShell to see my Azure Automation accounts?

Hey, Scripting Guy! Answer You can get this information with one simple cmdlet:

Get-AzureAutomationAccount

Using Azure Automation: Part 2


Summary: Honorary Scripting Guy, Sean Kearney, discusses how to get started and test a basic runbook with Azure Automation.

Honorary Scripting Guy, Sean Kearney, is here. Today I'm going to show you the basics you're going to need to make a useful runbook (or Windows PowerShell workflow, if that makes it easier to remember) in Microsoft Azure. This is the second post in a five-part series. Be sure to read Using Azure Automation: Part 1 first.

I've started a very basic script block to get a list of virtual machines in Azure and stop them all. Nothing fancy.

Image of script

If I want to, I can click the Test button to see how well I did or did not fare with my simple typing. This will attempt to parse my script and make it do something.

Image of menu

I wait happily thinking to myself, "Ah ha! I am so smart! Got the work done the first time! Didn't have anybody proof it and…"

Apparently, I am not so smart because this message returns:

Image of message

The test system in Azure Automation caught something I messed up. Clicking the Details button reveals why I should have checked my glucose this morning!

Image of menu

Well, that was nice! Microsoft Azure caught where I goofed before I went to production (remember, this runbook is all in draft mode right now). I scan my script and add the missing piece to correct (at least) my lapse in logic and my common sense.

Image of script

Sure enough. I forgot to put a semicolon in my Foreach loop!

But anybody who knows Azure knows this workflow is doomed to fail regardless. "Why?" you ask? Did anybody see me authenticate to Azure?

No? Oops!

Which gets us to the part where Azure Automation starts to rock. It's not just about running a hosted Windows PowerShell workflow. It's about doing it in a secure manner, including storing credentials!

Let's look at a basic script that has passwords in clear text doing pretty much the same thing (which is shutting down some virtual machines in Azure):

Import-Module Azure

$pass = ConvertTo-SecureString 'Sup3rR@dk001' -AsPlainText -Force

$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList 'AdminAccount@contoso.onmicrosoft.com', $pass

Add-AzureAccount -Credential $cred

$VMList = Get-AzureVM

foreach ($VM in $VMList)

{

    $Result = Stop-AzureVM -Name $VM.Name -ServiceName $VM.ServiceName -Force

}

We could probably use this script as it stands if we pasted it into the Azure Automation runbook.

Image of script

But wouldn't it be nice (oh so very nice) if we could store this securely?

Cue drum roll, lights, and shiny objects! Enter the POWER of Azure Automation assets! Assets are where you can place objects that can be centrally accessed by Windows PowerShell workflows within your Azure Automation account.

You can store simple stuff such as a URL for the company website or secure things—things you should not leave on a sticky note, things you shouldn't post on a whiteboard when a news reporter interviews you…

Yes! Passwords, of course!

To access the assets, go back to your Azure Automation account and select the Assets option.

Image of menu

Within this page, you can simply go to the bottom and choose Add Setting to add an asset to the list.

Image of menu

There are four types of assets you can add: a simple variable, a schedule, an Azure connection, or the one we're most interested in, a credential.

Image of menu

You'll get an option to provide one of two credential types: a standard Windows PowerShell credential or a certificate. We're going to populate a Windows PowerShell credential.

We give it a descriptive name and a very descriptive description (yes, I'm allowed to have fun here):

Image of text boxes

We now enter in our credentials, the UserID, and the password, which…

Oh my! You can't see it now can you?

Image of text boxes

I click the check mark and my asset is created!
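Incidentally, you don't have to click through the portal to create a credential asset. As a rough sketch, if you have the Azure PowerShell module loaded and are connected to your subscription, something like the following should accomplish the same thing (the account name HSG-AzureAutomation and the asset name AzureManagementCredentials are simply the examples used in this series):

```powershell
# Build the PSCredential object locally. Get-Credential prompts for the
# password, so nothing sits in clear text in the script.
$cred = Get-Credential -UserName 'AdminAccount@contoso.onmicrosoft.com' -Message 'Azure management account'

# Create the credential asset in the Azure Automation account.
New-AzureAutomationCredential -AutomationAccountName 'HSG-AzureAutomation' `
    -Name 'AzureManagementCredentials' `
    -Description 'Credentials to manage my Azure subscription' `
    -Value $cred
```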

Swing by tomorrow and I'll show you the next part, which is how to get that runbook to access those assets.

I invite you to follow The Scripting Guys on Twitter and Facebook. If you have any questions, send an email to The Scripting Guys at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, remember: eat your cmdlets every day with a taste of creativity.

Sean Kearney, Windows PowerShell MVP and Honorary Scripting Guy 

PowerTip: Show Available Credentials for Azure Automation


Summary: Use Windows PowerShell to get a list of available assets of the Credential type in Azure Automation.

Hey, Scripting Guy! Question How can I use Windows PowerShell to quickly audit an Azure Automation instance for any credentials that might be there?

Hey, Scripting Guy! Answer Use the Get-AzureAutomationCredential cmdlet and provide the name of the Azure Automation account, for example:

Get-AzureAutomationCredential -AutomationAccountName 'HSG-AzureAutomation'

Using Azure Automation: Part 3


Summary: Learn how to access Azure Automation assets within a runbook.

Honorary Scripting Guy, Sean Kearney, is here today to battle robots, aliens, and…

…No, wait, sorry. My brain lapsed from watching an old rerun of Lost in Space.

This is the third post in a five-part series. To catch up, read the previous posts in the series.

Today I will continue showing you how to get up and running with Azure Automation with a simple project: a runbook that shuts down virtual machines in Azure.

Our previous runbook with the fully exposed credentials and a clear-text password looked like this (I sense far too many security specialists shaking their heads at this):

Image of script

We can mitigate all of this silliness now by inserting the asset from the Azure Automation that we created yesterday.

This is done by choosing the place in the runbook where we would like the asset, clicking Setting, and choosing Insert to select an asset. In the following example, I created a new line for assigning the $cred object that will be inserted in the code from Azure Automation:

Image of script

When I click Insert, a wizard appears, which allows us to choose the asset. It will also automatically build the needed cmdlet to access the asset.

In the following example, we are going to access our newly created object called AzureManagementCredentials. (Aren't you glad we gave it a useful name instead of something like TribbleFodder?)

Image of menu

When we return to the runbook, we'll see a new line of code appended to the $Cred= portion:

Image of script

We now delete the two lines of code in the previous runbook that used the UserID and Password to get this result.

Image of script
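Put together, the finished runbook ends up looking something like the following sketch. The workflow name Stop-AllAzureVMs is just an illustration, and the asset name AzureManagementCredentials matches the one created yesterday; inside a runbook, the Get-AutomationPSCredential activity is what pulls the stored asset out for you:

```powershell
workflow Stop-AllAzureVMs
{
    # Pull the stored credential asset instead of embedding a password.
    $Cred = Get-AutomationPSCredential -Name 'AzureManagementCredentials'

    # Authenticate to the subscription with the retrieved credential.
    Add-AzureAccount -Credential $Cred

    # Shut down every virtual machine in the subscription.
    $VMList = Get-AzureVM
    foreach ($VM in $VMList)
    {
        Stop-AzureVM -Name $VM.Name -ServiceName $VM.ServiceName -Force
    }
}
```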

At this point, we click the Test button to confirm that everything is working properly in our script.

It seems to work, but we have nothing to tell us what, why, or how something happened. Come back tomorrow, and I'll show you some simple things to finish up this runbook and promote it to production!

I invite you to follow The Scripting Guys on Twitter and Facebook. If you have any questions, send an email to The Scripting Guys at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, remember: eat your cmdlets every day with a taste of creativity.

Sean Kearney, Windows PowerShell MVP and Honorary Scripting Guy

PowerTip: Show Runbooks in Azure Automation Instance


Summary: Use Windows PowerShell to show available runbooks in an Azure Automation instance.

Hey, Scripting Guy! Question How can I use Windows PowerShell to show which runbooks are available in an Azure Automation instance?

Hey, Scripting Guy! Answer Use the Get-AzureAutomationRunbook cmdlet from the Azure module. For example, to get the runbooks in the 'HSG-AzureAutomation' instance, type:

Get-AzureAutomationRunbook -AutomationAccountName 'HSG-AzureAutomation'
