
Number 1 Reason to Upgrade to PowerShell 4.0: Desired State Configuration


Summary: Windows PowerShell MVP, Dave Wyatt, says the number 1 reason to upgrade to Windows PowerShell 4.0 is Desired State Configuration.

Microsoft Scripting Guy, Ed Wilson, is here. Today, Dave Wyatt is back to share his ideas about upgrading to Windows PowerShell 4.0.

Dave is a Microsoft MVP (Windows PowerShell) and a member of the board of directors at PowerShell.org. He works in IT, mainly focusing on automation, configuration management, and production support of Windows Server and point-of-sale systems for a large retail company.

Blog: Dave Wyatt's Blog
Twitter: @MSH_Dave

When I got an email from the Scripting Guy asking for ideas for a series of posts about “Why upgrade to PowerShell 4.0?”, I replied that they should be called DSC, DSC, DSC, DSC, and DSC. As you have seen in the other posts this week, there are some neat new features in Windows PowerShell 4.0, but in my opinion, they’re small potatoes compared to the introduction of the Desired State Configuration feature.

At its core, Desired State Configuration is a configuration management engine. You can think of it like Group Policy on steroids. Instead of being limited to configuring registry values and a few other simple parts of Windows, a DSC configuration can quite literally do anything.

DSC embraces the DevOps staples of automation and “infrastructure as code.” By defining your server configurations in DSC, you can reinstall the operating system on a server, or you can duplicate its configuration to a new server or virtual machine and have the proper configuration applied with very little manual intervention, if any.

From a pure Windows point-of-view, DSC consists of the following parts:

  • Local Configuration Manager: An agent that runs on all Windows computers that have Windows PowerShell 4.0.
  • DSC resources: Small Windows PowerShell modules designed to configure a specific type of item on a computer.
  • MOF documents: Standards-based text files that are read by the Local Configuration Manager. They tell it which resources to execute and what parameters to pass to these resources.
  • Configuration scripts: Scripts written in Windows PowerShell that are used to produce the MOF documents.
  • Configuration data: A set of parameters passed to the configuration script.
  • Pull Server and Compliance Server: OData-based web endpoints that can serve resources and MOF documents to the Local Configuration Manager and can keep track of the compliance status.
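
To get a feel for the DSC resources mentioned in this list, you can query a computer running Windows PowerShell 4.0 directly (output varies by system; the File resource shown here is one of the built-in resources):

Get-DscResource
Get-DscResource -Name File | Select-Object -ExpandProperty Properties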

The exciting thing about Desired State Configuration is that none of these items has to be tied to the Windows operating system and Windows PowerShell. Because of that MOF document layer, you can use DSC in a number of ways, for example:

  • You could use the features listed previously as-is, and manage Windows computers from Windows computers.
  • You could use an existing configuration management toolset, such as Chef, in conjunction with DSC, where Chef would be responsible for producing the MOF documents that are sent to Windows endpoints (essentially, managing computers running Windows from Linux computers).
  • You could write configuration scripts in Windows and send them to Linux computers, thanks to the Open Management Infrastructure (OMI), which includes a Linux version of the Local Configuration Manager. In this situation, you would have Linux-specific resources that are not Windows PowerShell modules, but rather, scripts or executables that run on Linux.

This is not the Microsoft I grew up with! Instead of making some proprietary piece of software that can only be used in basically one way, this architecture can play well with any non-Microsoft software vendor who wants to make use of it.

I’ve heard it said that DSC is an API that is intended to be used by other developers to produce a complete configuration management solution, and I think that’s a great description. The first version of it (as released with Windows PowerShell 4.0) can be a little bit rough around the edges, but Microsoft is putting a huge amount of effort into this feature, and it’s getting better every day.

There’s really no limit to the configuration management you can do with DSC, but let’s take a look at a quick and simple example. In our dummy configuration, we’ll create a .ini file and specify the contents that should be in that file. The configuration script looks like this:

Image of command output
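
The screenshot is not reproduced here, but a minimal sketch of such a configuration, assuming the built-in File resource and illustrative paths, contents, and server name, could look like this:

Configuration SampleIniFile
{
    # 'TestServer01' is a hypothetical node name standing in for the test server
    Node 'TestServer01'
    {
        # Ensure that C:\Temp\Sample.ini exists with the specified contents
        File SampleIni
        {
            DestinationPath = 'C:\Temp\Sample.ini'
            Contents        = "[Settings]`r`nLogLevel=Verbose"
            Ensure          = 'Present'
        }
    }
}

# Running the configuration writes TestServer01.mof to the .\SampleIniFile folder
SampleIniFile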

Now, let’s take that .mof file and apply it to our test server:

Image of command output
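
The command itself, assuming the MOF document generated above and the same hypothetical server name, would be along these lines:

Start-DscConfiguration -Path .\SampleIniFile -ComputerName TestServer01 -Wait -Verbose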

As you can see from the screenshot, the Sample.ini file (and the entire C:\Temp folder, for that matter) did not exist prior to applying the configuration. After the configuration was applied and the file created, I manually deleted the file. The next time the Local Configuration Manager checked the machine’s state, it saw that the file needed to be re-created with the specified contents.

~Dave

Thank you, Dave, for your willingness to share your time and knowledge.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


PowerTip: Use PowerShell to Keep Servers Correctly Configured


Summary: How do I keep my servers configured correctly?

Hey, Scripting Guy! Question I can write Windows PowerShell scripts to configure my machines, but how do I enforce these configurations after the script has finished?

Hey, Scripting Guy! Answer This situation is called "configuration drift." You’ve set up your server properly, and then over time, little changes are made that lead to an unknown production state. To manage configuration drift, use Desired State Configuration (DSC).

DSC allows you to write your desired configuration in an easy-to-understand form, and then DSC takes care of periodically checking the state of your server to see if it matches the desired state. If it doesn’t, you can configure DSC to automatically fix the issues, or to simply report that something has changed. A short sketch of that setting follows.
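
This is a rough sketch only (the values are illustrative, and the commands must run in an elevated Windows PowerShell session); the Local Configuration Manager is configured through a meta-configuration:

Configuration LcmSettings
{
    Node 'localhost'
    {
        # ApplyAndAutoCorrect re-applies the configuration when drift is detected;
        # ApplyAndMonitor only reports that something has changed
        LocalConfigurationManager
        {
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }
    }
}

LcmSettings
Set-DscLocalConfigurationManager -Path .\LcmSettings -Verbose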

Note  Today’s PowerTip is provided by Windows PowerShell MVP, Dave Wyatt.

Weekend Scripter: Team Talk About Why to Upgrade to PowerShell 4.0


Summary: Windows PowerShell program manager, John Slack, talks about upgrading to Windows PowerShell 4.0.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have John Slack, a program manager on the Windows PowerShell team, as our guest blogger...

Do you use the version of Windows PowerShell that ships with your operating system? Do you wonder if you should make the jump and install Windows PowerShell 4.0 or try out Windows PowerShell 5.0 Preview?

Here’s the short answer: I think you should.

Want more of an explanation? At a high level, I see the following benefits to updating:

  • New features make your life simpler.
  • You stay ahead of the curve by learning about new features.
  • Your feedback makes Windows PowerShell better.

New features make your life simpler

Each successive version of Windows PowerShell adds features and improves existing functionality. These features are designed to help you get your job done faster and more effectively. Jumping from Windows PowerShell 2.0 to Windows PowerShell 4.0 can give you huge benefits. The list of improvements is significant, and it can’t really be condensed into a single blog post, but I’ll try to detail a few representative samples.

  • Desired State Configuration (DSC)
    If you need to configure machines running Windows in a DevOps friendly way, DSC (introduced in Windows PowerShell 4.0) is a significant step up from complicated and fragile configuration scripts. For more information, see about_DesiredStateConfiguration.
  • Windows PowerShell remoting features
    If you need to manage remote machines, the Disconnected Sessions feature (introduced in Windows PowerShell 3.0) can make your life much easier. The remote debugging features in Windows PowerShell 4.0 can also help. For more information, see about_Remote_Disconnected_Sessions.
  • Performance improvements
    The parser in Windows PowerShell 3.0 significantly improved performance.
  • Feedback
    Little bugs in Windows PowerShell may have been fixed in the latest version. If they haven’t, we need your input, so report them on the Windows PowerShell Customer Connection site.

Best of all, these improvements are additive. The scripts you wrote in Windows PowerShell 2.0 should continue to work in Windows PowerShell 4.0.

For more information about improvements, see What's New In Windows PowerShell.

Note  I’m not suggesting that you should upgrade everything without any testing. I’m simply saying that backwards compatibility is a huge priority for the team, so things should go fairly smoothly.

You stay ahead of the curve

Let’s say you can’t convince your management team that all of these improvements are worth it. I still think it’s worth your while to upgrade one or two machines to the latest version of Windows PowerShell. By doing so:

  • You can come up to speed on the new features without impacting your production environment.
  • You continue to grow your Windows PowerShell skills.
  • When you do upgrade to Windows Server 2012 R2 or Windows 10, you’ll be ready to take advantage of the new features.

Your feedback makes it better

If you do upgrade to the latest version of Windows PowerShell, you have a unique opportunity to improve it for everyone—especially if you’re using the latest WMF Preview. If you’re trying out new features and you think that they can be improved, please provide feedback via the Windows PowerShell Customer Connection site. Well thought out feedback can shape the priorities and the direction of the product.

To summarize, upgrading to the latest version of Windows PowerShell is a good idea because:

  • Newer versions have productivity-boosting features.
  • You continue to grow your Windows PowerShell skills.
  • Your feedback shapes the future direction of Windows PowerShell.

Thank you!

~John

Thanks, John, for taking the time to share with our readers.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Test Connectivity to Remote Servers without Ping


Summary: Use Windows PowerShell to test connectivity to remote servers without using Ping.

Hey, Scripting Guy! Question How can I test connectivity to remote servers to ensure my Windows PowerShell scripts will work if I have the Packet Internet Groper (PING) blocked at the firewall?

Hey, Scripting Guy! Answer Use Test-WsMan, for example:

's1','s2' | % {Test-Wsman $_}

AutomatedLab Tutorial Part 1: Introduction to AutomatedLab


Summary: Microsoft PFEs, Raimund Andree and Per Pedersen, present a multipart series about using Windows PowerShell to deploy a solution called AutomatedLab.

Microsoft Scripting Guy, Ed Wilson, is here. Today we kick off a multipart series about automated lab deployment by using Windows PowerShell. Written by two Microsoft premier field engineers, Raimund Andree and Per Pedersen, this series focuses on lots of good stuff, and it culminates with a way cool Windows PowerShell script. This is a series you do not want to miss. Today they discuss the architecture of AutomatedLab...

We all know the situation: Your company wants to upgrade some software product or wants to introduce something new, and you need to test this in a lab environment that looks somewhat similar to your production environment.

Let’s say you need to test the integration of a software product with Active Directory. No big deal. You install a handful of servers. Because you have more to do than watch the installation process, you take care of a lot of other tasks while setting up the lab. While you are taking care of these often undocumented tasks, you might miss one or two important settings for your installation—settings that you cannot change, for example, the forest functional level or domain name.

Now, you need to start from the beginning and invest even more time. Or even worse, you do not realize that something went wrong with the setup, and you end up with unreliable test results.

It is a best practice to automate these processes as much as possible. It is faster, more efficient, and (what might be even more important), more reliable. If you run a script two times, you will get the same result two times. However, if you set up a lab with five machines more than once, it is quite possible that these labs will have some differences.

The people working on AutomatedLab are consultants and field engineers who work with very different customers. On several occasions, we need to write code based on specific infrastructure requirements, or we have to troubleshoot a certain issue. Obviously having only one test environment does not work. There are too many different designs, software versions, and customer specific configurations. Hence, setting up labs to include all of these parameters can be quite time consuming.

What AutomatedLab includes

AutomatedLab is a solution to help you install a lab scenario in a very short time, with the complexity of your choice. For example, setting up a domain with a single server or client takes about 15 minutes. Setting up a big lab with more than 10 machines takes about 1.5 to 2 hours (largely depending on the speed of your disk). You can specify all the important settings, such as names, IP addresses, network configuration, operating system version, and forest functional level. You can also assign roles to machines like IIS, Exchange Server, or domain controller. Installation of any machine with any role in AutomatedLab is, of course, unattended.

To enable you to get started quickly, AutomatedLab comes with a bunch of sample installation scripts that cover very different scenarios. There are sample installation scripts for simple things like single clients or domains with only one member machine. Also, there are sample installation scripts that install Exchange Server 2013, SQL Server 2012, Orchestrator 2012, a client with Visual Studio 2013, and a web server. Imagine how long it would take to install all of this manually. AutomatedLab does it in about 2 hours on an SSD drive.

Note  The purpose of AutomatedLab is for installing lab and test environments. It is not meant for production scenarios.

How AutomatedLab works

AutomatedLab is provided as an .msi file, and you can download it from CodePlex: AutomatedLab. The file includes Windows PowerShell modules, sample scripts, documentation, and lab sources.

Windows PowerShell modules

90% of AutomatedLab is pure Windows PowerShell script. However, some code had to be written in C#. Windows PowerShell modules provide a nice and easy way to create script packages. If you need more information about how Windows PowerShell modules work, take a look at about_Modules.

AutomatedLab comes with seven Windows PowerShell modules, which will be discussed in more detail later in this post. These modules need to be in the correct path to get automatically loaded. The default location of Windows PowerShell modules that are being installed is in the WindowsPowerShell folder of the Documents folder of the Windows user who is installing AutomatedLab.

Sample scripts

There are a number of sample scripts that demonstrate the capabilities of AutomatedLab. You may have to change a sample script prior to running it to make it work on your machine. Each sample script looks for a folder called LabSources on drive E, and it tries to install the virtual machines on drive D. Please change these drives to match the drives in your computer.

Of course, you can also remove or add machines. This requires minimal knowledge of Windows PowerShell; it is merely a matter of copying and pasting the calls to Add-LabMachineDefinition and changing the parameters to the values of your choice.

Documentation

Yes, this is the explanatory documentation. By default, this is copied to the Documents folder of the Windows user who is installing AutomatedLab.

Lab sources

This is a folder hierarchy that is created by the installer to provide all the sources for installing a lab environment. The sources include:

  • ISOs

Place all ISO images referred to in the installation script in this folder. If you download images from MSDN or other sources, the names may vary. So please make sure you change the installation script accordingly.

  • PostInstallationActivities

AutomatedLab comes with some built-in, role-dependent post-installation activities. These include creating Active Directory users, installing the well-known sample databases on SQL Server, and creating trust relationships between all installed Active Directory forests. How to write your own post-installation activities will be explained later.

  • SoftwarePackages

AutomatedLab provides you with an easy way to install software on some or all lab machines. All you need to know is how to install it silently. For example, to install Notepad++ on all lab machines, you need two lines of code and a copy of the installer executable for Notepad++ in this folder.

  • Tools

AutomatedLab can copy a folder to the virtual machine while creating it. By default, AutomatedLab will install the SysInternals tools on each virtual machine by using this Tools folder. Everything you put in here is going to be copied to each virtual machine (to the folder C:\Tools of the virtual machine). The next posts will tell more about this.

Windows PowerShell modules

AutomatedLab comes with seven Windows PowerShell modules. Why do we have seven modules and not only one like most products? We have tried to separate the solution into its main building blocks. This makes the coding and troubleshooting easier.

The next post will explain how to use the cmdlets in each Windows PowerShell module. (This post gives you an overview, but it does not explain how to create your own lab.)

Let’s discover the Windows PowerShell modules one-by-one.

AutomatedLabDefinition

This module is for gathering your requirements. It contains cmdlets for defining domains, machines, roles, virtual network adapters, ISO images, and post-installation activities. AutomatedLab is based on .xml files. However, the cmdlets provided with AutomatedLabDefinition enable you to define the desired configuration by using Windows PowerShell cmdlets, so no XML knowledge is required. When the configuration is complete, you can export it to an .xml file to make it persistent.

AutomatedLab

This is the main module. It starts all the actions based on the configuration you have created (and exported to a lab .xml file) with AutomatedLabDefinition. After this export, AutomatedLab needs to import the lab .xml file. This task performs a number of validations to make sure the specifications are valid (such as checking for duplicate names or IP addresses and making sure that all specified paths are valid). If all validations are successful, the actual lab installation can be started. All the details will be discussed in the next post.

AutomatedLabUnattended

The actual installation of the operating systems is performed by using a classic unattended setup. This is based on an .xml file that contains the machine details. AutomatedLabUnattended modifies the standard .xml file and is used only internally. There is no meaningful use case outside AutomatedLab.

AutomatedLabWorker

We have tried to separate the main parts of the solution into two modules. AutomatedLab is based on .xml files, and it has connections to most of the other Windows PowerShell modules, whereas AutomatedLabWorker is more independent and handles workloads. These workloads are triggered by functions of AutomatedLab. AutomatedLabWorker does not have any relationships with the .xml files or the intelligence built around the lab.

HostsFile

AutomatedLab is heavily based on Windows PowerShell remoting (this will be explained later). Windows PowerShell remoting needs proper name resolution. Because we are creating labs, we cannot use the corporate DNS. Furthermore, we do not want the lab machines in DNS. That’s why AutomatedLab uses the good old hosts file to make sure it can contact the machines. The names of the lab machines, and thereby, the entries in the hosts file, are short names (not fully qualified domain names). This is to avoid using Kerberos protocol when authenticating to the lab machines.
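
For example, after a lab is built, the hosts file (%SystemRoot%\System32\drivers\etc\hosts) on the host might contain short-name entries along these lines (the names and addresses here are taken from the sample lab built later in this series):

192.168.81.10    S1DC1
192.168.81.20    S1Server1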

PSFileTransfer

Any time AutomatedLab copies something to a lab machine, it uses Windows PowerShell remoting, never SMB. This module is based on some functions by Lee Holmes.

PSLog

This module does verbose logging and also writes all messages into a central location.

VHDs

AutomatedLab uses differencing disks to save disk space and speed up the overall installation process. This requires AutomatedLab to create the parent disks first, one disk per operating system. So if you have a lab with 10 machines, where some machines are running Windows Server 2012 R2, some machines are running Windows Server 2008 R2, and one client is running Windows 8.1, you will have three parent disks. All virtual machines refer to one of these parent disks. When starting a lab deployment, you will see that we interact with the VHDs (that is, mount them).

Windows PowerShell remoting

AutomatedLab requires an internal or external virtual network adapter for the virtual machines to use. A private virtual network adapter is not supported because AutomatedLab on the host machine needs to be able to talk to the virtual machines.

The host creates the VHDs and the virtual machines. After the lab machines are started, they become available. And thanks to the modified hosts file, AutomatedLab can contact the virtual machines by name.

By leveraging Windows PowerShell remoting, AutomatedLab can invoke any command or script block on the lab machines. This is similar to PSExec, but more comfortable and powerful. There are lots of related resources on the Internet, but to get started, check out about_Remote and the PowerShell 2.0 remoting guide.
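
Outside of AutomatedLab, the underlying mechanism is ordinary Windows PowerShell remoting. A minimal example of invoking a script block on a lab machine by name (the machine name and credentials are illustrative) looks like this:

$cred = Get-Credential -Message 'Lab machine administrator'
Invoke-Command -ComputerName S1DC1 -Credential $cred -ScriptBlock {
    # Any command can run here; this one just checks the remoting service
    Get-Service -Name WinRM | Select-Object -Property Status, Name, DisplayName
}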

Following is an example of how a domain controller is installed:

  • AutomatedLab creates a parent disk (if this doesn’t exist already).
  • AutomatedLab creates the differential disk and the virtual machine.
  • AutomatedLab starts the virtual machine and waits until the virtual machine is reachable (ICMP and WSMan).
  • When the virtual machine is reachable, AutomatedLab (running on the host machine) sends the script block for domain controller promotion to the lab machine and invokes it.
  • AutomatedLab waits until the virtual machine has restarted.
  • AutomatedLab waits for the virtual machine to be reachable again.

That’s it. The domain controller is installed.

Some actions require a lab computer to contact another lab computer. This requires a "double-hop" authentication, which is not enabled by default for security reasons. AutomatedLab uses CredSSP to make this work. For more information about CredSSP, see PowerShell 2.0 remoting guide: Part 12 – Using CredSSP for multi-hop authentication.

AutomatedLab and Windows PowerShell remoting requirements

There are a couple of things that need to be configured on the host machine before remoting works in a lab scenario. By default, Windows PowerShell remoting uses the Kerberos protocol for authentication. The Kerberos protocol will not work between the host and the lab machines because a number of requirements are not fulfilled. That’s why AutomatedLab uses NTLM.

However, by default, Windows PowerShell remoting connects only to trusted machines. “Trusted” relates to the domain membership, and the lab machines are either standalone or members of an Active Directory domain that is not trusted by the domain of the host machine. Hence, the TrustedHosts setting has to be modified to allow the host machine to talk to the lab machines.

Before the lab installation can be started, the Set-LabHostRemoting cmdlet must be called. (This call is part of any AutomatedLab sample script.) This function enables Windows PowerShell remoting on the host computer and sets TrustedHosts to “*”. This means that we can connect (authenticate) to any computer by using Windows PowerShell remoting. Additionally, unencrypted traffic will be allowed, and CredSSP authentication is enabled so that the host computer can send the necessary credentials to the lab machines, and a lab machine can pass them on to another lab machine.
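
Roughly speaking, the host-side settings that Set-LabHostRemoting takes care of correspond to standard cmdlets like the following. This is only a sketch of the effect, not the actual function body:

# Enable remoting on the host and loosen the client-side restrictions for lab use
Enable-PSRemoting -Force
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value '*' -Force
Set-Item -Path WSMan:\localhost\Client\AllowUnencrypted -Value $true
Enable-WSManCredSSP -Role Client -DelegateComputer '*' -Force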

Checkpoints

If you want to test something in a lab because it is too dangerous to try in production, you may want to create a checkpoint first. Manually creating a checkpoint of one or two machines is not a big deal. But what if your lab contains 10 or 20 machines? Then you might want to use the functions that are provided for this: Checkpoint-LabVM and Restore-LabVMSnapshot.

These functions can create and restore checkpoints for all lab machines when you specify the All switch. If a certain checkpoint is no longer needed, you can use the function Remove-LabVMSnapshot. This is a rapid way to freeze the state of a full test lab.
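
For example (the snapshot name is arbitrary, and it is assumed here that Remove-LabVMSnapshot accepts the same All and SnapshotName parameters as the other two functions):

Checkpoint-LabVM -All -SnapshotName 'BeforeRiskyChange'
# ...run the dangerous test...
Restore-LabVMSnapshot -All -SnapshotName 'BeforeRiskyChange'
Remove-LabVMSnapshot -All -SnapshotName 'BeforeRiskyChange'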

Note  When you plan to take a snapshot and restore domain controllers, make sure you have an understanding of USN rollback and the virtualization safeguards introduced in Windows Server 2012. By default, the GenerationID feature is disabled on all domain controllers running Windows Server 2012 that are set up by AutomatedLab.

Post-installation activities and remote command execution

Post-installation activities are used to perform actions (configurations) on lab machines after the lab machines have been installed. A post-installation activity is a task that can be executed after a machine is ready and any role is installed.

A post-installation activity calls the specified script on the machine it is mapped to. You do not need to care about authentication because this is handled by AutomatedLab. If your script is based on files or needs to access an .iso file, this is automatically copied or mounted for you. The next posts will explain this in more detail.

Software packages

Another cumbersome task when setting up labs is installing software on all the machines. AutomatedLab takes over this task. All you need to provide is the name of the executable (.exe) or .msi file to install. If you use an executable, you need to provide the command-line parameter that switches it to a silent installation. The sample scripts provide a one-liner to install the software package on all lab machines.

The next part of this series goes through the installation of a simple lab environment. You will learn about the many functions AutomatedLab provides, and by the end of the post, you will have a lab installed.

~Raimund and Per

Thanks Raimund and Per. This is way cool stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Find Syntax of PowerShell Cmdlet


Summary: Easily find the syntax of a Windows PowerShell cmdlet.

Hey, Scripting Guy! Question How can I use Windows PowerShell to see the syntax of a cmdlet I want to use?

Hey, Scripting Guy! Answer Use the Get-Command cmdlet (gcm is an alias) with the -Syntax switch, for example:

gcm Checkpoint-Computer -Syntax

AutomatedLab Tutorial Part 2: Create a Simple Lab


Summary: Learn how to create a small lab environment by using AutomatedLab.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have Part 2 of the series by Microsoft PFEs, Raimund Andree and Per Pedersen. Today they talk about installing the Windows PowerShell modules to use with AutomatedLab, and the steps required to create a simple lab. If you follow along, you will have AutomatedLab installed on your Hyper-V host and a lab set up with two machines.

Note  Before you read this post, you might benefit from reading AutomatedLab Tutorial Part 1: Introduction to AutomatedLab.

Installation

The installation of AutomatedLab is very easy. You can find the information you need on Microsoft TechNet: AutomatedLab. The download consists of one .msi file.

After starting the installer (the .msi file), you will see three options: Typical, Custom, or Complete. If you have more than one logical disk (or even better, more than one physical disk), you can take advantage of it and choose the Custom setup.

Image of menu

In time, the Lab Sources folder can grow quite large, so you might want to choose a drive other than drive C.

Note  Feel free to change installation folders as you wish. However, to automatically import the Windows PowerShell modules that come with AutomatedLab, the modules must be installed in either the user-specific path (C:\Users\<user>\Documents\WindowsPowerShell\Modules) or the computer-specific path (C:\Windows\System32\WindowsPowerShell\v1.0\Modules).

Assuming that you did not change the default directories, after you perform the installation, you will have seven new Windows PowerShell modules in the user-specific path for Windows PowerShell modules: C:\Users\<user>\Documents\WindowsPowerShell\Modules.

In your documents folder, you will find the sample scripts and the documentation. The folder names are AutomatedLab Sample Scripts and AutomatedLab Documentation.

An important part is the folder hierarchy of Lab Sources. (The purpose of this folder was explained in AutomatedLab Tutorial Part 1: Introduction to AutomatedLab.)

Image of menu

Installation done.

Prerequisites for installing the first lab

Three requirements need to be verified before a lab can be created:

  1. AutomatedLab requires Hyper-V in Windows Server 2012 R2 or Windows 8.1.
  2. The ISO files to be used during creation of the lab need to be present in \LabSources\ISOs.
  3. Windows PowerShell remoting needs to be enabled on the host machine.

The first thing to do is to acquire ISO files for Windows Server 2012 R2 and Windows 8.1. The ISO files must provide a Standard or Datacenter version of Windows Server 2012 R2 and a Professional or Enterprise version of Windows 8.1.

If you want to know which operating system versions your ISO files contain, use the following command:

Get-LabAvailableOperatingSystems -Path <path>

Image of command output

Enable Windows PowerShell remoting on the host computer (described in the previous post). This is required, and it has to be done only once.

Open an elevated Windows PowerShell prompt and type the command Set-LabHostRemoting. The output looks like this:

VERBOSE: starting log

VERBOSE: Set-LabHostRemoting Entering... )

VERBOSE: Remoting is enabled on the host machine

WARNING: TrustedHosts does not include '*'. Replacing the current value '' with '*'

VERBOSE: Local policy 'Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Delegating Fresh Credentials' configured correctly

VERBOSE: Local policy 'Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Delegating Fresh Credentials' configured correctly

VERBOSE: Set-LabHostRemoting...leaving.

Note  Because running the command Set-LabHostRemoting sets TrustedHosts to ‘*’, your host computer will now allow credentials to be sent from your host computer to any other computer. This is a security risk that you need to be aware of. If you do not want to allow this, you will need to modify this configuration manually after running the command Set-LabHostRemoting.
To modify trusted hosts:

1. Open gpedit.msc, and go to Computer Configuration > Administrative Templates > System > Credentials Delegation.
2. Modify the following policies: Allow delegating fresh credentials and Allow delegating fresh credentials with NTLM-only server authentication.
3. Change the contents to allow credentials to be sent only to the computers of your choice. For AutomatedLab to work, you will need to enter the names of all virtual machines deployed by using AutomatedLab. A rough Windows PowerShell equivalent is sketched after this list.
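
A sketch of the equivalent client-side restriction in Windows PowerShell (the machine names are examples taken from the lab built below, and your list will differ):

Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'S1DC1,S1Server1' -Force
Enable-WSManCredSSP -Role Client -DelegateComputer 'S1DC1','S1Server1' -Force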

Create the first lab

When the prerequisites are fulfilled, a lab can be created. Creating a lab consists of several tasks, which are described in the following sections of this post.

Basic metadata information

First you need to define the metadata information regarding where to look for data and where to store data. The files that need to be read are in the LabSources directory. You need to define where to store the lab definition (to make it persist when you exit the Windows PowerShell session or restart the host computer). Also, information about where to store the virtual machines is needed.

Note  For performance reasons, it is a good idea to place the virtual machines on a separate (physical) drive from the operating system, and preferably this drive should be an SSD. AutomatedLab uses a parent disk for each operating system, and creates differencing disks for each virtual machine using the parent disks as base disks. By placing the ISO files on a separate drive from the virtual machines (where the reference disks are also stored), creating the reference disks will be faster (although this task only needs to be executed once because the parent disks are never modified after they are created).
Also, for performance reasons, it is highly recommended to exclude the folder of the ISO files and the folder for the virtual machines from being scanned by antivirus software (real-time scanning).

In the following example, Lab Sources (containing the ISO files) is placed on drive E (which is a mechanical drive), and the virtual machines will be created on drive D (which is an SSD):

$labSources = 'E:\LabSources' #here are the lab sources
$vmDrive = 'D:' #this is the drive where to create the VMs
$labName = 'FirstLab' #the name of the lab, virtual machine folder and network switch

#create the folder path for the lab using Join-Path
$labPath = Join-Path -Path $vmDrive -ChildPath $labName

#create the target directory if it does not exist
if (-not (Test-Path $labPath)) { New-Item $labPath -ItemType Directory | Out-Null }

Define the lab

A lab definition is a set of data that contains information about the lab environment, such as the lab name, network adapter, the domains, and the configuration of the machines (for example, name, number of CPUs, amount of memory, machine roles). This definition is created by calling New-LabDefinition.

The following command will create a new lab definition:

New-LabDefinition -Path $labPath -VmPath $labPath -Name $labName -ReferenceDiskSizeInGB 60

Define the network

All virtual machines in the lab need to have a network adapter. This network adapter is connected to a Hyper-V virtual switch. This Hyper-V virtual switch needs to be created for the lab. By default, this Hyper-V virtual switch will be created as an internal switch, providing the host computer network access to the virtual machines and vice versa. However, the virtual machines cannot connect to your physical network or vice versa. Here, for simplicity, the network name will be the same as the lab name:

#define the network
Add-LabVirtualNetworkDefinition -Name $labName -IpAddress 192.168.81.1 -PrefixLength 24

In this scenario, the lab is going to use IP addresses in the subnet 192.168.81.x, which has the subnet mask 255.255.255.0. Usable host addresses are 192.168.81.1 through 192.168.81.254.

Note  The IP address 192.168.81.1 will automatically be used as the IP address of the Hyper-V virtual switch that is seen from the host operating system. Therefore, this IP address cannot be used for virtual machines in the lab.

Define domains

If you want to create only standalone machines (machines in a workgroup), no domain is required. However, in this lab, there will be one domain with one domain controller and one member server. As such, a domain needs to be defined for the lab.

This domain definition will include the domain name, the administrator, and the password of the administrator. This account is used for all domain-related tasks such as the domain controller promotion, setting the DSRM/DSREPAIR password, and joining machines to the domain.

#domain definition with the domain administrator account

Add-LabDomainDefinition -Name test1.net -AdminUser administrator -AdminPassword Password1

Add ISOs to the lab

All required ISO files for software (operating systems, server applications, and client applications) to be used in the lab need to be defined for AutomatedLab to find it. If you add an operating system ISO, you need to mark it with the IsOperatingSystem switch. The Name parameter is used only to uniquely identify the ISO definition, and it is not used elsewhere. The name is your choice, but it has to be unique in the lab.

#these images are used to install the machines
Add-LabIsoImageDefinition -Name Server2012R2 -Path $labSources\ISOs\en_windows_server_2012_r2_with_update_x64_dvd_4065220.iso -IsOperatingSystem

Choose credentials to be used to connect to machines

For the initial setup of each virtual machine, a user name and password are needed for the local administrator account. This information is stored in a file named unattended.xml, which is then used to perform the initial setup of each virtual machine.

Disclaimer  Creating a PSCredential object by using a plain text password is absolutely not a best practice for any production environment. However, because this lab environment will not be a production environment, a plain text password will be used.

#these credentials are used for connecting to the machines. As this is a test lab we use clear-text passwords

$installationCredential = New-Object PSCredential('Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force))

Define roles

Using roles is the easiest way in AutomatedLab to customize a machine and set it up as required. In version 2.1.0, AutomatedLab supports the following roles:

  • RootDC: Root domain controller for a domain
  • FirstChildDC: First child domain controller for a domain
  • DC: Additional domain controller for a domain
  • DHCP: DHCP server role
  • FileServer: File Server role
  • WebServer: Web Server role (all web role services)
  • SQLServer2012: SQL Server 2012 with default instance
  • Exchange2013: Exchange Server 2013
  • Orchestrator: System Center Orchestrator 2012
  • CaRoot: Enterprise or standalone Root Certificate Authority (Windows Server 2012 or Windows Server 2012 R2)
  • CaSubordinate: Enterprise or standalone subordinate certification authority (Windows Server 2012 or Windows Server 2012 R2)
  • Office2013: Microsoft Office 2013
  • DevTools: Visual Studio 2013 and Visual Studio 2012

In this lab, only a single role will be used: the RootDC role. The lab will contain a domain, so a new Active Directory forest is required—and this, of course, requires a root domain controller. A requirement for defining a root domain controller is to provide the ForestFunctionalLevel and the DomainFunctionalLevel.

This information is specified by using a Properties parameter, which is a hash table. There is no documentation yet for this part, so this is a good opportunity to make mistakes. However, mistakes with this parameter are not critical because all the lab definitions will be validated by AutomatedLab before the installation is actually started.

In our case, the validation component makes sure that the ForestFunctionalLevel and DomainFunctionalLevel parameters are defined, and that they are not higher than the version of the operating system of the machine.

The cmdlet Get-LabMachineRoleDefinition has a role parameter and a properties parameter. Although the properties parameter is not mandatory for some roles, it is mandatory for domain controllers. Calling Get-LabMachineRoleDefinition returns a role object. This role object is later used when defining a lab machine.

$role = Get-LabMachineRoleDefinition -Role RootDC -Properties @{ DomainFunctionalLevel = 'Win2012R2'; ForestFunctionalLevel = 'Win2012R2' }

Define a lab machine

The first lab machine is defined by using the cmdlet Add-LabMachineDefinition. Here, the name of the machine is provided in addition to basic information such as number of CPUs, memory, and IP configuration.

Furthermore, it can be specified that the machine is to be domain-joined, will be in the domain test1.net, and that the machine should have the root domain controller role (defined in the previous step). This effectively results in defining that we want a root domain controller in the forest test1.net.

Next, you need to provide the installation credentials. These credentials will be required to connect to the machine later for doing administrative tasks.

The ToolsPath parameter is optional, and it points to a folder of your choice on the host. If you use this parameter, AutomatedLab will copy the contents of this folder to all of the virtual machines in the lab, prior to starting them. This is an effective way of copying the tools and utilities of your choice to all of the virtual machines in the lab.

Finally, the operating system needs to be defined. This parameter is restricted to a number of supported operating systems. IntelliSense in Windows PowerShell helps you find the desired value. Make sure to provide the value in quotation marks because the operating system names contain spaces.

Add-LabMachineDefinition -Name S1DC1 `
    -MemoryInMb 512 `
    -Network $labName `
    -IpAddress 192.168.81.10 `
    -DnsServer1 192.168.81.10 `
    -DomainName test1.net `
    -IsDomainJoined `
    -Roles $role `
    -InstallationUserCredential $installationCredential `
    -ToolsPath $labSources\Tools `
    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

The first machine is now defined.

To add a second machine to the lab, merely copy and paste the previous command. Only the name and the IP address are different. In addition, this virtual machine will not be associated with any role. If no role is defined, the virtual machine will simply be a member of the domain.

Add-LabMachineDefinition -Name S1Server1 `
    -MemoryInMb 512 `
    -Network $labName `
    -IpAddress 192.168.81.20 `
    -DnsServer1 192.168.81.10 `
    -DomainName test1.net `
    -IsDomainJoined `
    -InstallationUserCredential $installationCredential `
    -ToolsPath $labSources\Tools `
    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

Export the lab

After configuring all the lab definitions, you can export the lab configuration by using the Export-LabDefinition cmdlet. This cmdlet creates two XML files in the directory ‘D:\FirstLab’. These files contain the configuration of the lab, which makes them persistent.

Export-LabDefinition -ExportDefaultUnattendedXml -Force

Image of file names

The Force switch overwrites existing files without asking for permission.

Install the first lab

After you have configured all the lab definitions and the configuration has been saved to disk, the lab can be installed. This is done by using the Install-Lab cmdlet. By using different switches, you can activate different tasks to be performed by Install-Lab. The following list shows the switches that are currently available.

  • NetworkSwitches: Creates the virtual switch.
  • BaseImages: Creates all parent disks.
  • VMs: Creates and configures all the Hyper-V virtual machines.
  • Domains: Starts installation of all domain controllers, starting with root domain controllers, then child domain controllers and additional domain controllers.
  • DHCP: Installs the DHCP role on machines where this role is specified.
  • CA: Installs the certification authority roles if specified.
  • PostInstallations: Starts post-installation actions.
  • SQLServer2012: Installs SQL Server on machines where this role is specified.
  • Orchestrator: Installs System Center Orchestrator on machines where this role is specified.
  • WebServers: Installs the Web Server role on machines where this role is specified.
  • Exchange2013: Installs Exchange Server on machines where this role is specified.
  • DevTools: Installs DevTools (Visual Studio) on machines where this role is specified.
  • Office2013: Installs Office 2013 where this role is specified.
  • StartRemainingMachines: Starts all remaining machines (the machines that have not yet been started).

Call Install-Lab

The current lab contains only a domain controller and a member machine, so the following parameters for Install-Lab are needed:

  • NetworkSwitches
    This creates the virtual switch. By default, the virtual switch is an internal switch, so the virtual machines cannot connect to any network resources outside of the lab and the host.
  • BaseImages
    AutomatedLab creates one base image for every operating system. All virtual machines use these disks as parent disks.
  • VMs
    This creates and configures each virtual machine using differencing disks.
  • Domains
    This installs all domain controllers. In one lab, it is possible to create multiple forests, multiple child domains, or domain trees, and use both RODCs and RWDCs. This task installs all specific roles for the domain controller.
  • StartRemainingMachines
    All machines defined in the lab that have not been started will be started in this task. This task is important because starting the machines triggers the unattended installation based on the XML files that AutomatedLab created for you. This unattended.xml file contains configuration for each machine, such as computer name, IP address, locale, and domain.

In the following example, Install-Lab is called three times to separate the process. However, you can start all tasks in a single call by using all the required switch parameters.

Install-Lab -NetworkSwitches -BaseImages -VMs

#This sets up all domains / domain controllers

Install-Lab -Domains

#Start all machines which have not yet been started

Install-Lab -StartRemainingMachines

These tasks will take a considerable amount of time, depending on the speed of your disks. When the tasks are complete, the lab is ready to use. Congratulations!

Remove a lab

Removing the small lab described in this post is quite easy. However, if a lab of 10+ machines needs to be removed, the task becomes tedious. AutomatedLab also contains a cmdlet that can help clean up all the machines used in a lab.

The Remove-Lab cmdlet first removes all the virtual machines used in a lab, then the disks, and finally the network adapter. This enables you to sequentially perform lab installations first, then perform tests in the lab, remove the lab, install a new lab, perform tests, and so on.

Of course, checkpoints that enable you to revert are also available. One of the later blog posts in this series will show how you can take advantage of this functionality.

The full script

This is how the full script looks after putting all the pieces together:

$start = Get-Date

$labSources = 'E:\LabSources' #here are the lab sources
$vmDrive = 'D:' #this is the drive where to create the virtual machines
$labName = 'FirstLab' #the name of the lab, virtual machine folder and network switch

#create the folder path for the lab using Join-Path
$labPath = Join-Path -Path $vmDrive -ChildPath $labName

#create the target directory if it does not exist
if (-not (Test-Path $labPath)) { New-Item $labPath -ItemType Directory | Out-Null }

New-LabDefinition -Path $labPath -VmPath $labPath -Name $labName -ReferenceDiskSizeInGB 60

Add-LabVirtualNetworkDefinition -Name $labName -IpAddress 192.168.81.1 -PrefixLength 24

Add-LabDomainDefinition -Name test1.net -AdminUser administrator -AdminPassword Password1

Add-LabIsoImageDefinition -Name Server2012R2 -Path $labSources\ISOs\en_windows_server_2012_r2_with_update_x64_dvd_4065220.iso -IsOperatingSystem

$installationCredential = New-Object PSCredential('Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force))

$role = Get-LabMachineRoleDefinition -Role RootDC -Properties @{ DomainFunctionalLevel = 'Win2012R2'; ForestFunctionalLevel = 'Win2012R2' }

Add-LabMachineDefinition -Name S1DC1 `
    -MemoryInMb 512 `
    -Network $labName `
    -IpAddress 192.168.81.10 `
    -DnsServer1 192.168.81.10 `
    -DomainName test1.net `
    -IsDomainJoined `
    -Roles $role `
    -InstallationUserCredential $installationCredential `
    -ToolsPath $labSources\Tools `
    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

Add-LabMachineDefinition -Name S1Server1 `
    -MemoryInMb 512 `
    -Network $labName `
    -IpAddress 192.168.81.20 `
    -DnsServer1 192.168.81.10 `
    -DomainName test1.net `
    -IsDomainJoined `
    -InstallationUserCredential $installationCredential `
    -ToolsPath $labSources\Tools `
    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

Export-LabDefinition -Force -ExportDefaultUnattendedXml

Import-Lab -Path (Get-LabDefinition).LabFilePath

Install-Lab -NetworkSwitches -BaseImages -VMs

#This sets up all domains / domain controllers
Install-Lab -Domains

#Start all machines which have not yet been started
Install-Lab -StartRemainingMachines

$end = Get-Date
Write-Host "Setting up the lab took $($end - $start)"

What’s next?

The next post describes how checkpoints work in AutomatedLab and how to define your own post-installation activities to help you customize your lab even further.

~Raimund and Per

Way cool! Join us tomorrow for more goodies by Raimund and Per.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Create System Restore Point by using PowerShell


Summary: Use Windows PowerShell to quickly create a system restore point.

Hey, Scripting Guy! Question How can I use Windows PowerShell to roll back the settings if the changes I make mess up my computer?

Hey, Scripting Guy! Answer Open the Windows PowerShell console with Admin rights, and use the Checkpoint-Computer cmdlet. Specify a good description of the checkpoint and the restore point type, for example:

Checkpoint-Computer -Description 'everything fine' -RestorePointType modify_settings


AutomatedLab Tutorial Part 3: Working with Predefined Server Roles


Summary: Learn how to predefine server roles and switch the operating system to Windows 10 Technical Preview in your AutomatedLab environment.

Microsoft Scripting Guy, Ed Wilson, is here. Today brings us Part 3 in a series by Microsoft PFEs, Raimund Andree and Per Pedersen. The last blog post explained how to create a small lab environment using AutomatedLab. This post explains how to extend the lab you created by predefining server roles and switching the operating system to Windows 10 Technical Preview.

To catch up, read AutomatedLab Tutorial Part 1: Introduction to AutomatedLab and AutomatedLab Tutorial Part 2: Create a Simple Lab.

Installation

If you have a version of AutomatedLab that is earlier than AutomatedLab 2.5, please uninstall it and install the latest version. You can find what you need on Microsoft TechNet: AutomatedLab.

The installation process for AutomatedLab is explained in AutomatedLab Tutorial Part 2: Create a Simple Lab.

Prerequisites for extending the lab

Today our goal is to extend the lab we created in Part 2 of this series. The steps described in Part 2 must be completed before you attempt this process.

Adding operating systems and products requires that the respective ISO images are available on the Hyper-V host. To get a list of available operating systems, use the Get-LabAvailableOperatingSystems cmdlet. When you point it to the LabSources folder, the cmdlet will mount each ISO found in the folder hierarchy and return the available operating systems.

Image of command output

Note  The output shows that the operating system name you are going to use later is 'Windows Server vNext SERVERDATACENTER'.

The following ISO images need to be downloaded to the \LabSources\ISOs folder:

  • Windows Server Technical Preview (x64) - DVD (English)
  • Exchange Server 2013 with Service Pack 1 (x64)
  • SQL Server 2012 Standard Edition with Service Pack 1 (x64)

Later, the operating system for all the machines in the lab needs to be changed to have AutomatedLab install Windows 10 Technical Preview. 

Remove the previous lab

If the test lab that we created in the previous post still exists on the Hyper-V host, it needs to be removed. This is required because the domain controller should run on Windows 10 and the Active Directory forest needs to be re-created.

To remove the existing lab, open an elevated command prompt in Windows PowerShell and run the following command:

Remove-Lab -Path <LabPath>\Lab.xml

Warning  This command removes the virtual machines running Hyper-V, the VHDs, and the virtual network switch. There is no way to undo this change.

The Remove-Lab cmdlet is also quite helpful if you have to perform tests multiple times and you do not want to use checkpoints. Creating a lab over and over again requires calling the lab build script. Removing a large lab with many virtual machines manually can be a boring and error-prone task (for example, forgetting to remove a virtual machine or VHD). The more you use AutomatedLab, the more you will become accustomed to removing an entire lab because the rebuild takes only minutes.

Set up the new lab

To set up the new lab, first add the required ISOs to the lab. AutomatedLab needs to know about the ISO for Windows Server Technical Preview. The cmdlet Add-LabIsoImageDefinition adds the definition to the lab. It is important to flag the image as IsOperatingSystem. The name of the image does not matter, but it needs to be unique.

Add-LabIsoImageDefinition -Name Win10 -Path $labSources\ISOs\en_windows_server_technical_preview_x64_dvd_5554304.iso -IsOperatingSystem

The next ISOs that need to be added are the Exchange Server 2013 and the SQL Server 2012 images:

Add-LabIsoImageDefinition -Name Exchange2013 -Path $labSources\ISOs\mu_exchange_server_2013_with_sp1_x64_dvd_4059293.iso

Add-LabIsoImageDefinition -Name SQLServer2012 -Path $labSources\ISOs\en_sql_server_2012_standard_edition_with_sp1_x64_dvd_1228198.iso

Change the operating system

The lab that we created in Part 2 contains two machines: S1DC1 and S1Server1. We need to change the operating system of the two machines in the existing lab. This is a simple find and replace operation. Replace the string 'Windows Server 2012 R2 SERVERDATACENTER' with 'Windows Server vNext SERVERDATACENTER'.

The definition of the domain controller looks like this:

$role = Get-LabMachineRoleDefinition -Role RootDC -Properties @{ DomainFunctionalLevel = 'Win2012R2'; ForestFunctionalLevel = 'Win2012R2' }

Add-LabMachineDefinition -Name S1DC1 `
    -MemoryInMb 512 `
    -Network $labName `
    -IpAddress 192.168.81.10 `
    -DnsServer1 192.168.81.10 `
    -DomainName test1.net `
    -IsDomainJoined `
    -Roles $role `
    -InstallationUserCredential $installationCredential `
    -ToolsPath $labSources\Tools `
    -OperatingSystem 'Windows Server vNext SERVERDATACENTER'

Exchange Server role

The easiest way to configure machines in AutomatedLab is by using roles. However, assigning a role to a machine is not mandatory. You can also configure things manually. You can find a list of the roles available in AutomatedLab in AutomatedLab Tutorial Part 2: Create a Simple Lab.

To add a server running Exchange Server 2013 to the current lab, the respective role needs to be selected. Use the Get-LabMachineRoleDefinition cmdlet for that by providing the role and additional information as a hash table.

A basic Exchange Server 2013 installation needs the following additional information:

  • OrganizationName
    The name assigned to the new Exchange Server organization.
  • DependencySourceFolder
    Exchange Server 2013 has some prerequisites that need to be installed. AutomatedLab needs to know where to find these files.

If you forget to specify these values, the AutomatedLab validator creates an error, and the installation will not start.

To name the organization 'TestOrg' and point to the SoftwarePackages folder in LabSources (the folder that contains all AutomatedLab resources), use the following command:

$role = Get-LabMachineRoleDefinition -Role Exchange -Properties @{ OrganizationName = 'TestOrg'; DependencySourceFolder = "$labSources\SoftwarePackages" }

Exchange Server definition

The Exchange Server role is now stored in a variable, and the next task is to define the new machine that is going to run Exchange Server. Simply copy one of the servers to the clipboard and paste it under S1Server1.

Remember to give the new machine a new name and IP address. We also recommend that you give the Exchange Server more RAM. Finally, the previously created role needs to be assigned to this machine. Now you have the full server definition for Exchange Server 2013 running on Windows Server Technical Preview. Here is the script:

Add-LabMachineDefinition -Name S1Ex1 `

-MemoryInMb 4096 `

-Network $labName `

-IpAddress 192.168.81.21 `

-DnsServer1 192.168.81.10 `

-DomainName test1.net `

-IsDomainJoined `

-Role $role `

-InstallationUserCredential $installationCredential `

-ToolsPath $labSources\Tools `

-OperatingSystem 'Windows Server vNext SERVERDATACENTER'

SQL Server 2012 role and server definition

AutomatedLab performs a standard installation of SQL Server 2012; no customizations are supported yet. Therefore, defining the role for SQL Server 2012 is easier, because it does not take additional parameters. Defining the SQL Server machine is again a copy-and-paste job:

$role = Get-LabMachineRoleDefinition -Role SQLServer2012

Add-LabMachineDefinition -Name S1Sql1 `

-MemoryInMb 4096 `

-Network $labName `

-IpAddress 192.168.81.22 `

-DnsServer1 192.168.81.10 `

-DomainName test1.net `

-IsDomainJoined `

-Role $role `

-InstallationUserCredential $installationCredential `

-ToolsPath $labSources\Tools `

-OperatingSystem 'Windows Server vNext SERVERDATACENTER'

Note  See the predefined PostInstallationActivity for SQL Server. It installs the common sample databases, Northwind and Pubs.

Export the lab

After configuring all lab definitions, you can export the lab configuration by using the Export-LabDefinition cmdlet. This cmdlet creates two XML files in the directory D:\FirstLab. These files contain the configuration of the lab and make it persistent.

Export-LabDefinition –Force

Note  The Force switch overwrites existing files without asking for permission.

Installation process

All the lab definitions are complete. The installation of the lab is triggered with the Install-Lab cmdlet. But first, the lab that was exported to XML needs to be imported. When a lab definition is imported, it runs through validators that ensure it is consistent and can be installed. The following lines do the actual hard work.

Import-Lab -Path (Get-LabDefinition).LabFilePath

Install-Lab -NetworkSwitches -BaseImages -VMs

#This sets up all Domains / Domain Controllers

Install-Lab -Domains

#Start the SQL Server 2012 Installation

Install-Lab –SQLServer2012

#Start the Exchange 2013 Installation

Install-Lab -Exchange2013

#Start all machines which have not yet been started

Install-Lab -StartRemainingMachines

Note  For details about the installation process, refer to the previous two posts in this series.

Create checkpoints

AutomatedLab comes with an easy solution to manage checkpoints for multiple machines. There is no need to handle checkpoints by using the GUI in Hyper-V or by creating Windows PowerShell loops.

The Checkpoint-LabVM cmdlet can take checkpoints of a single machine or multiple machines (like the Hyper-V cmdlet). It can also take checkpoints of the whole lab, which is accomplished by using the switch parameter All, for example:

Checkpoint-LabVM -All -SnapshotName 'InstallationDone'
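If you want to take checkpoints of only some machines, for example right before installing Exchange Server, you can name them instead of using the All switch. The following is a minimal sketch that assumes the cmdlet accepts a ComputerName parameter for individual machines, mirroring its Hyper-V counterpart:

#Checkpoint only the domain controller and the Exchange server before a risky change (assumes a ComputerName parameter)
Checkpoint-LabVM -ComputerName S1DC1, S1Ex1 -SnapshotName 'BeforeExchangeUpdate'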

To restore from the checkpoint, simply use the following command:

Restore-LabVMSnapshot -All -SnapshotName 'InstallationDone'

You can also restore any other checkpoint on all machines by using the Restore-LabVMSnapshot cmdlet:

Restore-LabVMSnapshot -All -SnapshotName '<SomeName>'

Note  When you call Checkpoint-LabVM, all virtual machines in the lab are paused before taking the checkpoints. This makes sure that checkpoints on all machines are captured at the same time, which is important for Active Directory replication, for instance. However, there can be a gap of one or two seconds, so this feature may not be suitable for all products or scenarios.

Remove a lab

Removing the small lab described in this post is quite easy. However, if a lab of 10+ machines needs to be removed, the task becomes tedious. AutomatedLab also contains a cmdlet that can help clean up all the machines used in a lab.

The Remove-Lab cmdlet first removes all the virtual machines used in a lab, then the disks, and finally the network adapter. This enables you to sequentially perform lab installations first, then perform tests in the lab, remove the lab, install a new lab, perform tests, and so on.
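A typical test cycle therefore looks like the following sketch. The path to the lab XML file and the name of the build script are placeholders; use whatever you chose when you created the lab:

#Remove the whole lab: virtual machines, disks, and the virtual network switch
Remove-Lab -Path D:\FirstLab\Lab.xml

#Rebuild the lab by running your lab build script again (hypothetical file name and location)
& D:\LabScripts\Install-FirstLab.ps1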

Of course, checkpoints that enable you to revert are also available. One of the later blog posts in this series will show how you can take advantage of this functionality.

The full script

This is how the full script should look after putting all the pieces together.

$start = Get-Date

$labSources = 'E:\LabSources' #here are the lab sources

$vmDrive = 'D:' #this is the drive where to create the VMs

$labName = 'FirstLab' #the name of the lab, VM folder and network Switch

#create the folder path for the lab using Join-Path

$labPath = Join-Path -Path $vmDrive -ChildPath $labName

#create the target directory if it does not exist

if (-not (Test-Path $labPath)) { New-Item $labPath -ItemType Directory | Out-Null }

New-LabDefinition -Path $labPath -VmPath $labPath -Name $labName -ReferenceDiskSizeInGB 60

Add-LabVirtualNetworkDefinition -Name $labName -IpAddress 192.168.81.1 -PrefixLength 24

Add-LabDomainDefinition -Name test1.net -AdminUser administrator -AdminPassword Password1

Add-LabIsoImageDefinition -Name Win10 -Path $labSources\ISOs\en_windows_server_technical_preview_x64_dvd_5554304.iso –IsOperatingSystem

Add-LabIsoImageDefinition -Name Exchange2013 -Path $labSources\ISOs\mu_exchange_server_2013_with_sp1_x64_dvd_4059293.iso

Add-LabIsoImageDefinition -Name SQLServer2012 -Path $labSources\ISOs\en_sql_server_2012_standard_edition_with_sp1_x64_dvd_1228198.iso

$installationCredential = New-Object PSCredential('Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force))

$role = Get-LabMachineRoleDefinition -Role RootDC -Properties @{ DomainFunctionalLevel = 'Win2012R2'; ForestFunctionalLevel = 'Win2012R2' }

Add-LabMachineDefinition -Name S1DC1 `

-MemoryInMb 512 `

-Network $labName `

-IpAddress 192.168.81.10 `

-DnsServer1 192.168.81.10 `

-DomainName test1.net `

-IsDomainJoined `

-Roles $role `

-InstallationUserCredential $installationCredential `

-ToolsPath $labSources\Tools `

-OperatingSystem 'Windows Server vNext SERVERDATACENTER'

Add-LabMachineDefinition -Name S1Server1 `

-MemoryInMb 512 `

-Network $labName `

-IpAddress 192.168.81.20 `

-DnsServer1 192.168.81.10 `

-DomainName test1.net `

-IsDomainJoined `

-InstallationUserCredential $installationCredential `

-ToolsPath $labSources\Tools `

-OperatingSystem 'Windows Server vNext SERVERDATACENTER'

$role = Get-LabMachineRoleDefinition -Role Exchange -Properties @{ OrganizationName = 'TestOrg'; DependencySourceFolder = "$labSources\SoftwarePackages" }

Add-LabMachineDefinition -Name S1Ex1 `

-MemoryInMb 4096 `

-Network $labName `

-IpAddress 192.168.81.21 `

-DnsServer1 192.168.81.10 `

-DomainName test1.net `

-IsDomainJoined `

-Role $role `

-InstallationUserCredential $installationCredential `

-ToolsPath $labSources\Tools `

-OperatingSystem 'Windows Server vNext SERVERDATACENTER'

$role = Get-LabMachineRoleDefinition -Role SQLServer2012

Add-LabMachineDefinition -Name S1Sql1 `

-MemoryInMb 4096 `

-Network $labName `

-IpAddress 192.168.81.22 `

-DnsServer1 192.168.81.10 `

-DomainName test1.net `

-IsDomainJoined `

-Role $role `

-InstallationUserCredential $installationCredential `

-ToolsPath $labSources\Tools `

-OperatingSystem 'Windows Server vNext SERVERDATACENTER'

Export-LabDefinition -ExportDefaultUnattendedXml –Force

Import-Lab -Path (Get-LabDefinition).LabFilePath

Install-Lab -NetworkSwitches -BaseImages -VMs

#This sets up all Domains / Domain Controllers

Install-Lab -Domains

#Start the SQL Server 2012 Installation

Install-Lab –SQLServer2012

#Start the Exchange 2013 Installation

Install-Lab -Exchange2013

#Start all machines which have not yet been started

Install-Lab –StartRemainingMachines

$end = Get-Date

Write-Host "Setting up the lab took $($end - $start)"

What’s next?

The next post in the series describes how AutomatedLab can be used to set up a public key infrastructure (PKI).

~Raimund and Per

Thank you, Raimund and Per. Please tune in tomorrow for Part 4.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Take Snapshot of Virtual Machine


Summary: Use Windows PowerShell to take a snapshot of a virtual machine prior to making changes.

Hey, Scripting Guy! Question How can I use Windows PowerShell to take a snapshot of a virtual machine prior to making changes to it?

Hey, Scripting Guy! Answer Use the Checkpoint-VM cmdlet, for example:

Checkpoint-VM -Name DC1 -SnapshotName BeforeUpdate

AutomatedLab Tutorial Series Part 4: Install a Simple PKI Environment


Summary: Learn how to easily deploy a PKI environment by using AutomatedLab.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome back Microsoft PFEs, Raimund Andree and Per Pedersen, and their series about AutomatedLab. Read their previous posts here:

This blog post shows how to use AutomatedLab to easily deploy a public key infrastructure (PKI) environment. The PKI environment will be a single server, which is installed with the certification authority (CA) role. Subsequently, you will be able to request and issue certificates to all computers and users in your lab. Also, if you want to sign Windows PowerShell scripts in the lab, a certificate can be created for this purpose.

Installation

If you have a version of AutomatedLab that is earlier than AutomatedLab 2.5, please uninstall it and install the latest version. You can find what you need on Microsoft TechNet: AutomatedLab.

The installation process for AutomatedLab is explained in AutomatedLab Tutorial Part 2: Create a Simple Lab.

Prerequisites for AutomatedLab

AutomatedLab requires Hyper-V and Windows PowerShell 3.0 (or higher). Hence, you need one of the following operating systems on the host where you want to install the lab:

  • Windows Server 2012 R2
  • Windows Server 2012
  • Windows 8.1
  • Windows 8

     Note  Although Windows Server 2008 R2 could work, and Windows 10 hasn’t been tested, at this time, we recommend
     that you use one of the listed operating systems on the host machine.

AutomatedLab scripts need to be running directly on the host where the lab environment (the virtual machines) will be created.

Prerequisites for installing a CA in the lab

The only prerequisite for the certification authority (CA) installation is that the server needs to be running Windows Server 2012 R2 or Windows Server 2012.

There are two types of certification authorities: stand-alone and Enterprise. There are two major differences when installing an Enterprise CA:

  • It requires the server to be domain joined.
  • The account used during the installation of the role needs to have Domain Admin permissions.

In this post, we install an Enterprise CA.

Define the lab machines

For simplicity, we will use two machines. One machine will become a domain controller, and the other machine will become the certification authority server. The domain controller is defined as follows:

$role = Get-LabMachineRoleDefinition -Role RootDC `

   -Properties @{DomainFunctionalLevel = "Win2012R2"

                 ForestFunctionalLevel = "Win2012R2"}

Add-LabMachineDefinition -Name S1DC1 -MemoryInMb 512 `

    -Network $labNetworkName -IpAddress 192.168.81.10 -DnsServer1 192.168.81.10 `

    -DomainName test1.net -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

     Note  Refer to the previous posts in this series for details about defining machine roles.

The certification authority (like the domain controller) is a role in AutomatedLab, and this role needs to be specified when defining the lab machine. The role is selected by using the Get-LabMachineRoleDefinition cmdlet as follows:

$role = Get-LabMachineRoleDefinition -Role CaRoot

Next, the lab machine can be defined by using the selected role.

Add-LabMachineDefinition -Name S1CA1 `

    -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.11 `

    -DnsServer1 192.168.81.10 `

    -DomainName test1.net `

    -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

After defining the lab machines, start the installation of the lab as usual:

  1. Export the lab definition.
  2. Import it (which also validates the configuration and reports any errors).
  3. Start the installation, which will create the virtual network, create base images, create virtual machines running Hyper-V, and install the roles found in the lab.

The script looks like this:

Export-LabDefinition -Force -ExportDefaultUnattendedXml

Import-Lab -Path (Get-LabDefinition).LabFilePath

Install-Lab -NetworkSwitches -BaseImages -VMs

Install-Lab -Domains

At this point, the domain controller is installed and ready. Now the installation of the certification authority needs to be started. This is done like this:

Install-Lab -CA

Notice that you do not need to instruct the Install-Lab cmdlet about how to install the CA and how it should be configured. This is done automatically.

The lab is ready with a domain controller (hosting a domain, of course) and with an Enterprise CA. You can now request and issue certificates from this CA for use in your lab.

Customize the configuration of the CA

The first installation was very easy because the entire configuration is automatic when you call Install-Lab -CA. Now let’s try installing a PKI environment where we define some of the CA configuration. Even though the default installation will work in the majority of situations for a test lab, it can be necessary to specify certain parts of the configuration for the PKI environment.

First the current lab needs to be removed:

Remove-Lab -Path <path to the lab.xml file>

The Remove-Lab cmdlet turns off and removes the virtual machines, the disks, and the network adapter.

The domain and the domain controller need to be defined as they were previously:

$role = Get-LabMachineRoleDefinition -Role RootDC `

-Properties @{DomainFunctionalLevel = "Win2012R2"

              ForestFunctionalLevel = "Win2012R2"}

Add-LabMachineDefinition -Name S1DC1 `

    -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.10 `

    -DnsServer1 192.168.81.10 `

    -DomainName test1.net `

    -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

When you define the CA, you have the option of specifying configuration parameters. Take a look at the following:

$role = Get-LabMachineRoleDefinition `

  -Role CaRoot @{CACommonName = "MySpecialRootCA1"

                 KeyLength = "2048"

                 ValidityPeriod = "Weeks"

                 ValidityPeriodUnits = "4"}

The lab machine can be defined by using the selected role with the customized configuration. This command is the same as we used previously—only the content of the $role variable is different:

Add-LabMachineDefinition -Name S1CA1 -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.11 `

    -DnsServer1 192.168.81.10 `

    -DomainName test1.net `

    -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

                -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

As previously, perform the actual installation of the lab by using:

Export-LabDefinition -Force -ExportDefaultUnattendedXml

Import-Lab -Path (Get-LabDefinition).LabFilePath

Install-Lab -NetworkSwitches -BaseImages -VMs

Install-Lab –Domains

Install-Lab -CA

Notice that the command for installing the CA is almost the same as we used previously. The difference is that the parameters specified in the role definition are now passed to the CA installation code.

This time, the lab is ready with a domain controller (hosting a domain, of course) and with an Enterprise CA configured by using our customized parameters.

To see all the parameters that are possible to specify when you install a CA server in AutomatedLab, you can type the following:

Get-Help Install-LWLabCAServers -Parameter *

Or to see only the names of the parameters (without the detailed information), type:

(Get-Command Install-LWLabCAServers).Parameters.Keys

     Note  Changing the parameters for the CA servers requires that you understand how the corresponding CA
     configuration parameters work. Hence, we recommend customizing these parameters only if you know PKI and
     actually need the customization.

Supported PKI configurations in AutomatedLab

AutomatedLab supports one-tier and two-tier deployments for the PKI. This means that you can solely deploy a root CA, or you can deploy a root CA with a subordinate CA to this root CA.

AutomatedLab supports only PKI deployments in the same Active Directory forest. That is, deployment of a root CA in one Active Directory forest and a subordinate CA in another Active Directory forest is not supported.

Use the CA in the lab

Now that your CA server is set up, you can request a certificate for a user, request a certificate for a computer, or configure automatic enrollment.

Request a certificate for a user

To make a manual request, we can use the Web Enrollment interface on the CA server. Open a browser on the CA server, and type http://localhost/CertSrv. The following website is displayed:

Image of site

  1. Open Internet Options, and on the Security tab, click Custom level, and Enable the setting called Initialize and script ActiveX controls not marked as safe for scripting.
  2. Click Request a certificate, and then click User Certificate.

Image of webpage

3. The following confirmation message appears. Click Yes.

Image of warning

4. Click Submit, and then click Yes.

Image of webpage

5. Click Install this certificate. The following message appears:

Image of message

6. Press the Windows key + R, type certmgr.msc, and press ENTER. The Certificate Manager console appears.

7. To see the certificate listed, select Certificates - Current User\Personal\Certificates.

Image of menu

8. Double-click the certificate name.

Image of certificate

Congratulations! You now have a user certificate issued to test1\Administrator installed locally on the CA server.

Request a certificate for a computer

It is also possible to request certificates for computers. We will make a manual request by using the certificates console on the CA server.

  1. On the CA server, press Windows key + R, type mmc, and press ENTER.
  2. Click the File menu, and then click Add/Remove Snap-in.
  3. Click Certificates, and click on Add.

Image of menu

4. Select Computer account and click Next.

Image of menu

5. Click Finish. The root CA (license to be a certification authority) is present.

Image of menu

6. Under Certificates (Local Computer), double-click the Personal folder, right-click the Certificates folder, and click All Tasks, and then click Request New Certificate.

Image of menu

7. Click Next on the next two screens that appear.

8. Select the check box next to Computer, and click Enroll.

Image of menu

9. Click Details, and then click View Certificate.

Image of menu

Congratulations! Now you have issued a computer certificate for the CA server!

Image of certificate
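If you prefer to stay in Windows PowerShell, the Get-Certificate cmdlet from the built-in PKI module can submit the same kind of machine enrollment request from an elevated console. This is only a sketch; it assumes the default Computer template is available to the machine account:

#Request a certificate based on the Computer template and place it in the local machine store
Get-Certificate -Template Computer -CertStoreLocation Cert:\LocalMachine\My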

Enable automatic enrollment of certificates

Issuing the user and computer certificates was good fun. However, it will not be so much fun if you deploy a lab with 10+ computers and 50+ users and you need to enroll 60 certificates manually! Instead, you can configure automatic enrollment by following these steps.

  1. On the CA server, press Windows key + R, type certtmpl.msc, and press ENTER.
  2. The Certificate Templates Console appears. The templates listed there represent the types of certificates that the CA server can issue.

Image of menu

3. Right-click the Computer certificate template, and click Duplicate Template.

4. In the Template display name field, type Computer Auto Enroll.

Image of menu

5. Click the Security tab, and then click Domain Computers.

6. In the Permissions for Domain Computers area, select the check boxes to allow Read, Enroll, and Autoenroll.

7. Add Domain Controllers with the same permissions.

8. Perform the same procedure for the User certificate template where permissions for Domain Users and the Administrator are modified to allow Read, Enroll, and Autoenroll.

9. Additionally, when duplicating the User certificate template...

    In User Autoenroll Properties, click the Subject Name tab, and clear the following check boxes:

    • Include e-mail name in alternate subject name
    • E-mail name

Image of menu

10. On the CA server, press Windows key + R, and type certsrv.msc, and press ENTER.

Image of menu

11. Right-click Certificate Templates, click New, and click Certificate Template to Issue.

Image of menu

12. Click the newly created certificate template, Computer Auto Enroll, and then click OK.

Image of menu

Follow the same procedure for the User Auto Enroll certificate template.

Configure a Group Policy setting

The certificate templates on the CA server are in place, and they will be issued by request. To make all computers and users automatically request certificates, you configure a Group Policy setting.

  1. To open the Group Policy Management Tool, on the domain controller, press Windows key + R, type gpmc.msc, and then press ENTER.
  2. Click Forest: test1.net> Domains> test1.net.

Image of menu

3. Right-click the Default Domain Policy, and click Edit.

Image of menu

4. Click Default Domain Policy> Computer Configuration> Policies> Windows Settings> Security Settings> Public Key Policies.

5. In the Object Type area, double-click Certificate Services Client - Auto-Enrollment.

Image of menu

6. Change the Configuration Model to Enabled, then select both check boxes and click OK.

Image of menu

7. Perform the exact same changes for users under Default Domain Policy> User Configuration> Policies> Windows Settings> Security Settings> Public Key Policies.

8. To test the auto enrollment, press Windows key + R, type mmc, and press ENTER.

9. Click the File menu and click Add or Remove Snap-ins.

10. Click Certificates, and then click Add.

Image of menu

11. Select Computer account, and then click Next.

Image of menu

12. Select Local computer, and then click Finish.

Image of menu

13. Click the Certificates tab again, and then click Add. This time, select My user account, and click Finish.

Image of menu

14. Click Certificate (Local Computer)> Personal> Certificates. The list contains three certificates, but none of these are issued from the certificate templates we created earlier.

Image of menu

15. Click Certificate (Current User)> Personal> Certificates. Only one self-signed certificate is present. There are no certificates from our CA server.

Image of menu

16. To update Group Policy, open a Windows PowerShell console and type:

gpupdate /force

A certificate has been automatically issued to the domain controller from our new certificate template. A user certificate has also been issued!

Image of menu

Congratulations! When new computers are joined to the domain, they will automatically get a certificate.
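To verify the result from Windows PowerShell instead of the MMC, you can list the personal certificate stores after the Group Policy refresh, for example:

#Certificates that were enrolled for the computer account
Get-ChildItem Cert:\LocalMachine\My | Format-List Subject, NotAfter, EnhancedKeyUsageList

#Certificates that were enrolled for the current user
Get-ChildItem Cert:\CurrentUser\My | Format-List Subject, NotAfter, EnhancedKeyUsageList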

The full script

$start = Get-Date

#Some definitions about folder paths

#This folder contains two sub folders

# -ISOs - Stores all the DVD images

# -PostInstallationActivities - any scripts to customize the environment after installation

$labSources = 'E:\LabSources'

$vmDrive = 'D:'

$labName = 'PKISmall1'

#this folder stores the XML files that contain all the information about the lab

$labPath = "$vmDrive\$labName"

#create the target directory if it does not exist

if (-not (Test-Path $labPath)) { New-Item $labPath -ItemType Directory | Out-Null }

#create an empty lab template and define where the lab XML files and the VMs will be stored

New-LabDefinition -Path $labPath -VmPath $labPath -Name $labName -ReferenceDiskSizeInGB 60

#make the network definition

Add-LabVirtualNetworkDefinition -Name $labName -IpAddress 192.168.81.1 -PrefixLength 24

#and the domain definition with the domain admin account

Add-LabDomainDefinition -Name test1.net -AdminUser administrator -AdminPassword Password1

#these images are used to install the machines

Add-LabIsoImageDefinition -Name Server2012 -Path $labSources\ISOs\en_windows_server_2012_r2_with_update_x64_dvd_4065220.iso -IsOperatingSystem

#these credentials are used for connecting to the machines. As this is a lab we use clear-text passwords

$installationCredential = New-Object PSCredential('Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force))

#the first machine is the root domain controller. Everything in $labSources\Tools gets copied to the machine's Windows folder

$role = Get-LabMachineRoleDefinition -Role RootDC @{ DomainFunctionalLevel = 'Win2012R2'; ForestFunctionalLevel = 'Win2012R2' }

Add-LabMachineDefinition -Name S1DC1 `

    -MemoryInMb 512 `

    -IsDomainJoined `

    -DomainName test1.net `

    -Network $labName `

    -IpAddress 192.168.81.10 `

    -DnsServer1 192.168.81.10 `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER' `

    -Roles $role

#the second will be a member server configured as Root CA server. Everything in $labSources\Tools gets copied to the machine's Windows folder

$role = Get-LabMachineRoleDefinition -Role CaRoot

Add-LabMachineDefinition -Name S1CA1 `

    -MemoryInMb 512 `

    -IsDomainJoined `

    -DomainName test1.net `

    -Network $labName `

    -IpAddress 192.168.81.20 `

    -DnsServer1 192.168.81.10 `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER' `

    -Roles $role

#This all has created the lab configuration in memory. Next step is to export it to XML. You could have made the

#lab definitions in XML as well or can do modifications in the XML if this seems easier.

Export-LabDefinition -Force -ExportDefaultUnattendedXml

#Set trusted hosts to '*' and enable CredSSP

Set-LabHostRemoting

#Now the XML files need to be reimported. Some basic checks are done for duplicate IP addresses, machine names, domain

#membership, etc. If this reports errors please run "Test-LabDefinition D:\LabSettings\Lab.xml" for more information

Import-Lab -Path (Get-LabDefinition).LabFilePath

#Now the actual work begins. First the virtual network adapter is created and then the base images per OS

#All VMs are diffs from the base.

Install-Lab -NetworkSwitches -BaseImages -VMs

#This sets up all domains / domain controllers

Install-Lab -Domains

#Install CA server(s)

Install-Lab -CA

#Start all machines that have not yet been started

Install-Lab -StartRemainingMachines

$end = Get-Date

Write-Host "Setting up the lab took $($end - $start)"

What’s next?

The next post describes how to install a PKI hierarchy the way it is typically set up in production. This means there will be two CAs: one stand-alone (the root CA) and one Enterprise (the subordinate CA).

~ Raimund and Per

Be sure to join us tomorrow when Raimund and Per bring us Part 5.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Troubleshoot Get-VM Cmdlet


Summary: Learn about a quick troubleshooting tip for the Get-VM cmdlet.

Hey, Scripting Guy! Question I am using the Get-VM cmdlet to find information about virtual machines on my local laptop, but it does not
           appear to do anything. What could be the matter?

Hey, Scripting Guy! Answer The Get-VM cmdlet must be run with Admin permissions. But unfortunately, it does not tell you that when
           run in a non-elevated Windows PowerShell console. To launch the Windows PowerShell console with
           Admin permissions, right-click the Windows PowerShell console icon and select Run as Administrator 
           from the action menu.
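If you use Get-VM in scripts, a small safeguard is to test for elevation before calling it, for example:

$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
if ($principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Get-VM
}
else {
    Write-Warning 'Start Windows PowerShell with Run as Administrator to query Hyper-V.'
}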

AutomatedLab Tutorial Series Part 5: Install CA Two-Tier Hierarchy for PKI


Summary: Microsoft PFEs, Raimund Andree and Per Pedersen, talk about using Windows PowerShell to automate the deployment of a PKI environment as it would typically be deployed in a production environment.

Microsoft Scripting Guy, Ed Wilson, is here. Today is Part 5 of a series by Microsoft PFEs, Raimund Andree and Per Pedersen. Read their previous posts here:

Following up on Part 4, this post explains how to deploy a public key infrastructure (PKI) environment as it would typically be deployed in a production environment. The PKI environment will be deployed using two servers. One server will be a root certification authority (CA) and the other will be a subordinate CA server.

The root CA will be in a workgroup and the subordinate CA will be in a domain. Subsequently, you will be able to request and issue certificates from the subordinate CA server to all computers and users in your lab. Also, if you want to sign Windows PowerShell scripts in the lab, a certificate can be created for this purpose.

Installation

If you have a version of AutomatedLab that is earlier than AutomatedLab 2.5, please uninstall it and install the latest version. You can find what you need on Microsoft TechNet: AutomatedLab.

The installation process for AutomatedLab is explained in AutomatedLab Tutorial Part 2: Create a Simple Lab.

Prerequisites for AutomatedLab

AutomatedLab requires Hyper-V and Windows PowerShell 3.0 (or higher). Hence, you need one of the following operating systems on the host where you want to install the lab:

  • Windows Server 2012 R2
  • Windows Server 2012
  • Windows 8.1
  • Windows 8

    Note  Although Windows Server 2008 R2 could work and Windows 10 hasn’t been tested, at this time, we recommend that you use one of the listed operating systems on the host machine.

AutomatedLab scripts need to be running directly on the host where the lab environment (the virtual machines) will be created.

For more information about the overall installation process, refer to the previous posts.

Prerequisites for installing CA servers in the lab

The only prerequisite for the certification authority (CA) installation is that the server needs to be running Windows Server 2012 R2 or Windows Server 2012.

Defining the lab machines

For the subordinate server, a domain is needed. Hence, a domain controller needs to be installed. The installation is the same as in Part 4. The domain and the domain controller are defined as follows:

$role = Get-LabMachineRoleDefinition -Role RootDC `

   -Properties @{DomainFunctionalLevel = "Win2012R2"

                 ForestFunctionalLevel = "Win2012R2"}

Add-LabMachineDefinition -Name S1DC1 `

    -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.10 `

    -DnsServer1 192.168.81.10 `

    -DomainName test1.net `

    -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

Then the root CA needs to be installed. This server will be a stand-alone CA server—it is not joined to any domain.

The certification authority (like the domain controller) is a role in AutomatedLab, and this role needs to be specified when defining the lab machine. The role is selected by using the Get-LabMachineRoleDefinition cmdlet as follows:

$role = Get-LabMachineRoleDefinition -Role CaRoot

The lab machine can be defined by using the selected role, for example:

Add-LabMachineDefinition -Name S1ROOTCA1 `

    -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.11 `

    -DnsServer1 192.168.81.10 `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

Next, the subordinate CA server needs to be defined. This is the command:

$role = Get-LabMachineRoleDefinition -Role CaSubordinate

The lab machine can be defined by using the selected role. This command is the same as defining the root CA—only the content of the $role variable is different.

Add-LabMachineDefinition -Name S2SUBCA1 `

    -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.12 `

    -DnsServer1 192.168.81.10 `

    -DomainName test1.net `

    -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

When all the machines have been defined, start the installation of the lab as usual:

  1. Export the lab definition.
  2. Import it (which also validates the configuration and reports any errors).
  3. Start the installation, which will create the virtual network, create base images, create virtual machines running Hyper-V, and install the roles found in the lab.

The script looks like this:

Export-LabDefinition -Force -ExportDefaultUnattendedXml

Import-Lab -Path (Get-LabDefinition).LabFilePath

Install-Lab -NetworkSwitches -BaseImages -VMs

Install-Lab -Domains

At this point, the domain controller is installed and ready. Now the installation of the certification authority needs to be started. This is done like this:

Install-Lab -CA

Notice that you do not need to instruct the Install-Lab cmdlet about how to install the CA and how it should be configured. This is done automatically.

The lab is ready with a domain controller (hosting a domain, of course), a stand-alone root CA, and an Enterprise subordinate CA. Now, you can request and issue certificates (from the subordinate CA) for use in your lab!

Customize the configuration of the CA servers

The first installation was very easy because the entire configuration is automatic when you call Install-Lab -CA. Now let’s try installing a PKI environment where we define some of the CA configuration. Even though the default installation will work in the majority of situations for a test lab, it can be necessary to specify certain parts of the configuration for the PKI environment.

First of all, the current lab needs to be removed:

Remove-Lab -Path <path to the lab.xml file>

The Remove-Lab cmdlet turns off and removes the virtual machines, the disks, and the network adapter.

The domain and the domain controller need to be defined as previously:

$role = Get-LabMachineRoleDefinition -Role RootDC `

-Properties @{DomainFunctionalLevel = "Win2012R2"

              ForestFunctionalLevel = "Win2012R2"} 

Add-LabMachineDefinition -Name S1DC1 `

    -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.10 `

    -DnsServer1 192.168.81.10 `

    -DomainName test1.net `

    -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

When you define the CA, you have the option of specifying configuration parameters. Take a look at the following:

$role = Get-LabMachineRoleDefinition `

  -Role CaRoot @{CACommonName = "MySpecialRootCA1"

                 KeyLength = "4096"

                 ValidityPeriod = "Year"

                 ValidityPeriodUnits "20"}

The lab machine can be defined by using the selected role with the customized configuration.

Add-LabMachineDefinition -Name S1ROOTCA1 -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.11 `

    -DnsServer1 192.168.81.10 `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

Lastly, the role for the subordinate CA needs to be defined. The command is the same as we used previously; only the content of the $role variable is different:

$role = Get-LabMachineRoleDefinition `

  -Role CaSubordinate @{CACommonName = "MySpecialSubCA1"

                 KeyLength = "4096"

                 ValidityPeriod = "Year"

                 ValidityPeriodUnits = "20"}

To define (add) the machine, type the following command. This command is the same as the command we used to define the root CA—only the content of the $role variable is different.

Add-LabMachineDefinition -Name S1SUBCA1 -MemoryInMb 512 `

    -Network $labNetworkName `

    -IpAddress 192.168.81.12 `

    -DnsServer1 192.168.81.10 `

    -DomainName test1.net `

    -IsDomainJoined `

    -Roles $role `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER'

Use the following script to perform the actual installation of the lab:

Export-LabDefinition -Force -ExportDefaultUnattendedXml

Import-Lab -Path (Get-LabDefinition).LabFilePath

Install-Lab -NetworkSwitches -BaseImages -VMs

Install-Lab –Domains

Install-Lab -CA

Also notice that the command for installing the CAs is almost the same as the command we previously used. The difference is that the parameters specified in the role definitions are now passed to the CA installation script.

The lab is ready with a domain controller (hosting a domain) and with a complete two-tier PKI hierarchy configured by using our customized parameters.

To see all the parameters that are possible to specify when you install a CA server in AutomatedLab, you can type the following:

Get-Help Install-LWLabCAServers -Parameter *

Or to see only the names of the parameters (without the detailed information), type:

(Get-Command Install-LWLabCAServers).Parameters.Keys

    Note  Changing the parameters for the CA servers requires that you understand how the corresponding CA
    configuration parameters work. Hence, we recommend customizing these parameters only if you know PKI and
    actually need the customization.

Supported PKI configurations in AutomatedLab

AutomatedLab supports one-tier and two-tier deployments for the PKI. This means that you can solely deploy a root CA, or you can deploy a root CA with a subordinate CA to this root CA.

AutomatedLab supports only PKI deployments in the same Active Directory forest. That is, deployment of a root CA in one Active Directory forest and a subordinate CA in another Active Directory forest is not supported.

The full script

$start = Get-Date

#Some definitions about folder paths

#This folder contains two sub folders

# -ISOs - Stores all the DVD images

# -PostInstallationActivities - any scripts to customize the environment after installation

$labSources = 'E:\LabSources'

$vmDrive = 'D:'

$labName = 'PKITypical1'

#this folder stores the XML files that contain all the information about the lab

$labPath = "$vmDrive\$labName"

#create the target directory if it does not exist

if (-not (Test-Path $labPath)) { New-Item $labPath -ItemType Directory | Out-Null }

#create an empty lab template and define where the lab XML files and the VMs will be stored

New-LabDefinition -Path $labPath -VmPath $labPath -Name $labName -ReferenceDiskSizeInGB 60

#make the network definition

Add-LabVirtualNetworkDefinition -Name $labName -IpAddress 192.168.81.1 -PrefixLength 24

#and the domain definition with the domain admin account

Add-LabDomainDefinition -Name test1.net -AdminUser administrator -AdminPassword Password1

#these images are used to install the machines

Add-LabIsoImageDefinition -Name Server2012 -Path $labSources\ISOs\en_windows_server_2012_r2_with_update_x64_dvd_4065220.iso -IsOperatingSystem

#these credentials are used for connecting to the machines. As this is a lab we use clear-text passwords

$installationCredential = New-Object PSCredential('Administrator', ('Password1' | ConvertTo-SecureString -AsPlainText -Force))

#the first machine is the root domain controller. Everything in $labSources\Tools gets copied to the machine's Windows folder

$role = Get-LabMachineRoleDefinition -Role RootDC @{ DomainFunctionalLevel = 'Win2012R2'; ForestFunctionalLevel = 'Win2012R2' }

Add-LabMachineDefinition -Name S1DC1 `

    -MemoryInMb 512 `

    -IsDomainJoined `

    -DomainName test1.net `

    -Network $labName `

    -IpAddress 192.168.81.10 `

    -DnsServer1 192.168.81.10 `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER' `

    -Roles $role

#the second will be a member server configured as Root CA server. Everything in $labSources\Tools gets copied to the machine's Windows folder

$role = Get-LabMachineRoleDefinition -Role CaRoot

Add-LabMachineDefinition -Name S1ROOTCA1 `

    -MemoryInMb 512 `

    -Network $labName `

    -IpAddress 192.168.81.20 `

    -DnsServer1 192.168.81.10 `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER' `

    -Roles $role

#the third will be a member server configured as Subordinate CA server. Everything in $labSources\Tools gets copied to the machine's Windows folder

$role = Get-LabMachineRoleDefinition -Role CaSubordinate

Add-LabMachineDefinition -Name S2SUBCA1 `

    -MemoryInMb 512 `

    -IsDomainJoined `

    -DomainName test1.net `

    -Network $labName `

    -IpAddress 192.168.81.30 `

    -DnsServer1 192.168.81.10 `

    -InstallationUserCredential $installationCredential `

    -ToolsPath $labSources\Tools `

    -OperatingSystem 'Windows Server 2012 R2 SERVERDATACENTER' `

    -Roles $role

#This all has created the lab configuration in memory. Next step is to export it to XML. You could have made the

#lab definitions in XML as well or can do modifications in the XML if this seems easier.

Export-LabDefinition -Force -ExportDefaultUnattendedXml

#Set trusted hosts to '*' and enable CredSSP

Set-LabHostRemoting

#Now the XML files need to be reimported. Some basic checks are done for duplicate IP addresses, machine names, domain

#membership, etc. If this reports errors please run "Test-LabDefinition D:\LabSettings\Lab.xml" for more information

Import-Lab -Path (Get-LabDefinition).LabFilePath

#Now the actual work begins. First the virtual network adapter is created and then the base images per OS

#All VMs are diffs from the base.

Install-Lab -NetworkSwitches -BaseImages -VMs

#This sets up all domains / domain controllers

Install-Lab -Domains

#Install CA server(s)

Install-Lab -CA

#Start all machines that have not yet been started

Install-Lab -StartRemainingMachines

$end = Get-Date

Write-Host "Setting up the lab took $($end - $start)"

What’s next?

The next post discusses how to manage software inside your lab and how to run custom tasks by leveraging the AutomatedLab infrastructure.

Thanks to Raimund and Per. This is really helpful stuff. Please join us tomorrow for Part 6.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Import Certificate


Summary: Use Windows PowerShell to import a certificate.

Hey, Scripting Guy! Question How can I use Windows PowerShell to automate the installation of a certificate?

Hey, Scripting Guy! Answer Use the Import-Certificate cmdlet, and specify the certificate store location and the path to
           the certificate file, for example:

Import-Certificate –filepath c:\fso\mycert.cert –certStorelocation cert:\currentuser\my

AutomatedLab Tutorial Series Part 6: Install Software and Run Custom Activities


Summary: Microsoft PFEs, Raimund Andree and Per Pedersen, explain how to customize your existing environment and install software to some or all machines in your lab.

Microsoft Scripting Guy, Ed Wilson, is here. Today is Part 6 of a series written by Microsoft PFEs Raimund Andree and Per Pedersen. Read their previous posts here:

In the previous blog posts, you learned how easy it is to create complex lab environments by using AutomatedLab. This post explains how to customize your existing environment and install software to some or all machines in the lab.

Installation

If you have a version of AutomatedLab that is earlier than AutomatedLab 2.5, please uninstall it and install the latest version. You can find what you need on Microsoft TechNet: AutomatedLab.

The installation process for AutomatedLab is explained in AutomatedLab Tutorial Part 2: Create a Simple Lab.

Prerequisites for extending the lab

The following examples are based on the lab created in AutomatedLab Tutorial Part 3: Working with Predefined Server Roles. The full script is provided at the end of this post.

For your convenience, all blog posts are also part of the AutomatedLab installation file. After you install AutomatedLab, you can find several lab scenarios in the Documents > AutomatedLab Sample Scripts folder.

Windows PowerShell remoting

AutomatedLab heavily leverages Windows PowerShell remoting. The virtual machines are created locally on the Hyper-V host, but all other setup and configuration tasks are triggered on the virtual machines by using Windows PowerShell remoting.

Some details and information about how AutomatedLab uses Windows PowerShell remoting are explained in AutomatedLab Tutorial Part 1: Introduction to AutomatedLab.

Note  There is an excellent tutorial that covers most aspects of Windows PowerShell remoting. Unfortunately,
the series was written for Windows PowerShell 2.0, so features in Windows PowerShell 4.0 and
Windows PowerShell 3.0 are missing. For more information, see PowerShell 2.0 remoting guide.

Invoke-Command

Introduction to the cmdlet

One of the first cmdlets you learn about when using Windows PowerShell remoting is Invoke-Command. This cmdlet takes a script block or a path to a script file and the computer name(s) on which to run the task. Another option is providing an existing session instead of the computer name.

Note  To connect to a machine that is not part of the workstation’s domain, you need to specify credentials and put the machine into the Trusted Hosts list. (This is explained in Part 1 of this series and in the PowerShell 2.0 remoting guide).
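For reference, adding a lab machine to the Trusted Hosts list is a one-liner in an elevated console. This is a minimal example; note that it replaces the current value, so merge any existing entries yourself:

#Allow the local WinRM client to connect to the lab's domain controller by name
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value 'S1DC1' -Force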

For example, if you want to retrieve the current time from the domain controller (S1DC1), the command would look like this:

$cred = Get-Credential test1\administrator #the password is "Password1"

$command = { Get-Date }

Invoke-Command -ComputerName S1DC1 -Credential $cred -ScriptBlock $command

If you want to hard code the credential inside the script (not a good idea for production environments, but fine when working in a lab), it gets a bit more complex:

$username = 'test1\Administrator'

$password = 'Password1' | ConvertTo-SecureString -AsPlainText -Force

$cred = New-Object pscredential($username, $password)

$command = { Get-Date }

Invoke-Command -ComputerName S1DC1 -Credential $cred -ScriptBlock $command

By providing the computer name, Windows PowerShell creates a new PSSession to the remote computer (and destroys it when done) each time you use Invoke-Command. PSSessions can be created manually and can be reused. This is efficient if you want to execute many scripts or script blocks.

Persistent sessions

The cmdlet to create a persistent session is New-PSSession. You need to provide the computer name and the credentials, for example:

$username = 'test1\Administrator'

$password = 'Password1' | ConvertTo-SecureString -AsPlainText -Force

$cred = New-Object pscredential($username, $password)

$session = New-PSSession -ComputerName S1DC1 -Credential $cred

$command = { Get-Date }

Invoke-Command -Session $session -ScriptBlock $command

After the job is done, you can remove the PSSession manually by calling Remove-PSSession:

$session | Remove-PSSession

AutomatedLab makes this easier by providing the cmdlet Invoke-LabCommand.

Invoke-LabCommand

Introduction to the cmdlet

Invoke-LabCommand takes care of the credentials for you. This might not seem that interesting in a small lab, but if there are multiple forests, domains, or workgroups, it makes things much easier.

You need to import the existing lab first. A validation is not required because the lab already exists.

Import-Lab D:\FirstLab\Lab.xml -NoValidation

Then you can use Invoke-LabCommand:

$command = { Get-Date }

Invoke-LabCommand -ComputerName S1DC1 -ScriptBlock $command -PassThru

The PassThru switch is required to get the result back. During lab installations with AutomatedLab, the output of large setups such as SQL Server or Exchange Server would be too much to handle on the pipeline. Therefore, the result is always stored in a variable, and PassThru returns the variable content.

VERBOSE: The Output of the task on machine 'S1DC1' will be available in the variable 'fc1725b7-0ff0-47ea-bdc1-1833103a679c'

If you want to get the current time of all the lab machines, it is a one-liner:

Invoke-LabCommand -ComputerName (Get-LabMachine -All) -ScriptBlock { Get-Date } -PassThru

The same pattern works for scripts; simply use the FilePath parameter instead of ScriptBlock:

$data = Invoke-LabCommand -ComputerName (Get-LabMachine -All) -FilePath D:\Get-DiagnosticData.ps1 -PassThru

This could be quite a long-running operation. That’s why the Invoke-LabCommand cmdlet provides an AsJob switch (as does Invoke-Command). When AsJob is used, the actual data gathered by the script is not returned, but instead, job objects are returned. The cmdlet creates one background job per machine. Everything works in parallel and the runtime is dramatically reduced.

We recommend that you store the jobs in a variable to have a handle on them. The jobs can be piped to Wait-Job and then further to Receive-Job. In this way, Windows PowerShell waits until all jobs are finished and then retrieves the data from the job objects.

$jobs = Invoke-LabCommand -ComputerName (Get-LabMachine -All) -FilePath D:\Get-DiagnosticData.ps1 -PassThru -AsJob

$data = $jobs | Wait-Job | Receive-Job

   Note  The Get-LabMachine cmdlet has various parameter patterns. It can get machines by name or by role(s),
   or it can return all lab machines.

Persistent sessions

Invoke-LabCommand uses persistent sessions even when using the ComputerName parameter. It does not remove the session when the command is complete, but it reuses the session if called at a later time. Invoke-LabCommand reuses sessions because AutomatedLab internally tracks all opened sessions. This can be observed when looking at the VERBOSE output, which can be disabled if it is too noisy:

$Global:VerbosePreference = 'SilentlyContinue'

This part of the VERBOSE log shows that the internal worker cmdlet, New-LWPSSession, looks for existing sessions. In the following example, it found four sessions that are open to the same machine. Three of them are removed and the remaining one will be reused.

VERBOSE: Starting Installation Activity '<unnamed>'

VERBOSE: Credentials prepared for user 'test1.net\administrator'

VERBOSE: Creating session to computer 'S1DC1'

VERBOSE: New-LWPSSession Entering... (ComputerName=S1DC1,Credential=UserName: test1.net\administrator / Password: Password1)

VERBOSE: Found orphaned sessions. Removing 3 sessions: AL_96f0c611-d31a-4e82-a021-5a09fe8e8318, AL_3ec60a75-6a4a-4c9f-a34b-9d8caec09d51, AL_b24ea3d5-c264-4455-b420-cda37fa0a5f4

VERBOSE: Session AL_63ef90c2-dadc-4785-9c89-0af7e6f10ca7 is available and will be reused

Invoke-LabCommand is the ideal tool for making mass changes in your lab because:

  • You do not need to care about credentials (if the target machine is part of the lab configuration).
  • It reuses sessions for better performance.

Double-hop authentication

The default authentication protocol in a domain environment is Kerberos. The Kerberos protocol does not allow double-hop authentication, so from within a remote session you cannot connect to yet another machine. However, this is sometimes absolutely necessary.

A remote server does not get the user’s password or ticket-granting ticket (TGT, which can be compared to your passport). Instead, the server that a user connects to receives a Kerberos service ticket with the user’s security identifier (SID) and the SIDs for all groups the user is a member of. The server does not have any information to perform authentication on the user’s behalf because the credentials are not stored on the remote server.

For situations where a second hop (double-hop authentication) is required, Windows PowerShell offers the CredSSP authentication provider. By using CredSSP, Windows PowerShell forwards the user name and password to the remote computer.

This authentication provider is disabled by default on both the server and the client, but it is enabled on all machines that are installed with AutomatedLab. To make use of it, specify the UseCredSSP switch that Invoke-LabCommand provides.

The following command results in an error message because requesting data from S1DC1 through a remote session on S1Sql1 requires double-hop authentication:

Invoke-LabCommand -ComputerName S1Sql1 -ScriptBlock { dir \\S1DC1\c$ } -PassThru

To make this possible, use the UseCredSSP switch.

Invoke-LabCommand -ComputerName S1Sql1 -ScriptBlock { dir \\S1DC1\c$ } -PassThru -UseCredSsp

By default, Windows PowerShell can only forward "fresh" credentials. This requires using Get-Credential or manually creating a new PSCredential object. AutomatedLab also does this in background, so you do not need to provide any credentials.
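For comparison, this is roughly what the same double-hop call looks like with plain Invoke-Command, where you create the fresh credential and request CredSSP yourself. This is a sketch that assumes CredSSP is enabled on both sides, as AutomatedLab configures it:

$password = 'Password1' | ConvertTo-SecureString -AsPlainText -Force
$cred = New-Object pscredential('test1\Administrator', $password)

#The explicitly created, fresh credential is forwarded to S1Sql1, which can then reach S1DC1
Invoke-Command -ComputerName S1Sql1 -Credential $cred -Authentication Credssp -ScriptBlock { dir \\S1DC1\c$ }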

Software installation

A nasty task (especially in larger labs) is software installation: making sure that the same package is installed on all machines. AutomatedLab also helps here. It leverages the remoting infrastructure provided by the commands discussed earlier and lets you install packages on all lab machines with a one-liner.

Single package installation

As an example, popular packages to install on all machines are Notepad++ and Wireshark. AutomatedLab needs to know only two things to install the software on all lab machines (or specific ones):

  1. Where is the .exe, .msi, or .msu file?
  2. What is the command-line switch for a silent installation?

The following command installs Notepad++ on the lab’s SQL Server:

Install-LabSoftwarePackage -ComputerName S1Sql1 -Path E:\LabSources\SoftwarePackages\Notepad++.exe -CommandLine /S

The file is copied to the machine (also over Windows PowerShell remoting, by using the PSFileTransfer module).
The installation is started as a background job, meaning that you do not have to wait until the installation is finished.

In the following command, Wireshark should be installed on all lab machines and Windows PowerShell should wait until all jobs are finished:

Install-LabSoftwarePackage -ComputerName (Get-LabMachine -All) -Path E:\LabSources\SoftwarePackages\WireShark.exe -CommandLine /S -PassThru | Wait-Job

Mass software installations

If many packages should be installed at the same time, the packages can be defined first and then passed to the Install-LabSoftwarePackages cmdlet. Get-LabSoftwarePackage lets you define packages and add them to an array.

The following example installs Classic Shell, Notepad++, and WinRAR on all lab machines:

$labSources = 'E:\LabSources'

$packs = @()

$packs += Get-LabSoftwarePackage -Path $labSources\SoftwarePackages\ClassicShell.exe -CommandLine '/quiet ADDLOCAL=ClassicStartMenu'

$packs += Get-LabSoftwarePackage -Path $labSources\SoftwarePackages\Notepad++.exe -CommandLine /S

$packs += Get-LabSoftwarePackage -Path $labSources\SoftwarePackages\winrar.exe -CommandLine /S

Install-LabSoftwarePackages -Machine (Get-LabMachine -All) -SoftwarePackage $packs -PassThru | Wait-Job

As you can see, managing software in your lab becomes a trivial and convenient task if you use AutomatedLab as your lab framework.

What’s next?

The next post will be a short one about post-installation activities. This is about the same as running scripts or script blocks with Invoke-LabCommand, but it provides a way to set dependencies on ISO images or folders.

~Raimund and Per

Thank you, Raimund and Per—awesome job.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy


PowerTip: Export Certificate by Using PowerShell


Summary: Learn how to use Windows PowerShell to export a certificate.

Hey, Scripting Guy! Question How can I use Windows PowerShell to export a certificate?

Hey, Scripting Guy! Answer Use the Export-Certificate cmdlet and specify the file output destination. By default, it will export a file of the type CERT. Here is an example:

Export-Certificate -Cert (Get-Item Cert:\CurrentUser\My\3EF2) -FilePath c:\fso\mycert.cert
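If you do not know the thumbprint, you can also pipe a certificate object straight from the certificate drive to the cmdlet. Here is a sketch that exports the first certificate found in the CurrentUser\My store (assuming at least one certificate exists there):

Get-ChildItem -Path Cert:\CurrentUser\My |
    Select-Object -First 1 |
    Export-Certificate -FilePath C:\fso\firstcert.cert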

Weekend Scripter: Announcing WMI Explorer 2.0


Summary: Microsoft senior support escalation engineer, Vinay Pamnani, talks about WMI Explorer 2.0.

Microsoft Scripting Guy, Ed Wilson, is here. Today I am pleased to invite back Microsoft senior support escalation engineer, Vinay Pamnani, who supports Configuration Manager. He's here to talk about the latest version of his WMI Explorer tool. Take it away, Vinay...

I’m excited to announce that WMI Explorer 2.0 is now available for download from CodePlex: WMI Explorer. Special thanks to Tim Helton for his guidance and support.

Requirements:

New features include: 

  • Asynchronous mode for enumeration of classes and instances in the background
  • Method execution
  • System Center Configuration Manager (SMS) mode
  • Property tab, which shows properties of a selected class
  • Input and output parameter information with Help on the Methods tab
  • List view output mode for query results
  • Update notifications when a new version of WMI Explorer is available
  • Ability to connect to multiple computers at the same time
  • Quick filter for classes and instances
  • User preferences
  • View WMI Provider process information

In addition, the following features have been improved:

  • UI display on higher scaling levels and resolution
  • Connect As option to provide alternate credentials
  • Display of embedded object names in Property grid

Note  This is not an official Microsoft tool, and it is available “as is” with no support.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Find Module to Import


Summary: Use Tab expansion in Windows PowerShell to find a module to import.

Hey, Scripting Guy! Question I know I need the full name of a specific Windows PowerShell module to import it, but I am not sure of the complete name, and I do not want to do a lot of typing. The module exists in a known Windows PowerShell module location. How can I quickly find the module name?

Hey, Scripting Guy! Answer Use Tab expansion with the Import-Module cmdlet. You only need to know the first letter of the module; then you can tab through the module names.
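For example, typing the following and pressing the Tab key repeatedly cycles through all available module names that begin with "Pe" (the names you see depend on the modules installed on your system):

Import-Module Pe<Tab>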

Weekend Scripter: Simplify to Troubleshoot PowerShell Script


Summary: Microsoft Scripting Guy, Ed Wilson, helps an old friend fix his script.

Microsoft Scripting Guy, Ed Wilson, is here. I am a big believer in simplicity. I enjoy simple things: a nice cup of hot tea, reading a good book, watching a sunset on the beach, taking a train ride through the Alps, or spending an evening conversing with friends. Simplicity is one of the things I like about Windows PowerShell. It can take a complex operation and make it simple. Desired State Configuration (DSC) is one such tool.

When I see a Windows PowerShell script that is complex, I naturally begin to look at it to see if there are areas of improvement. Most of the time, there are. Occasionally and unfortunately, the answer is, "Nope." This is because Windows PowerShell is ever evolving, and it continues to improve. We know there are things that we still need to simplify. To see what we are currently working on, check out Windows Management Framework 5.0 Preview.

At times, a source of the complexity is the approach to solving the problem. As I stated in my book Windows PowerShell Best Practices, if I begin with a technology, I can back myself into a corner. Instead, I begin with a desired outcome, and then I try to find the best way to get there. The following examples demonstrate the difference.

     A. “How can I use WMI to find the IP address of my active network adapter?” (This assumes that I will need to use WMI to find the answer.)

As opposed to:

     B. “I need to find the IP address of my active network adapter. How can I do this?” (This approach states the task, and then searches for the best solution.)

Practical example

I received an email the other day, and the writer said he was having problems outputting the information he was getting from WMI about his network adapters to a .csv file. Here is his script:


# truncated function.
# provide variables
$this = @() # initialise results table as an array.
$ip = "mylaptop.mydomain.com" # some box with -gt 1 Nic.

foreach ($connected_nic in ($Adapter = Get-WmiObject -computer $ip win32_networkadapter -filter "NetConnectionStatus='2'" | where {$_.PNPDeviceID -notmatch "1394"})) { # Find electrically connected adapters, which are not firewire.
    # Write-host -ForegroundColor Green $Connected_nic
    if ($cfg=($dns=(Get-WmiObject -ComputerName $ip Win32_NetworkAdapterConfiguration -filter "Index = '$($connected_nic.Index)'")).IPaddress -like "192.168.*") { # test each connected adapter to see if it's IP = 192.168.X.X
        $ips = $dns | select -ExpandProperty Ipaddress
        # $dns # check input is as expected
        $results = [ordered] @{
            "Netbios Name"         = $dns.DNSHostname
            "IPv4 Address"         = $ips
            "Subnet Mask"          = [String]$dns.IpSubnet
            "Default Gateway"      = [String]$dns.DefaultIPGateway
            "Primary DNS Server"   = $dns.DNSServerSearchOrder[0]
            "Secondary DNS Server" = $dns.DNSServerSearchOrder[1]
            "MAC Address"          = $dns.MACaddress
        }
        $dnsout = New-Object PSObject -Property $results # Create a new row for the report from our hash table.
        $this += $dnsout # Add this row to the main report.
    } # Configurations of interest test.
} # End connected adapter test

$this

A few changes to the script

One of the first changes I make to a script, if I can, is to change Get-WmiObject to Get-CimInstance. Get-CimInstance has been available since Windows PowerShell 3.0. It is faster and more robust, and it permits lots of cool things for retrieving data (such as using CIM sessions). So this change is always a no-brainer for me.
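For example, a CIM session can be created once and reused for several queries, which avoids setting up a new connection for each call (a minimal sketch; the computer name is a placeholder):

$session = New-CimSession -ComputerName Server01                                     # placeholder computer name
Get-CimInstance -CimSession $session -ClassName Win32_NetworkAdapter                 # reuse the session for each query
Get-CimInstance -CimSession $session -ClassName Win32_NetworkAdapterConfiguration
Remove-CimSession -CimSession $session                                               # clean up when finished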

The next thing I noticed is that he makes a WMI query, and then he pipes the results to Where-Object to do additional filtering. Here is that portion of the script:

$Adapter = Get-WmiObject -computer $ip win32_networkadapter -filter "NetConnectionStatus='2'" | where {$_.PNPDeviceID -notmatch "1394"}

I imagine the reason he did this is that he does not know how to write a compound WHERE clause in a WMI query. Here is my revision of this portion of the script:

Get-CimInstance Win32_NetworkAdapter -Filter "NetConnectionStatus = 2 AND PNPDeviceID != 1394"

The next thing he does is check whether an IP address falls within a particular range. He stores several intermediate results in different variables, and then he uses the Select-Object cmdlet to expand the IPaddress property. Here is his command:

if ($cfg=($dns=(Get-WmiObject -ComputerName $ip Win32_NetworkAdapterConfiguration -filter "Index = '$($connected_nic.Index)'")).IPaddress -like "192.168.*") { # test each connected adapter to see if it's IP = 192.168.X.X
    $ips = $dns | select -ExpandProperty Ipaddress
    # $dns # check input is as expected

This is part of the "Why are you doing this?" type of question. Part of his problem was outputting to a .csv file. If you output to a .csv file, and you have objects stored inside other objects, you end up with properties that say System.String[] or System.Object[], instead of the actual information you are seeking. Here is a sample of that output:

Image of command output

Often, I do not have to have a .csv file because most applications also accept XML input. XML knows how to handle embedded objects; they simply expand as different nodes. I could even output to XML, and then read the XML to create a .csv file if I had to have one. That might be simpler than building the objects in code.
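If a .csv file is ever required after all, the saved XML can be read back and flattened later. Here is one way to do that (a sketch; the calculated properties join any array values into single strings, and the property and file names match the ones used later in this post):

Import-Clixml -Path C:\fso\ipreport.xml |
    Select-Object DNSHostname,
        @{Name='IPSubnet';             Expression={$_.IPSubnet -join ';'}},
        @{Name='DefaultIPGateway';     Expression={$_.DefaultIPGateway -join ';'}},
        @{Name='DNSServerSearchOrder'; Expression={$_.DNSServerSearchOrder -join ';'}},
        MACAddress |
    Export-Csv -Path C:\fso\ipreport.csv -NoTypeInformation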

Because I decided that I did not have to output to a .csv file, I was able to simplify this portion of the script to another single line (% is an alias for the Foreach-Object cmdlet):

% {Get-CimInstance Win32_NetworkAdapterConfiguration -Filter "Index = $($_.index)"

Now he builds up his output object. First he creates an ordered hash table (the [ordered] attribute requires at least Windows PowerShell 3.0, which is how I knew that I could use Get-CimInstance). In the ordered hash table, he chooses the properties he wants to keep. After he does this, he creates another PSObject, and then he uses += to add the objects together. Finally, he outputs the objects, and as he told me in the email, he was going to write the output to a .csv file. Here is his script:

$results = [ordered] @{
    "Netbios Name"         = $dns.DNSHostname
    "IPv4 Address"         = $ips
    "Subnet Mask"          = [String]$dns.IpSubnet
    "Default Gateway"      = [String]$dns.DefaultIPGateway
    "Primary DNS Server"   = $dns.DNSServerSearchOrder[0]
    "Secondary DNS Server" = $dns.DNSServerSearchOrder[1]
    "MAC Address"          = $dns.MACaddress
}
$dnsout = New-Object PSObject -Property $results # Create a new row for the report from our hash table.
$this += $dnsout # Add this row to the main report.
} # Configurations of interest test.
} # End connected adapter test

$this

He did not need to do all of that. I also created a custom object, but I used my preferred method for doing this: the Select-Object cmdlet. I chose the properties he was interested in obtaining, and then I output the objects to an .xml file. Here is my command:

Select DNSHostname, IPSubnet, defaultIPGateWay, DNSServerSearchOrder, MacAddress
} | Export-Clixml -Depth 10 -Path c:\fso\ipreport.xml

That is it. Here is my complete script. It is one logical line of code:

Get-CimInstance Win32_NetworkAdapter -Filter "NetConnectionStatus = 2 AND PNPDeviceID != 1394" |
    % {Get-CimInstance Win32_NetworkAdapterConfiguration -Filter "Index = $($_.index)" |
        Select DNSHostname, IPSubnet, defaultIPGateWay, DNSServerSearchOrder, MacAddress} |
    Export-Clixml -Depth 10 -Path c:\fso\ipreport.xml

In addition to being shorter and easier to understand, my revised script works. If I had stayed with the limitation to output to a .csv file, I would have doomed myself to writing something similar to what the original query wanted to do. I could shorten it a bit, but it still would be rather complex. However, by freeing myself from assumptions as to what the solution would be, I was able to come up with a better, easier solution.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Query Cluster Shared Volumes' Free Space


Summary: Learn how to use Windows PowerShell to query for free space in Cluster Shared Volumes.

Hey, Scripting Guy! Question How can I use Windows PowerShell to find the free space in all my Cluster Shared Volumes for Hyper-V in Windows Server 2012 R2 or Windows Server 2012?

Hey, Scripting Guy! Answer Use the Get-ClusterSharedVolume cmdlet with the -Cluster parameter and your Hyper-V cluster name. Loop through each Cluster Shared Volume and output a Windows PowerShell custom object to the pipeline. The following example is a single-line command broken to display on the webpage:

Get-ClusterSharedVolume -Cluster HV-CLUSTER1 |
    ForEach-Object {
        [PSCustomObject]@{VolumeName = $_.Name; FreeSpace = $_.SharedVolumeInfo.Partition.FreeSpace}}
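To make the numbers easier to read, the same approach can report free space in gigabytes instead of bytes (a variation on the command above; the cluster name is still only an example):

Get-ClusterSharedVolume -Cluster HV-CLUSTER1 |
    ForEach-Object {
        [PSCustomObject]@{
            VolumeName  = $_.Name
            FreeSpaceGB = [math]::Round($_.SharedVolumeInfo.Partition.FreeSpace / 1GB, 2)
        }
    }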

Note  Today’s PowerTip is provided by Microsoft premier field engineer, Brian Wilhite.
