Channel: Hey, Scripting Guy! Blog

Automating DiskPart with Windows PowerShell: Part 5


Summary: Use Windows PowerShell to build scripts to automate DiskPart.

Hey, Scripting Guy! Question Hey, Scripting Guy! Can we build a DISKPART script to automatically format USB drives as bootable?

—SH

Hey, Scripting Guy! Answer Hello SH,

Honorary Scripting Guy, Sean Kearney, here. I’m filling in for our good friend, Ed Wilson. It’s Friday and Ed has had a long week (and a lot of feedback, I suspect, from my bad puns). So I suspect he’ll be all “tied up” with email.

We have pulled together a really cool function called Get-DiskPartInfo, which automates DiskPart to the point that its information is now an object we can consume with Windows PowerShell.

Note This is the final part in a series. If you are behind, please read the earlier parts first.

Let’s look at a basic DiskPart script to make a USB key bootable again:

SELECT DISK 2
CLEAN
CREATE PARTITION PRIMARY
FORMAT FS=NTFS QUICK
ASSIGN
ACTIVE

With our current advanced function, we can already identify USB flash drives and hard drives. Because we can also filter them by size, we can make a fairly educated guess about which devices are removable USB keys.

Educated guess? Well, the one problem that I haven’t been able to figure out an answer for is how to separate a hard drive USB device from a USB flash drive. That information is not presented in DiskPart.

But I can suggest that I think most of the USB flash drives I have are going to be under a certain size…let’s say 32 GB. And for my purposes (I would like to extend this to Microsoft Deployment Toolkit (MDT) 2012), I can probably suggest that they won’t be smaller than a certain size either—say 8 GB.
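The heuristic boils down to a simple predicate. Since the article's scripts are DiskPart and PowerShell, here is the filter sketched in Python purely for illustration; the function name and the exact thresholds are assumptions based on the guesses above:

```python
# Illustrative sketch (hypothetical name): a disk is treated as a
# likely removable USB key when it is a USB device whose size falls
# between the lower and upper bounds discussed above.
GB = 1024 ** 3

def is_likely_usb_key(disk_type, disk_size, min_size=7 * GB, max_size=65 * GB):
    """Return True when the disk looks like a removable USB flash key."""
    return disk_type == "USB" and min_size < disk_size < max_size
```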

Now let’s start building a new advanced function called Initialize-USBBoot. What we are going to do is build the script that is needed to make the keys bootable in DiskPart:

Function INITIALIZE-USBBOOT
{

[cmdletbinding()]
Param()

First, we’re going to identify the parameters for our bootable devices: USB drives between 7 GB and 65 GB:

$TYPE='USB'
$MIN=7GB
$MAX=65GB

And now that we have a cool new way to parse DiskPart, this all gets so much easier:

$DRIVELIST=(GET-DISKPARTINFO | WHERE { $_.Type -eq $TYPE -and $_.DiskSize -lt $MAX -and $_.DiskSize -gt $MIN })

This will return all drives that are seen by DiskPart, including their identified DiskID numbers, which we can use to build a single script for DiskPart.

Again, I’m going with a “simple is best” approach when I build the content. First, I’ll create the file for the DiskPart script:

NEW-ITEM -Path bootemup.txt -ItemType file -force | OUT-NULL

Then I step through every drive in the list and obtain its DiskID from DiskPart:

$DRIVELIST | FOREACH {
$DISKNUM=$_.DISKNUM

Now I’ll build the script. Because it’s simply a serial set of commands, we can build one script to do all the work:

ADD-CONTENT -Path bootemup.txt -Value "SELECT DISK $DiskNum"
ADD-CONTENT -Path bootemup.txt -Value "CLEAN"
ADD-CONTENT -Path bootemup.txt -Value "CREATE PARTITION PRIMARY"
ADD-CONTENT -Path bootemup.txt -Value "FORMAT FS=FAT32 QUICK"
ADD-CONTENT -Path bootemup.txt -Value "ASSIGN"
ADD-CONTENT -Path bootemup.txt -value "ACTIVE"

}

}

Now with this in place, I can run the following script:

INITIALIZE-USBBOOT
DISKPART /S .\BOOTEMUP.TXT

Now we can plug in a series of USB keys that fit those parameters and wipe them clean for booting!
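The script-building step is just serial string assembly per disk. To make that concrete, here is an illustrative Python sketch (the helper name is hypothetical) that produces the same kind of single DiskPart script for a list of disk numbers:

```python
# Illustrative sketch: build one DiskPart script that processes
# several disks in series, mirroring the Add-Content loop above.
COMMANDS = ["CLEAN", "CREATE PARTITION PRIMARY",
            "FORMAT FS=FAT32 QUICK", "ASSIGN", "ACTIVE"]

def build_diskpart_script(disk_numbers):
    """Return the text of a DiskPart script covering every disk number."""
    lines = []
    for n in disk_numbers:
        lines.append(f"SELECT DISK {n}")
        lines.extend(COMMANDS)
    return "\n".join(lines)
```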

How does MDT 2012 fit into all of this?

Let’s assume that you have a folder called C:\DeploymentContent, and you need to be able to have a simple solution for technicians to build their keys—a solution that means consistency in the process.

In Windows PowerShell, we can launch Robocopy.exe like any other application, but also pass parameters to it. Because our new Get-DiskPartInfo cmdlet will also return the drive letter for those USB keys, we can identify our USB flash keys with those same parameters, and pass the results to Robocopy.exe. Here’s a sample script that could meet this need:

$TYPE='USB'
$MIN=7GB
$MAX=65GB

$DRIVELIST=(GET-DISKPARTINFO | WHERE { $_.Type -eq $TYPE -and $_.DiskSize -lt $MAX -and $_.DiskSize -gt $MIN })

$DRIVELIST | FOREACH {

$Source="C:\DeploymentContent\"
$Destination=$_.DriveLetter

ROBOCOPY $Source $Destination /E

}
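The copy loop can be mirrored in other languages with standard-library tools. As an illustrative Python sketch (names are hypothetical; shutil.copytree approximates Robocopy's /E switch by copying subdirectories, including empty ones):

```python
import shutil
from pathlib import Path

# Illustrative sketch: mirror a deployment folder onto each detected
# drive, similar to looping Robocopy over $DRIVELIST above.
def copy_content(source, drive_letters):
    for drive in drive_letters:
        dest = Path(drive) / Path(source).name
        # copytree copies the whole tree, subdirectories included,
        # roughly matching ROBOCOPY $Source $Destination /E
        shutil.copytree(source, dest, dirs_exist_ok=True)
```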

There you have it! A bit of work to play with, but now we have an almost single-click solution to build those deployment keys. You could even leverage this to easily erase media keys and deploy documentation or client media.

By the way, if you don’t feel like typing, this entire solution is uploaded as a module on the Script Center Repository: Automate Creation of Bootable USB Keys with PowerShell.

And remember the choice is yours, as is the power…with Windows PowerShell!

I invite you to follow us on Twitter and Facebook. If you have any questions, send email to Ed at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Sean Kearney (filling in for our good friend Ed Wilson),
      Honorary Scripting Guy, Windows PowerShell MVP
           …and good personal friend of the BATCHman


PowerTip: Capture Console Application Data with PowerShell


Summary: Use Windows PowerShell to capture console application data.

Hey, Scripting Guy! Question How can I parse the output from a console application by using Windows PowerShell?

Hey, Scripting Guy! Answer Run the application as normal, but assign it to a Windows PowerShell object. For example, DriverQuery.exe will display output to the screen:

$INFO=(DriverQuery.exe)

This creates an array that you can pipe to Get-Member to discover available methods, or index into to access individual lines:

$INFO[3]

$INFO[-1].substring(4,7)
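For comparison, the same capture-and-slice pattern sketched in Python (using a portable stand-in command, since DriverQuery.exe is Windows-only):

```python
import subprocess
import sys

# Capture a console program's output as a list of lines, then index
# into it, mirroring $INFO[3] and $INFO[-1].substring(4,7) above.
# A small Python one-liner stands in for DriverQuery.exe here.
result = subprocess.run(
    [sys.executable, "-c",
     "print('line one'); print('line two'); print('line three')"],
    capture_output=True, text=True)
info = result.stdout.splitlines()

last = info[-1]            # last line, like $INFO[-1]
fragment = last[4:4 + 7]   # like .substring(4,7): start at 4, length 7
```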

Weekend Scripter: Creating ACLs for Windows Azure Endpoints—Part 1 of 2


Summary: Windows networking engineer, James Kehr, discusses using Windows Azure cmdlets to create endpoint ACLs.

Microsoft Scripting Guy, Ed Wilson, is here. This weekend we have a two-part series from guest blogger, James Kehr. It is really cool. Take it away, James…

Greetings my fellow netizens. James Kehr here with the Windows Networking Escalation Support team to discuss Windows Azure and how you can use the Windows Azure cmdlets to create endpoints and endpoint access control lists (ACLs). Before diving into the commands, I think it best that I take a little time to explain Windows Azure networking basics, including a quick primer about exactly what an ACL is. If you are familiar with these details feel free to skip a bit.

Only the basics

When I say basics, I really mean “the basics.” A deep dive into how networking in Windows Azure works would take far too long, so I shall summarize—and that will still take some time. So let’s begin…

Every organization within Windows Azure lives in its own little networking world called a virtual network (VNET). Each of these VNETs is isolated to the point where it cannot intrude into another organization’s VNET. Add a little networking wizardry, and this gives Microsoft the ability to securely host several thousand organizations that simultaneously share the same three private IPv4 address spaces within the same data center. Servers and services within a VNET can only be accessed in three ways:

  • By other servers and services in the same VNET
  • Through a Windows Azure gateway (VPN)
  • Through a public endpoint

From a network security perspective, the first two options are not very bothersome. Your servers and services are supposed to talk to each other, after all. If they are not supposed to, you can stick them in separate VNETs and they will no longer be able to. Likewise, the gateway part is not worrisome because site-to-site and point-to-site Windows Azure gateways (VPNs) use data encryption and integrity methods to ensure that no one can snoop on or modify your data in-flight. It is the public endpoint part that worries the people who worry about network security. Why? By default your public endpoints are just that. Public. To the entire Internet. Which is good or bad—depending on what your cloud service is used for.

This brings us to the main point of this blog post: ACLs.

Access control lists are not new, nor are they unique to Windows Azure. They are a very common networking method that is employed by networking security types to limit access to a network endpoint. Think file permissions for networking.

The ACL itself is a set of ordered rules that are applied to a networking endpoint. Each rule generally consists of a rule order number, a subnet, and one of two actions: permit or deny. Please be aware that subnets in Windows Azure are defined by using CIDR notation. When a network frame arrives at an endpoint with an ACL, the network device will process the frame against the rules, in order, and decide whether to permit or deny the traffic. When denied, the traffic is blocked. When permitted, the traffic is passed on to the destination. When a rule match is found, ACL processing stops.

The endpoints in Windows Azure are virtual, but ACLs still apply. When you create a cloud service, such as a virtual machine, the service is given a public IP address, a private IP address, and one or two endpoints. Each endpoint has a public and a local side that are connected together. This is called an endpoint map. The public side is exposed to the Internet, and by default, it is completely open. The local side is the TCP or UDP port that your server or service will be listening on.


For example, when a virtual machine is created in Windows Server, a Remote Desktop Protocol (RDP) endpoint is added. A public port of 60000 may be opened on the public IP with a corresponding local port of 3389 opened on the private IP. If you try to connect to the server from the Internet by using port 3389, the connection will fail. When you connect to the public IP on port 60000, the traffic will be forwarded to port 3389 on the private IP address, which is bound to the virtual machine and allows you to connect a remote desktop session from the Internet.
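The endpoint map described above behaves like a simple lookup table from public ports to private addresses. An illustrative Python model (the addresses and ports are hypothetical, chosen to match the RDP example):

```python
# Illustrative model of an endpoint map: each public port on the
# public IP forwards to a local port on the VM's private IP.
endpoint_map = {
    ("tcp", 60000): ("10.0.0.4", 3389),  # RDP: public 60000 -> local 3389
    ("tcp", 80): ("10.0.0.4", 80),       # HTTP passthrough
}

def forward(protocol, public_port):
    """Return (private_ip, local_port), or None when no endpoint exists."""
    return endpoint_map.get((protocol, public_port))
```

Connecting from the Internet on 3389 directly finds no endpoint, which is why that connection fails in the example above.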

Pop quiz time! If you want to connect to a Windows Azure virtual machine through a Windows Azure gateway or from a different server on the same VNET, what port should you use? Answer: 3389.

In both scenarios you are bypassing the public endpoint, and you do not need to worry about endpoint mappings or ACLs. The one exception is if you use the public name for the server (for example, service.cloudapp.net), which would take you through the public endpoint. Depending on routing…

I’ll stop there. You get the idea.

I think this is enough of a Windows Azure networking primer, so let’s move along to the Windows PowerShell parts, shall we?

Add ACLs to a single endpoint

As of this writing, you need Windows PowerShell to create endpoint ACLs in Windows Azure. More specifically, you need the Windows Azure module for Windows PowerShell. This feature may show up in the management portal very soon, maybe even by the time you read this, but this is still good information to have for automation and general Windows Azure cmdlet knowledge. For information about how to download the Windows Azure module for Windows PowerShell and how to set it up, see Get Started with Windows Azure Cmdlets.

After setup, you can add an endpoint and an ACL by using Windows PowerShell. For our examples today, you can imagine the following scenario…

You are setting up a web server, Web01, in Windows Azure for an intranet site. The only IPs you want to access the website are the app server, App01 (1.2.3.4), and the corporate proxy servers. The proxy servers are on the 172.16.0.0/29 subnet.

The script looks like this:

# create an endpoint for HTTP

Get-AzureVM -ServiceName Web01 | Add-AzureEndpoint -Name "HTTP" -Protocol tcp -PublicPort 80 -LocalPort 80 | Update-AzureVM

 

# create a new ACL

$acl = New-AzureAclConfig

 

# add some rules to the ACL

Set-AzureAclConfig -AddRule -ACL $acl -Order 0 -Action Permit -RemoteSubnet "1.2.3.4/32" -Description "Allow App01"

Set-AzureAclConfig -AddRule -ACL $acl -Order 1 -Action Permit -RemoteSubnet "172.16.0.0/29" -Description "Allow corp proxies"

Set-AzureAclConfig -AddRule -ACL $acl -Order 2 -Action Deny -RemoteSubnet "0.0.0.0/0" -Description "DenyAll"

 

# apply the ACL to the HTTP endpoint

Get-AzureVM -ServiceName Web01 | `

Set-AzureEndpoint -ACL $acl -Name "HTTP" -Protocol TCP -PublicPort 80 -LocalPort 80 | `

 Update-AzureVM

Let’s break down how this works. The first thing you may notice is that I am getting the AzureVM object and passing that through the pipeline:

Get-AzureVM -ServiceName Web01 | …

Why would I do this when there is a -VM parameter for Add-AzureEndpoint? Because older versions of the Windows Azure cmdlets don’t work when you store the VM object in a variable and then pass the variable to the parameter. This issue was fixed in the August 2013 release, but rather than tell everyone to upgrade, I wrote the post by using the most compatible code possible.

The second thing you may notice is that I used ServiceName and not the virtual machine name. When dealing with endpoints and ACLs this is the preferred way to do it because of the way load-balanced endpoints work. It’s a good habit to get into so I use this method exclusively. I’ll cover a bit more of the why in Part 2. The ServiceName is usually the same as the virtual machine name. The Cloud Services section of the management portal has the list of all the service names and the associated virtual machines and services on the Instances tab, in case you don’t know. You can also use Get-AzureVM to grab the list.

… Add-AzureEndpoint -Name "HTTP" -Protocol tcp -PublicPort 80 -LocalPort 80 | …

This cmdlet accepts the Windows Azure VM object from the pipeline and adds the endpoint to all the virtual machines in the service. If you only want the endpoint added to a single virtual machine in a service, add the -Name <vm name> parameter to the Get-AzureVM cmdlet.

We give this endpoint a name, protocol, public port, and local port. This is the minimum you need to define the endpoint. You can add the ACL when you create the endpoint, but for demonstration purposes, I will not do this.

… Update-AzureVM

Edits to endpoints, ACLs, and other Windows Azure VM objects do not get committed until you run Update-AzureVM. And that’s really all that needs to be said.

The next step is the ACL creation. The best way to do this is to create a blank ACL and then add rules. The blank ACL is created with this command:

$acl = New-AzureAclConfig

Then the rules are added to the ACL with Set-AzureAclConfig:

Set-AzureAclConfig -AddRule -ACL $acl -Order 0 -Action Permit -RemoteSubnet "1.2.3.4/32" -Description "Allow App01"

Set-AzureAclConfig -AddRule -ACL $acl -Order 1 -Action Permit -RemoteSubnet "172.16.0.0/29" -Description "Allow corp proxies"

Set-AzureAclConfig -AddRule -ACL $acl -Order 2 -Action Deny -RemoteSubnet "0.0.0.0/0" -Description "DenyAll"

ACL rules are processed in order, starting with rule 0. When there is a rule match, the remaining rules are ignored. So when IP 1.2.3.4 connects to this endpoint, only a single rule is processed, and access is permitted. When 5.6.7.8 tries to connect, all three rules are processed, and access is denied.
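To make the first-match behavior concrete, here is an illustrative Python model of the three rules above, using the standard ipaddress module. This is not how Windows Azure implements ACLs internally; it is just the evaluation logic described:

```python
import ipaddress

# The three rules from the script above, as (order, action, subnet).
rules = [
    (0, "Permit", "1.2.3.4/32"),     # Allow App01
    (1, "Permit", "172.16.0.0/29"),  # Allow corp proxies
    (2, "Deny",   "0.0.0.0/0"),      # DenyAll
]

def evaluate(source_ip, acl):
    """Apply rules in order; the first matching rule decides."""
    ip = ipaddress.ip_address(source_ip)
    for order, action, subnet in sorted(acl):
        if ip in ipaddress.ip_network(subnet):
            return action          # first match wins; processing stops
    return "Permit"                # no rule matched: endpoint stays open
```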

This is important to remember when planning your ACL rules. Where you put deny and permit rules can greatly affect endpoint access and performance. Speaking of performance, keep the number of rules to a minimum because large rule lists can hurt performance.

Windows Azure ACL rules need an order, an action, a subnet, and the ACL that the rule belongs to. The order number may appear optional, but it is not. When no order is provided, the Rule IDs are given incremental numbers, but the orders are all zero. When you subsequently try to add the ACL to the endpoint, you get a nasty red error back from Windows Azure. So add those order numbers.

The description is how the rule shows up in the Get-AzureAclConfig and Get-AzureEndpoint cmdlets. I recommend keeping those short.

Set-AzureAclConfig -AddRule -ACL $acl -Order 0 -Action Permit -RemoteSubnet "1.2.3.4/32" -Description "Allow App01"

The first rule contains a very interesting subnet mask: /32 (255.255.255.255). This is how you add a single IP address, for example, the IP address of App01:

Set-AzureAclConfig -AddRule -ACL $acl -Order 1 -Action Permit -RemoteSubnet "172.16.0.0/29" -Description "Allow corp proxies"

The second rule adds the organization’s proxy server subnet. Here, the /29 (255.255.255.248) subnet mask is used to show how a range of IPs are added:

Set-AzureAclConfig -AddRule -ACL $acl -Order 2 -Action Deny -RemoteSubnet "0.0.0.0/0" -Description "DenyAll"

Finally, we block everything else. 0.0.0.0/0 covers the entire IPv4 address range. Because ACL rules are processed in order, I do not need to worry about the proxies or App01 getting denied because the DenyAll rule will never be processed when those systems connect to the endpoint.

Get-AzureVM -ServiceName Web01 | `

Set-AzureEndpoint -ACL $acl -Name "HTTP" -Protocol TCP -PublicPort 80 -LocalPort 80 | `

 Update-AzureVM

The final step in our script is to add the ACL to the endpoint and commit the changes. Again, I get the Windows Azure virtual machine by service name, pass that to the pipeline where the Set-AzureEndpoint cmdlet applies the ACL to the endpoint, and the Update-AzureVM cmdlet commits the changes.

With these rules in place, no one can access my website unless they have VPN access, use my corporate proxy servers, or are signed in to App01 via RDP. That makes this a pretty secure endpoint.

That’s it for Part 1. Tune in tomorrow for Part 2, where I will discuss removing rules, changing rules, and load-balanced endpoints.

~James

Thank you, James! I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Parse Text Files


Summary: Quickly search files for text with Windows PowerShell.

Hey, Scripting Guy! Question How can I use Windows PowerShell to quickly search text files for a string?

Hey, Scripting Guy! Answer Use the Select-String cmdlet and supply a path and a pattern. In the following example, I search the current folder for the computer name dc13:

Select-String -Path *.txt -Pattern 'dc13'

Weekend Scripter: Creating ACLs for Windows Azure Endpoints—Part 2 of 2


Summary: Windows networking engineer, James Kehr, continues talking about how to use Windows Azure cmdlets to create endpoint ACLs.

Microsoft Scripting Guy, Ed Wilson, is here. Welcome back guest blogger, James Kehr, for Part 2 of 2…

If you missed Part 1 of this blog post, I recommend that you give that a read before digging in to Part 2:

Weekend Scripter: Creating ACLs for Windows Azure Endpoints—Part 1 of 2

I am not going to dive as deep into Windows Azure network security in Part 2, so this post may not make sense if you have not read Part 1. Just so you know…

Editing Windows Azure endpoint ACLs

I like to measure how dry a subject is by how dull the title is…

“Journey to the Center of the Earth.” That’s an exciting title, and a great book.

“Editing Azure Endpoint ACLs.” Now that makes me want to fall asleep. That title should be added to every dictionary in the universe as an example of both dry and dull. I really tried to think of a snappy title, but nothing came to mind. I will try to compensate with livelier text, but I don’t know how exciting I can make endpoint ACLs.

ACLs help make networks secure. If you don’t understand why ACLs do this, then stop here and go read Part 1. Seriously…go read it. Part 2 isn’t going to vanish in the ten minutes it takes to read Part 1. Of course, knowing my luck, that will happen, and I’ll get angry emails when it does. If this does happen, I apologize in advance. And I hope that the reason why is a short-lived technological blip and not the world coming to a sudden and abrupt end.

Back to the blog…

ACLs make networks more secure by restricting access to network resources. With Windows Azure, you can create ACLs on Internet-facing endpoints so your cloud stuff is safer stuff. The issue is that stuff changes. IP addresses change, access needs change, mistakes are made by people who don’t read the Hey, Scripting Guy! Blog, and a number of other things can happen. When stuff like this does happen, you need to update ACL rules. Fortunately for you, I happen to know how to do this. I’m even willing to share.

The first thing we need to do is get the current endpoint ACL rules and throw that object into a variable:

$acl = Get-AzureVM -ServiceName Web01 | Get-AzureAclConfig -EndpointName "HTTP"

You need to specify the exact EndpointName, or you could end up with the wrong rules or no rules at all. If you are wondering why I use ServiceName and not the virtual machine name…Part 1.

The next step is a little fuzzy because it changes with each scenario. To make this process easier to understand and write, I’ll continue with the example I used in Part 1. I have a server, Web01, hosting an Intranet site in the Windows Azure cloud. The port 80 endpoint in Windows Azure is secured by ACLs so only the app server and the corporate proxy servers can access the site.

Recently, around the time Part 2 was started, a second app server was added, and our imaginary devs opened a ticket because App2, IP 1.2.3.5, cannot access the website. It is now our task, as network security types, to grant App2 access.

The ACL rules, as shown by $acl output, look something like this:

RuleId   Order   Action   RemoteSubnet     Description
0        0       Permit   1.2.3.4/32       Allow App1
1        1       Permit   172.16.0.0/29    Allow corp proxies
2        2       Deny     0.0.0.0/0        DenyAll

To provide examples of how to remove and edit a rule, I will perform the update in three ways. Before I begin, you need to remember that ACL rule processing is stopped at the first rule match. If I were to simply add a rule for App2, it would show up as rule #3, the DenyAll rule #2 would apply first, and App2 would not have access.

Regardless of whether you are adding, removing, or editing a rule, all three rule options use the Set-AzureAclConfig cmdlet. The difference is in the rule parameter you call: AddRule, RemoveRule, or SetRule.

Each rule parameter has a different purpose and set of mandatory parameters, even though the root cmdlet is the same. AddRule was discussed in Part 1, and the other two will be explained in the three solutions to our example problem.

If you need to remove an endpoint while testing, this is the command you need to know:

Get-AzureVM -ServiceName <ServiceName> | Remove-AzureEndpoint <Endpoint name> | Update-AzureVM

Solution 1

The first solution provides an example of how to remove a rule. The full script looks like this:

# get existing ACL

$acl = Get-AzureVM -ServiceName Web01 | Get-AzureAclConfig -EndpointName "HTTP"

 

# remove DenyAll

Set-AzureAclConfig -RemoveRule -RuleId 2 -ACL $acl

 

# add permit App2 and DenyAll in correct order

Set-AzureAclConfig -AddRule -ACL $acl -Order 2 -Action Permit -RemoteSubnet "1.2.3.5/32" -Description "Allow App2"

Set-AzureAclConfig -AddRule -ACL $acl -Order 3 -Action Deny -RemoteSubnet "0.0.0.0/0" -Description "DenyAll"

 

# commit changes

Get-AzureVM -ServiceName Web01 | Set-AzureEndpoint -ACL $acl -Name "HTTP" -Protocol TCP -PublicPort 80 -LocalPort 80 | Update-AzureVM

The remove command is pretty simple. Tell the Set-AzureAclConfig -RemoveRule command where the ACL is stored and which rule number needs the axe:

Set-AzureAclConfig -RemoveRule -RuleId 2 -ACL $acl

Add the App2 rule, the DenyAll rule, commit changes, done. Not very hard, but not the most efficient way to do it.

Solution 2

This solution is more efficient by one whole line of code. You’re amazed, I know. This solution involves changing rule #2, and adding DenyAll at rule #3. The ACL update part of the Solution 1 script now looks like this:

# change rule #2 to permit App2

Set-AzureAclConfig -SetRule -ACL $acl -RuleId 2 -Order 2 -Action Permit -RemoteSubnet "1.2.3.5/32" -Description "Allow App2"

 

# add DenyAll

Set-AzureAclConfig -AddRule -ACL $acl -Order 3 -Action Deny -RemoteSubnet "0.0.0.0/0" -Description "DenyAll"

If you look at the -SetRule version of the cmdlet, you will notice that it looks almost exactly like the -AddRule version. The only difference besides the rule parameter is the addition of the RuleId. SetRule changes the rule that matches the RuleId and Order numbers to whatever values you put in Action, RemoteSubnet, and Description. All of these parameters are mandatory.

Solution 3

The final solution is a sneaky one and the most efficient way to make the change. Take a minute and see if you can figure it out…

Don’t worry about me, I can wait. I’ll give you a clue: Look at the IP addresses of App1 and App2.

When you’re ready, the answer is:

# update rule to permit both App servers

Set-AzureAclConfig -SetRule -ACL $acl -RuleId 0 -Order 0 -Action Permit -RemoteSubnet "1.2.3.4/31" -Description "App servers"

For those who didn’t get it, perhaps this will help. App1 has an IP address of 1.2.3.4, App2 uses 1.2.3.5. A /32 subnet is a single IP address, a /31 subnet is … 2 IP addresses.

The RemoteSubnet parameter is used to define the subnet, yes. But on this end of the conversation, it just means an IP address range, and it is not used to define the network boundaries. The ACL process does not care what your network, gateway, broadcast, or HSRP IP addresses are. It only cares whether the source IP address matches one of the rule IP masks. This means that 1.2.3.4/31 will work for both App1 and App2 without adding rules to the ACL or compromising the endpoint.

Gently give your monitor a high-five if you figured out that one.
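If you want to double-check the /31 arithmetic yourself, Python's standard ipaddress module can confirm it:

```python
import ipaddress

# A /31 mask leaves exactly one host bit, so the subnet covers
# exactly two addresses: App1 (1.2.3.4) and App2 (1.2.3.5).
net = ipaddress.ip_network("1.2.3.4/31")
hosts = [str(ip) for ip in net]
```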

Load-balanced endpoints

Our last topic is the original reason I wrote this blog post. Everything I have written so far is to help novices to Windows Azure and ACLs gain enough background knowledge to understand the load-balanced endpoint ACLs in Windows Azure.

Load-balanced endpoints allow a single public IP, called a VIP (virtual IP), to send network traffic to multiple private IP addresses. I won’t go into much detail on how load-balanced endpoints work because there are a number of resources that explain the topic. If you want to learn more, check out this topic in the Windows Azure documentation: Load Balancing Virtual Machines.

For the purposes of this post, I will extend our example with this scenario. The devs have recently added a second web server (Web02) and created an IIS web farm. This web farm will host a secure site by using port 443, which must remain up in the event of a server reboot and must be load balanced. Your job is to create the load-balanced endpoint and set up ACL rules identical to the Web01 port 80 ACL.

I sound like an MCSE exam question…

Which Windows Azure cmdlet should you use?

a)       Make-MagicHappen

b)      Add-AzureEndpoint

c)       Set-AzureLoadBalancedEndpoint

d)      Love-PowerShell

If you chose answer b), you are correct. If you answered c), you are half right. If you answered a), please share your magical cmdlet with the rest of the world. And if you answered d), The Scripting Guy will probably give you a passing grade on the non-certification exam anyway.

The Add-AzureEndpoint cmdlet is what creates the endpoint. It can be used to add the ACL at the same time, which makes it the most correct answer. The Set-AzureLoadBalancedEndpoint (New-Alias Set-ALBE Set-AzureLoadBalancedEndpoint) can add an ACL to a load-balanced endpoint, but only if the endpoint already exists. I will use these cmdlets in my script so you have an example of both.

# create new load balanced endpoint

Get-AzureVM -ServiceName Web01 | `

Add-AzureEndpoint -Name "HTTPS" -Protocol TCP -PublicPort 443 -LocalPort 443 -ProbePort 3443 -ProbeProtocol HTTP -ProbePath "/" -LBSetName "Web-HTTPS-LB" | `

Update-AzureVM

 

# create ACL

$acl = New-AzureAclConfig

Set-AzureAclConfig -AddRule -ACL $acl -Order 0 -Action Permit -RemoteSubnet "1.2.3.4/31" -Description "Allow App servers"

Set-AzureAclConfig -AddRule -ACL $acl -Order 1 -Action Permit -RemoteSubnet "172.16.0.0/29" -Description "Allow corp proxies"

Set-AzureAclConfig -AddRule -ACL $acl -Order 2 -Action Deny -RemoteSubnet "0.0.0.0/0" -Description "DenyAll"

 

# add ACL to the LB endpoint

Set-AzureLoadBalancedEndpoint -ServiceName Web01 -LBSetName "Web-HTTPS-LB" -ACL $acl -Protocol TCP -LocalPort 443 -ProbeProtocol HTTP -ProbePath "/" -ProbePort 3443

The Add-AzureEndpoint cmdlet is using new parameters that are required for creating load-balanced endpoints:

Add-AzureEndpoint -Name "HTTPS" -Protocol TCP -PublicPort 443 -LocalPort 443 -ProbePort 3443 -ProbeProtocol HTTP -ProbePath "/" -LBSetName "Web-HTTPS-LB"

ProbePort: This port is used to test whether the service is up. There is a gotcha with the probe port. Your probe port cannot be the same as the local port or you’ll get a nasty error message when you try to add an ACL.

ProbeProtocol: Your options are TCP or HTTP. When set to TCP, the probe only needs to make a simple TCP connection, and the service is considered up. When set to HTTP, you must add the ProbePath parameter to test the probe.

ProbePath: This is the path to your keep-alive page…basically. This is the HTTP path minus the domain/IP, where "/" is considered the site root. If you have an actual keep-alive page, the probe path would look something like "/keep-alive.html". The load-balanced probe will connect to this path on the private IP address and the probe port. If an HTTP 200 status code is returned, the probe is considered up; any other status code and the service is considered down. No redirectors are allowed for getting to the keep-alive page!

LBSetName: This is kind of self-explanatory. The endpoint Name is what the individual endpoints are called, the LBSetName is the name of the group of individual endpoints that make up the load-balanced endpoint.

If you don’t want to check a keep alive page, you can use the TCP option, which looks like this:

Add-AzureEndpoint -Name "HTTPS" -Protocol TCP -PublicPort 443 -LocalPort 443 -ProbePort 3443 -ProbeProtocol TCP -LBSetName "Web-HTTPS-LB"

No matter which probe protocol you use, you need to bind the probe port to your website, and then open a port on the server’s firewall. This should not be considered a security concern because the probe port is not exposed to the Internet. Only internal Windows Azure services and other servers and services in the same VNET can access the probe port.
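The probe semantics described above reduce to a small decision: a TCP probe is up whenever the connection succeeds, and an HTTP probe is up only on a 200 status. Here is a hedged Python sketch of that logic (a hypothetical helper, not part of the Windows Azure SDK):

```python
# Illustrative sketch of load-balancer probe semantics: an HTTP probe
# is "up" only on status 200; a TCP probe is "up" whenever the
# connection attempt itself succeeds.
def probe_is_up(protocol, connected, http_status=None):
    if not connected:
        return False
    if protocol == "TCP":
        return True
    if protocol == "HTTP":
        return http_status == 200   # redirects (3xx) count as down
    raise ValueError("probe protocol must be TCP or HTTP")
```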

This is also where using ServiceName in the Get-AzureVM cmdlet comes in handy. If you specify the virtual machine name, the endpoint is only applied to that single virtual machine. When you use ServiceName, it is applied to all of the virtual machines in the service. This allows you to apply the endpoint to all the load-balanced virtual machines at one time. Nifty, isn’t it?

Set-AzureLoadBalancedEndpoint -ServiceName Web01 -LBSetName "Web-HTTPS-LB" -ACL $acl -Protocol TCP -LocalPort 443 -ProbeProtocol HTTP -ProbePath "/" -ProbePort 3443

I skipped the ACL creation part because that has been discussed thoroughly. Set-ALBE is a handy cmdlet because you don’t have to use Get-AzureVM or Update-AzureVM. Run and done. This cmdlet updates the load-balanced endpoint, and as such, you need to include ALL of the endpoint information, including the ACL, each time you use it. In this example, it is used to add the ACL. If you prefer, you can do it in two steps: create the ACL first and pass it to the -ACL parameter of the Add-AzureEndpoint cmdlet. After the load-balanced endpoint is created, you can use Set-ALBE to make changes.

Windows Azure cmdlets are a great way to automate processes in the cloud. Endpoint ACLs are a great way to secure your cloud services and Windows PowerShell is a great tool to set those ACLs. I want to remind you to be careful when planning ACLs. Do not add a ton of rules, and put your frequently used services first to improve performance.

That’s all I have this time around. I hope you learned something new about Windows Azure and Windows PowerShell.

~James

Thanks James, for sharing your time and knowledge. I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Display Progress Bar with PowerShell


Summary: Learn how to display a progress bar by using Windows PowerShell.

Hey, Scripting Guy! Question How can I easily display a progress bar by using Windows PowerShell?

Hey, Scripting Guy! Answer Use the Write-Progress cmdlet:

for ($i = 1; $i -le 100; $i++) {Write-Progress -Activity 'counting' -Status "$i percent" -PercentComplete $i; sleep -Milliseconds 20}

Windows PowerShell 3.0 First Steps: Part 1


Summary: Microsoft Scripting Guy, Ed Wilson, shares a portion from his popular Microsoft Press book Windows PowerShell 3.0 First Steps.

Microsoft Scripting Guy, Ed Wilson, is here. Today I want to share with you a portion from my new book, Windows PowerShell 3.0 First Steps, which was recently released by Microsoft Press.

Understanding Windows PowerShell

Windows PowerShell comes in two flavors—the first is an interactive console (sort of like a KORN or a BASH console in the UNIX world) that is built into the Windows command prompt. The Windows PowerShell console makes it simple to type short commands and to receive sorted, filtered, formatted results. These results easily display to the console, but can redirect to XML, CSV, or text files. The Windows PowerShell console offers several advantages, such as speed, low memory overhead, and a comprehensive transcription service that records all commands and command output.
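For example, the transcription service mentioned above is driven by a pair of cmdlets. The log path shown here is just an illustration:

```powershell
Start-Transcript -Path C:\fso\ConsoleLog.txt -Append   # begin recording commands and output
Get-Service | Sort-Object Status                       # anything you run is captured in the log
Stop-Transcript                                        # close and flush the transcript file
```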

There is also the Windows PowerShell ISE. The Windows PowerShell ISE is an integrated scripting environment, but this does not mean you must use it to write scripts. In fact, many Windows PowerShell users like to write their scripts in the Windows PowerShell ISE to take advantage of the color syntax highlighting, drop-down lists, and automatic parameter revelation features. In addition, the Windows PowerShell ISE has a feature called the Show Command Add-On, which permits using a mouse to create Windows PowerShell commands from a graphical environment. After it is created, the command runs directly or is added to the script pane (the choice is up to you). For more information about using the Windows PowerShell ISE, see Chapter 10, Using the Windows PowerShell ISE.

Note  For simplicity, when working with single commands, I show the command and results from within the Windows PowerShell console. But keep in mind that all of the commands also run from within the Windows PowerShell ISE. Whether the command runs in the Windows PowerShell console, in the Windows PowerShell ISE, as a scheduled task, or as a filter for Group Policy, PowerShell is PowerShell is PowerShell. In its most basic form, a Windows PowerShell script is simply a collection of Windows PowerShell commands.

Working with Windows PowerShell

In Windows Server 2012 or Windows 8, Windows PowerShell 3.0 already exists. In Windows 8, you only need to type the first few letters of the word PowerShell on the Start screen before Windows PowerShell appears as an option. The following image illustrates this point. I only typed pow before the Start screen search box changed to offer Windows PowerShell as an option.

Image of screen

Because navigating to the Start screen and typing pow each time I want to launch Windows PowerShell is a bit cumbersome, I prefer to pin the Windows PowerShell console (and the Windows PowerShell ISE) to the Start page and to the Windows desktop taskbar. This technique of pinning shortcuts to the applications provides single-click access to Windows PowerShell from wherever I may be working.

Image of screen

In Windows Server 2012, it is not necessary to go through the “Start screen, then Search” routine because an icon for the Windows PowerShell console exists by default on the taskbar of the desktop.

Join me tomorrow when I will have another excerpt from my book, Windows PowerShell 3.0 First Steps.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell 3.0 to Resize Partitions


Summary: Use Windows PowerShell 3.0 in Windows Server 2012 or Windows 8 to resize partitions.

Hey, Scripting Guy! Question How can I easily resize partitions in Windows Server 2012 or Windows 8 by using Windows PowerShell 3.0?

Hey, Scripting Guy! Answer Microsoft PFE, Jason Walker, says, “Use the Get-PartitionSupportedSize and the Resize-Partition functions:"

$MaxSize = (Get-PartitionSupportedSize -DriveLetter c).sizeMax

Resize-Partition -DriveLetter c -Size $MaxSize


Windows PowerShell 3.0 First Steps: Part 2


Summary: Microsoft Scripting Guy, Ed Wilson, shares a portion from his popular Microsoft Press book Windows PowerShell 3.0 First Steps.

Microsoft Scripting Guy, Ed Wilson, is here. Today I want to share with you another portion from my new book, Windows PowerShell 3.0 First Steps, which was recently released by Microsoft Press.

To read the first part of this series, see: Windows PowerShell 3.0 First Steps: Part 1.

Understanding the basics of cmdlets

All Windows PowerShell cmdlets behave basically the same way. There are some idiosyncrasies between cmdlets from different vendors, or from teams at Microsoft, but in general, when you understand the way that Windows PowerShell cmdlets work, you can transfer the knowledge to other cmdlets, platforms, and applications.

To call a Windows PowerShell cmdlet, you type it on a line in the Windows PowerShell console. To modify the way the cmdlet retrieves or displays information, you supply options for parameters that modify the cmdlet. Many of these parameters are unique and apply only to certain cmdlets. However, some parameters are applicable to all Windows PowerShell cmdlets. In fact, these common parameters are part of the strength of the Windows PowerShell design. Called “common parameters,” the parameters that are supported by all Windows PowerShell cmdlets are listed in the next section.

Common Windows PowerShell parameters

All Windows PowerShell cmdlets support common parameters. Each of the common parameters also permits the use of an alias for the parameter. The aliases for each parameter appear in parentheses behind the parameter name in the following lists.

  • Verbose (vb)
  • Debug (db)
  • WarningAction (wa)
  • WarningVariable (wv)
  • ErrorAction (ea)
  • ErrorVariable (ev)
  • OutVariable (ov)
  • OutBuffer (ob)

If a Windows PowerShell cmdlet changes system state (such as stopping a process or changing the startup value of a service), the following two additional parameters become available:

  • WhatIf (wi)
  • Confirm (cf)
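For example, both risk-mitigation parameters can be tried safely against a running Notepad process: -WhatIf reports what would happen without doing it, and -Confirm prompts before each state change.

```powershell
Stop-Process -Name notepad -WhatIf     # reports the operation; no process actually stops
Stop-Process -Name notepad -Confirm    # prompts before stopping each Notepad process
```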

Using the Verbose parameter

As an example of using a Windows PowerShell common parameter, we can use the –Verbose parameter to obtain additional information about the action that a cmdlet performs. The following command stops all instances of the Notepad.exe process running on the local system, and there is no output from the command:

PS C:\> Stop-Process -Name notepad
PS C:\>

To see what processes stop in response to the Stop-Process cmdlet, use the –Verbose common parameter. In the following example, two separate Notepad.exe processes stop in response to the Stop-Process cmdlet. Because the cmdlet uses the –Verbose common parameter, detailed information about each process appears in the output.

PS C:\> Stop-Process -Name notepad -Verbose
VERBOSE: Performing operation "Stop-Process" on Target "notepad (5564)".
VERBOSE: Performing operation "Stop-Process" on Target "notepad (5924)".
PS C:\>

Using the ErrorAction parameter

When you use the Stop-Process cmdlet to stop a process, if there is not an instance of the specified process running, a nasty error message displays in the Windows PowerShell console. The same thing happens when you query for a process that is not running. In the following example, the Get-Process cmdlet attempts to retrieve a process named notepad.exe, but there are no instances of the notepad.exe process running. Therefore, an error message displays as follows:

PS C:\> Get-Process -Name notepad
Get-Process : Cannot find a process with the name "notepad". Verify the process
name and call the cmdlet again.
At line:1 char:1
+ Get-Process -Name notepad
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (notepad:String) [Get-Process], Proce
   ssCommandException
    + FullyQualifiedErrorId : NoProcessFoundForGivenName,Microsoft.PowerShell.Comma
   nds.GetProcessCommand

PS C:\>

If you know (or at least suspect), that a process is not running, but you would like to verify this, you can use the –ErrorAction common parameter. To hide error messages arising from the Get-Process cmdlet, supply a value of SilentlyContinue for the –ErrorAction parameter prior to running the cmdlet. This technique is shown here:

PS C:\> Get-Process -Name notepad -ErrorAction SilentlyContinue
PS C:\>

Note  The previous command appears to be really long, but keep in mind that Tab expansion makes this easy to type correctly. In fact, the previous command is:

Get-Pro<tab><space>-n<tab><space>notepad<space>-e<tab><space>s<tab>

You can use the parameter alias –EA instead of typing –ErrorAction (although with Tab expansion, it is exactly the same number of keystrokes to shorten the command):

–E<tab> or –EA

In addition, when you work with the Get-Process cmdlet, the default parameter set is Name. This means that the –Name parameter from Get-Process is the default parameter; and therefore, Get-Process interprets any string in the first position as the name of a process. The revised command is shown here:

PS C:\> Get-Process notepad -ea SilentlyContinue

PS C:\>

If you are not certain about valid values for the –ErrorAction parameter, you can supply anything to the parameter and then carefully read the resulting error message. In the text of the error message, the first two lines state that Windows PowerShell is unable to convert the value to the System.Management.Automation.ActionPreference type. The fourth line of the error message lists allowed values for the –ErrorAction parameter. The allowed values are SilentlyContinue, Stop, Continue, Inquire, and Ignore. This technique of forcing an error message is shown in the following image:

Image of error message
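If you want to try the technique yourself, deliberately supply a nonsense value (the value "foo" here is arbitrary) and read the resulting message:

```powershell
Get-Process -Name notepad -ErrorAction foo   # the error text lists the valid ActionPreference values
```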

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Find Current PowerShell Error Action Preference


Summary: Learn how to discover your current error action preference in Windows PowerShell.

Hey, Scripting Guy! Question How can I see the current value for my error action preference in Windows PowerShell?

Hey, Scripting Guy! Answer Look at the value of the $ErrorActionPreference variable:

PS C:\> $ErrorActionPreference

Continue

Windows PowerShell 3.0 First Steps: Part 3


Summary: Microsoft Scripting Guy, Ed Wilson, shares a portion from his popular Microsoft Press book Windows PowerShell 3.0 First Steps.

Microsoft Scripting Guy, Ed Wilson, is here. Today I want to share with you another portion from my new book, Windows PowerShell 3.0 First Steps, which was recently released by Microsoft Press.

To read the previous parts of this series, see:

Introduction to the pipeline

The Windows PowerShell pipeline takes the output from one command and sends it as input to another command. By using the pipeline, you are able to do things like find all computers in one specific location and restart them. There are two commands in this request:

  • Find all the computers in a specific location
  • Restart each of the computers

Passing the objects from one command to a new command makes Windows PowerShell easy to use inside the console because you do not have to stop and parse the output from the first command before taking action with a second command.
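As a hedged sketch of those two steps, something like the following would work. It assumes the ActiveDirectory module and an OU named Atlanta; both are illustrative, not part of the original example.

```powershell
# Find all computers in one location, then restart each of them.
Get-ADComputer -Filter * -SearchBase "OU=Atlanta,DC=contoso,DC=com" |
    Select-Object -ExpandProperty Name |
    Restart-Computer -Force
```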

Windows PowerShell passes objects down the pipeline. This is one way that Windows PowerShell becomes very efficient. It takes an object (or group of objects) from the results of running one command, and it passes those objects to the input of another command. By using the Windows PowerShell pipeline, it is not necessary to store the results of one command into a variable, and then call a method on that object to perform an action. For example, the following command disables all network adapters on my Windows 8 laptop.

Get-NetAdapter | Disable-NetAdapter

Note  Windows PowerShell honors the Windows security policy. Therefore, to disable a network adapter, Windows PowerShell must run with Admin credentials. For more information about starting Windows PowerShell with Admin credentials, refer to Chapter 1 in Windows PowerShell 3.0 First Steps.

In addition to disabling all network adapters, you can enable them. To do this, use the Get-NetAdapter cmdlet and pipe the results to the Enable-NetAdapter cmdlet as shown here:

Get-NetAdapter | Enable-NetAdapter

If you want to start all of the virtual machines in Windows 8 (or Windows Server 2012), use the Get-VM cmdlet, and pipe the resulting virtual machine objects to the Start-VM cmdlet:

Get-VM | Start-VM

To shut down all of the virtual machines, use the Get-VM cmdlet, and pipe the resulting virtual machine objects to the Stop-VM cmdlet:

Get-VM | Stop-VM

In each of the previous commands, an object (or group of objects) resulting from one command pipes to another cmdlet for further action.

Sorting output from a cmdlet

The Get-Process cmdlet generates a nice table view of process information in the Windows PowerShell console. The default view appears in ascending alphabetical process name order. This view is useful to help find specific process information, but it hides important details, such as which process uses the least or the most virtual memory.

To sort the output from the process table, pipe the results from the Get-Process cmdlet to the Sort-Object cmdlet, and supply the property on which to sort to the –Property parameter. The default sort order is ascending (that is, smallest values appear at the top of the list). The following command sorts the process output by the amount of virtual memory that is used by each process. The processes that consume the least amount of virtual memory appear at the top of the list.

Get-Process | Sort-Object -Property VM

If you are interested in which processes consume the most virtual memory, you may want to reverse the default sort order. To do this, use the –Descending switch parameter as shown here:

Get-Process | Sort-Object -Property VM –Descending

The command to produce the sorted list of processes for the virtual memory, and the associated output from the command are shown in the image that follows.

Image of command output

It is possible to shorten the length of Windows PowerShell commands that use the Sort-Object cmdlet. The Sort command is an alias for the Sort-Object cmdlet. A cmdlet alias is a shortened form of the cmdlet name that Windows PowerShell recognizes as a substitute for the complete cmdlet name. Some aliases are easily recognizable (such as Sort for Sort-Object or Select for Select-Object). Other aliases must be learned, such as ? for Where-Object (most Windows users expect ? to be an alias for the Get-Help cmdlet).
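You can confirm any of these mappings with the Get-Alias cmdlet:

```powershell
Get-Alias sort                       # shows that sort resolves to Sort-Object
Get-Alias -Definition Where-Object   # lists the aliases defined for Where-Object
```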

In addition to using an alias for the Sort-Object cmdlet name, the –Property parameter is the default parameter that the cmdlet utilizes. Therefore, it can be left out of the command. The following command uses the shortened syntax to produce a list of services by status:

Get-Service | sort status

It is possible to sort on more than one property. You need to be careful when doing this because at times sorting on additional properties does not produce a meaningful ordering. For services, a multiple sort makes sense because there are two broad categories of status: Running and Stopped. It makes sense to attempt to organize the output further to facilitate finding particular stopped or running services.

One way to facilitate finding services is to sort the DisplayName property of each service alphabetically. The script that follows sorts the service objects that are obtained via the Get-Service cmdlet by the status, and then by DisplayName from within the status. The output appears in descending fashion instead of the default ascending sorted listing.

Get-Service | sort status, displayname –Descending

The command to sort services by Status and DisplayName, and the output from the command, are shown in the following image.

Image of command output

Join me tomorrow when I will have another excerpt from my Windows PowerShell 3.0 First Steps book.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Sort Objects Based on a Particular Property


Summary: Learn how to use the Sort-Object Windows PowerShell cmdlet to sort on a specific property.

Hey, Scripting Guy! Question How can I sort a collection of Windows PowerShell objects without using the default property?

Hey, Scripting Guy! Answer Use the –Property parameter and specify the name of the property to sort on:

get-childitem | sort-object -property length 

Windows PowerShell 3.0 First Steps: Part 4


Summary: Microsoft Scripting Guy, Ed Wilson, shares a portion from his popular Microsoft Press book Windows PowerShell 3.0 First Steps.

Microsoft Scripting Guy, Ed Wilson, is here. Today I want to share with you another portion from my new book, Windows PowerShell 3.0 First Steps, which was recently released by Microsoft Press.

To read the previous parts of this series, see:

Creating a table

When you have between two and five properties and you are interested in viewing columns of data, the Format-Table cmdlet is the tool to use to organize your data. The typical use of Format-Table is to permit delving into specific information in a customizable fashion. For example, the Get-Process cmdlet returns a table with eight columns that contain essential process information. The Get-Process command and the resulting output are shown in the following image:

Image of command output

Choosing specific properties in a specific order

If the eight columns of default process information meet your needs, there is no need to think about using a formatting cmdlet. However, the Process object that is returned by the Get-Process cmdlet actually contains 51 properties and seven script properties. As a result, there is much more information available than only the eight default properties. To dive into this information requires using one of the Format cmdlets. From the perspective of the Get-Process cmdlet, there are six alias properties. Alias properties are great because they can shorten the amount of typing required. The Get-Process alias properties are shown in the output that follows:

13:40 C:\> get-process | get-member -MemberType alias*

   TypeName: System.Diagnostics.Process

Name    MemberType    Definition
----    ----------    ----------
Handles AliasProperty Handles = Handlecount
Name    AliasProperty Name = ProcessName
NPM     AliasProperty NPM = NonpagedSystemMemorySize
PM      AliasProperty PM = PagedMemorySize
VM      AliasProperty VM = VirtualMemorySize
WS      AliasProperty WS = WorkingSet

To use the Format-Table cmdlet, you pipe the results from one cmdlet to the Format-Table cmdlet and select the property names you want to display.

Note  The order in which the properties appear is the order in which they display in the table.

The following command displays process information from every process on the local system. The specified properties use the alias properties created for the Get-Process cmdlet. The output is in the order of Name, Handles, Virtual Memory Size, and the Working Set.

Get-Process | Format-Table -Property name, handles, vm, ws

The command to produce the formatted table of process information, and the output associated with the command, are shown in the following image:

Image of command output

Note  The Get-Process cmdlet has an alias of GPS, and the Format-Table cmdlet has an alias of FT. Therefore, the command to return a table of process information can be shortened to the following:

GPS | FT name, handles, vm, ws

Join me tomorrow when I will have another excerpt from my Windows PowerShell 3.0 First Steps book.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Customize Table Headings with PowerShell


Summary: Learn how to create a custom table heading by using Windows PowerShell.

Hey, Scripting Guy! Question How can I use Windows PowerShell to display a table if the default property name is confusing?

Hey, Scripting Guy! Answer Use a hash table to customize the table properties. The elements of the hash table are Label and Expression (or the aliases L and E).

In this example, I rename the ProcessName property to Name:

get-process | Format-Table @{L='name';E={$_.processname}}, id

Windows PowerShell 3.0 First Steps: Part 5


Summary: Microsoft Scripting Guy, Ed Wilson, shares a portion from his popular Microsoft Press book Windows PowerShell 3.0 First Steps.

Microsoft Scripting Guy, Ed Wilson, is here. Today I want to share with you another portion from my new book, Windows PowerShell 3.0 First Steps, which was recently released by Microsoft Press.

To read the previous parts of this series, see:

Storing data in text files

One of the easiest methods to store data is to store the data in a text file. In the following image, the output from the Get-Volume function displays in the Windows PowerShell console. The output formats nicely in columns, and it contains essential information about the volumes on a Windows 8 laptop.

Image of command output

Redirect and append

The easiest way to store volume information obtained from the Get-Volume function is to redirect the output to a text file. Because several lines of information return from the function, it is best to redirect and append the output. The redirect-and-append operator (>>) is two right-angle brackets, one behind the other, with no space between them.

The following script redirects and appends the information from the Get-Volume function to a text file that resides in the folder c:\fso. The file, VolumeInfo.txt, does not have to exist. If it does not exist, it will be created, and the information written to the file. If the file does exist, the outputted data will append to the file. The command is shown here:

Get-Volume >>c:\fso\volumeinfo.txt

When the command runs, nothing outputs to the Windows PowerShell console. The output, formatted as it appears in the Windows PowerShell console, writes to the target text file. The following image shows the volumeinfo.txt file that is created by redirecting and appending the results of the Get-Volume function from Windows 8.

Image of command output

If you run the code that redirects and appends the information from the Get-Volume function to the text file named volumeinfo.txt in the folder c:\fso a second time, the information from Get-Volume writes to the bottom of the previously created text file; that is, it appends to the file.

This is a great way to produce simple logging. The following image shows the volume information appearing twice. In both cases, the values are identical. This shows that between the first time the Get-Volume command ran and the second time the Get-Volume ran, nothing changed. 

Image of command output

This concludes my Windows PowerShell 3.0 First Steps book preview. Join me tomorrow when I will have a great post about remoting the cloud, written by Microsoft senior technical evangelist, Keith Mayer.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 


PowerTip: Control Text-File Encoding with PowerShell


Summary: Learn how to control text-file encoding with Windows PowerShell.

Hey, Scripting Guy! Question How can I ensure that a file is encoded with Unicode when I write to it?

Hey, Scripting Guy! Answer Use the Out-File cmdlet, and specify the appropriate value for the –Encoding parameter:

Get-Process | Out-File -FilePath c:\fso\myfile.txt -Encoding unicode

Weekend Scripter: Remoting the Cloud with Windows Azure and PowerShell


Summary: Microsoft senior technical evangelist, Keith Mayer, talks about remoting the cloud with Windows Azure and Windows PowerShell.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have guest blogger, Keith Mayer.

Keith Mayer is a senior technical evangelist at Microsoft focused on Windows infrastructure, data center virtualization, systems management, and the private cloud. Keith has over 20 years of experience as a technical leader of complex IT projects in diverse roles, such as network engineer, IT manager, technical instructor, and consultant. He has consulted and trained thousands of IT professionals worldwide regarding the design and implementation of enterprise technology solutions. You can find Keith online at http://KeithMayer.com.

Photo of Keith Mayer

Windows Azure Infrastructure Services provides the ability to easily provision or migrate storage, virtual machines and virtual networks onto the global Windows Azure cloud platform by using a cost-effective Pay-As-You-Go model.

In my prior Weekend Scripter post, Getting Started with Windows Azure and PowerShell, I provided an introduction to Windows Azure, and we stepped through an initial set of Windows PowerShell snippets for connecting to the cloud and provisioning new resources.

Image of promo

In this post, we’ll continue our journey into the cloud by leveraging Windows PowerShell remoting to configure the operating system and applications running inside our new Windows Azure virtual machine. We’ll step through the following tasks:

  • Installing a management certificate
  • Establishing a remote Windows PowerShell session to a virtual machine
  • Invoking remote Windows PowerShell script blocks to a virtual machine

Note To learn more about the basics of Windows Azure Infrastructure Services, you may also be interested in our following step-by-step guides. Both are free online study resources that provide hands-on lab exercises for leveraging Windows Azure and building key IT pro cloud scenarios.

Installing a management certificate

When we provisioned our new virtual machine in the prior post, Windows Azure automatically created two default firewall endpoints that allow selective inbound network traffic from the Internet to manage it:

  • Remote Desktop Services
  • Windows PowerShell

You can view these default endpoints, and optionally define additional firewall endpoints, by signing in to the Windows Azure Management Portal and navigating to the Endpoints property page of a provisioned virtual machine.

Image of menu

Endpoints Property page of a Windows Azure virtual machine

The Windows PowerShell endpoints permit inbound Windows PowerShell remoting connections to our virtual machine from the public Internet, but these connections require authentication to maintain security. In this case, Windows PowerShell remoting uses certificates to authenticate remote connections.

When our virtual machine was provisioned, a new management certificate was also created in Windows Azure for authenticating this connection. We can see the certificate information associated with our virtual machine by using the following Windows PowerShell script:

$myService = "pslabvm01"

Get-AzureCertificate –ServiceName $myService

After running this script, you’ll see results similar to what I’ve included here:

Image of command output

Get-AzureCertificate cmdlet output

To authenticate to a remote Windows PowerShell session by using this certificate, we’ll need to first install this certificate on our local computer. To simplify the process of downloading and installing this certificate, Michael Walsham has created a Windows PowerShell script that creates a function to perform these steps.

Download this script to continue with the process:

After downloading this script, we’ll run the script to create a new function that we’ll then use to download and install the necessary management certificate for authenticating to a remote Windows PowerShell session.

Tip! To successfully run the next set of commands, confirm that you’ve launched Windows PowerShell by using the Run As Administrator option. In addition, because the referenced script was downloaded from the Internet, you may find that you need to adjust your Windows PowerShell execution policy to permit it to run locally. If needed, you can temporarily change your execution policy to Unrestricted by running Set-ExecutionPolicy Unrestricted.

. .\InstallWinRMCertAzureVM.ps1

$myService = "pslabvm01"

$myVM = "pslabvm01"

InstallWinRMCertificateForVM -CloudServiceName $myService -Name $myVM

Tip! If you’ve forgotten the names that are assigned to your virtual machine and cloud service, you can use the Get-AzureVM cmdlet to retrieve these names.

If these command lines are successful, you’ll receive a message similar to the following, and then be returned to the Windows PowerShell command prompt:

Image of command output

Establish a remote PowerShell session to a virtual machine

Now that we’ve installed the management certificate needed to authenticate remote Windows PowerShell sessions, we’re ready to test establishing a connection to a virtual machine.

First, we’ll need to know the connection path to establish the remote session. We can identify the appropriate connection path using the Get-AzureWinRMUri cmdlet and store it in a variable for later use:

$uri = Get-AzureWinRMUri –Service $myService –Name $myVM

Next, we’ll need to specify the user name and password credentials for authenticating as the local Windows administrator account to the operating system running inside the virtual machine. We can use the Get-Credential cmdlet to prompt us for this information and store it in another variable for later use:

$cred = Get-Credential

We know where we’re connecting (stored in the $uri variable), and we know the credentials we’re using to authenticate to Windows (stored in the $cred variable), so now we can test the process for connecting to a remote Windows PowerShell session in the virtual machine. We’ll use the Enter-PSSession cmdlet to connect with an interactive remote Windows PowerShell session to test this process:

Enter-PSSession –ConnectionUri $uri –Credential $cred

If all is successful, after a few moments you’ll see a new remote Windows PowerShell command prompt session that is connected to the virtual machine:

Image of command output

From this remote Windows PowerShell session, you can interactively run remote Windows PowerShell script blocks. When you are finished, you can run the Exit command to return to your local Windows PowerShell session:

Exit

Invoke Remote PowerShell script blocks to a virtual machine

We’ve successfully tested the process for establishing Windows PowerShell remoting connections to a virtual machine. To leverage this new remote management capability for configuring the operating system and applications inside the virtual machine, I typically use the Invoke-Command Windows PowerShell cmdlet. Invoke-Command permits us to execute a Windows PowerShell script block non-interactively, which is useful when running configuration commands from within a larger script.

To use Invoke-Command, we’ll use the following syntax:

Invoke-Command -ConnectionUri $uri -Credential $cred -ScriptBlock { script block to execute remotely }

For example, to automate the install of the Web Server (IIS) role inside a virtual machine, we could use the following command:

Invoke-Command -ConnectionUri $uri -Credential $cred -ScriptBlock {Add-WindowsFeature Web-Server}

To confirm that the Web Server (IIS) role was successfully installed, we can use a similar command that remotely invokes the Get-WindowsFeature cmdlet:

Invoke-Command -ConnectionUri $uri -Credential $cred -ScriptBlock {Get-WindowsFeature}

Of course, the script blocks could be much more complex, if needed, to install and configure several roles or applications that are required by the virtual machine. In future posts, we’ll be leveraging this base knowledge to automate the provisioning of complete cloud scenarios.
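As a sketch of that idea, the following hypothetical script block installs the Web Server (IIS) role together with its management tools and drops a placeholder home page. The file path and page content are illustrative assumptions, not part of the original walkthrough:

```powershell
# Install IIS plus its management tools, then write a simple default page.
# Everything in the script block runs inside the virtual machine.
Invoke-Command -ConnectionUri $uri -Credential $cred -ScriptBlock {
    Add-WindowsFeature Web-Server -IncludeManagementTools
    Set-Content -Path 'C:\inetpub\wwwroot\Default.htm' `
        -Value '<h1>Provisioned by Windows PowerShell remoting</h1>'
}
```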

Congratulations! But keep learning!

You’ve completed the process for configuring and using Windows PowerShell remoting with a cloud-based virtual machine on Windows Azure Infrastructure Services! Now that you’ve walked through the basic steps involved in using Windows PowerShell remoting with Windows Azure Infrastructure Services, leverage these additional resources to continue your learning:

~Keith

Thank you, Keith.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

PowerTip: Use PowerShell to Find Dependent Services


Summary: Use Windows PowerShell to find services that depend on each other.

Hey, Scripting Guy! Question How can I use Windows PowerShell to easily find dependent services?

Hey, Scripting Guy! Answer Use the Get-Service cmdlet and specify the service name and the DependentServices parameter:

Get-Service -Name server -DependentServices

Remoting the Implicit Way


Summary: Guest blogger, June Blender, talks about how to use Windows PowerShell implicit remoting.

Today we welcome June Blender, senior programming writer for Windows Azure Active Directory. Take it away, June...

Just about everyone knows how to run Windows PowerShell commands on a remote computer. You can use WMI or Windows PowerShell remoting, and the commands are very similar.

Use the ComputerName parameter of a cmdlet to run commands in a temporary session:

PS C:\> Get-Process -ComputerName Server01

Or, use the Invoke-Command cmdlet to run the command in a temporary session:

PS C:\> Invoke-Command -ComputerName Server01 {Get-ScheduledJob}

Or, create a session on the remote computer, and then use the Invoke-Command cmdlet to run commands in the session:

PS C:\> $s = New-PSSession -ComputerName Server01

PS C:\> Invoke-Command -Session $s {Get-ScheduledJob}

Or, use the Enter-PSSession cmdlet to start an interactive session and then run the commands in the interactive session:

PS C:\> $s = New-PSSession -ComputerName Server01

PS C:\> Enter-PSSession -Session $s

[SERVER01]: PS C:\> Get-ScheduledJob

In every case, you sit at one computer and run commands on another computer. The commands get information from the remote computer and return the results to the local computer.

But there's another way to run commands on a remote computer. Look at this command sequence:

PS C:\> $s = New-PSSession -ComputerName Server01

PS C:\> Import-Module -PSSession $s -Name PSScheduledJob

PS C:\> Get-ScheduledJob

The first command creates a session on the Server01 computer. The second command imports a module from the remote session into the local session. That's nice.

But the third command simply runs a cmdlet from the imported module. There's nothing obviously remote in that command. There's no Session or PSSession parameter. There's no reference to the session in the $s variable. We haven't used the Enter-PSSession cmdlet to create an interactive session. It's simply a local command.

Wrong!

But don't feel bad. The remoting in this command is not obvious. It happens behind the scenes—implicitly. I'll show you what's going on.

Import a local module

To understand, let's start with a standard Import-Module command. The following command imports a module from the hard drive of the local computer:

PS C:\> Import-Module PSWorkflow

The Import-Module cmdlet finds the module on the local hard drive. It runs the scripts and functions in the module in the local session and adds the cmdlets, providers, workflows, CIM commands, and snippets in the module to the session.

Image of flow diagram

In Windows PowerShell 3.0, you don't need to run Import-Module commands. The modules are imported automatically when you use a command in the module, but the automatic process works just like the Import-Module cmdlet.

When the Import-Module command completes, the commands in the modules are loaded into your session. When you run a command in the module, it runs on your local computer unless you explicitly run it remotely—by using the ComputerName parameter of a command, or by using the Invoke-Command cmdlet.

Import a remote module (implicit remoting)

Unlike the previous scenario, importing a module from a remote computer does not add the commands in the module to your local session. Instead, what it adds to your session are proxy commands. The proxy commands are functions that look like local cmdlets in the session.

When you run a proxy command, instead of running the command on the local computer, the proxy runs the real command in a session on the remote computer and returns the results to the local session.

Image of flow diagram

There are some subtle differences between a locally imported module and a remotely imported module. If you run Get-Module, you might notice that PSScheduledJob is imported as a script module. If you import it locally, it's a binary module.

PS C:\> $s = New-PSSession -ComputerName Server01

PS C:\> Import-Module -PSSession $s PSScheduledJob

PS C:\> Get-Module

 

ModuleType Name                ExportedCommands

---------- ----                ----------------

Manifest  Microsoft.PowerShell.Management   {Add-Computer, Add-Content, Checkpoint-Computer, Clear-Content...}

Manifest  Microsoft.PowerShell.Utility    {Add-Member, Add-Type, Clear-Variable, Compare-Object...}

Script   PSScheduledJob           {Add-JobTrigger, Disable-JobTrigger, Disable-ScheduledJob, Enable-Job...

The proxy commands look like the real commands, but they're functions, not cmdlets.

PS C:\> Get-Command -Module PSScheduledJob

 

CommandType   Name                        ModuleName

-----------   ----                        ----------

Function    Add-JobTrigger                   PSScheduledJob

Function    Disable-JobTrigger                 PSScheduledJob

Function    Disable-ScheduledJob                PSScheduledJob

Function    Enable-JobTrigger                 PSScheduledJob

Function    Enable-ScheduledJob                PSScheduledJob

Function    Get-JobTrigger                   PSScheduledJob

Function    Get-ScheduledJob                  PSScheduledJob

Function    Get-ScheduledJobOption               PSScheduledJob

Function    New-JobTrigger                   PSScheduledJob

Function    New-ScheduledJobOption               PSScheduledJob

Function    Register-ScheduledJob               PSScheduledJob

Function    Remove-JobTrigger                 PSScheduledJob

Function    Set-JobTrigger                   PSScheduledJob

Function    Set-ScheduledJob                  PSScheduledJob

Function    Set-ScheduledJobOption               PSScheduledJob

Function    Unregister-ScheduledJob              PSScheduledJob

To see the commands in any of the proxy functions, get the value of the Definition property of the function.

Following is an excerpt of the script in the Definition property of the Get-ScheduledJob proxy. You can see that it's running an Invoke-Command command, hiding the ComputerName property that is added to all remote commands, and adding the parameters and parameter values that you use to call the proxy. It's also using comment-based Help to get the Help topics from the remote session. (For more information, see about_Comment_Based_Help.)

PS C:\> (Get-Command Get-ScheduledJob).Definition

<snip/>

      $scriptCmd = { & $script:InvokeCommand `

              @clientSideParameters `

              -HideComputerName `

              -Session (Get-PSImplicitRemotingSession -CommandName 'Get-ScheduledJob') `

              -Arg ('Get-ScheduledJob', $PSBoundParameters, $positionalArguments) `

              -Script { param($name, $boundParams, $unboundParams) & $name @boundParams @unboundParams }`

             }

 

<snip/>

 

  # .ForwardHelpTargetName Get-ScheduledJob

  # .ForwardHelpCategory Cmdlet

  # .RemoteHelpRunspace PSSession

Implicit remoting in custom sessions

In practice, this "under the covers" remoting is precisely what you want. For example, if you have an Exchange server with the Exchange modules, or any type of specialized or dedicated computer with the modules for that feature, you want the commands to run on the server and get data from the server. You want to cordon off these features from other computers with other purposes. And it's a great convenience to be able to run the commands from your local administrator computer simply by importing the modules. It saves you the hassle of creating and managing sessions.

There's a security benefit, too. For example, if you're interested in a cool new Windows PowerShell module from CodePlex or GitHub, you might download it to your test computer, and then import it into your current session. You can test it from your local computer, but it runs on your test computer.

To make it even easier, you can use session configurations to create sessions that contain particular modules, and then direct users to connect to those sessions and import the modules from them. This is the strategy that the Exchange shell uses very effectively.
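A minimal sketch of that pattern might look like the following. Assume an administrator has already registered a constrained session configuration named MaintenanceShell on Server01; the configuration name is an assumption for illustration:

```powershell
# On Server01 (one time): expose a custom session configuration.
# Register-PSSessionConfiguration -Name MaintenanceShell

# On the administrator workstation: connect to that configuration
# and import its commands as local-looking proxy functions.
$s = New-PSSession -ComputerName Server01 -ConfigurationName MaintenanceShell
Import-Module -PSSession $s -Name PSScheduledJob
Get-ScheduledJob    # runs on Server01 through the proxy function
```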

The "gotcha" of implicit remoting

To be very clear, you really should not use Import-Module to run commands remotely. It works best as designed—that is, when the module and the data that the module gets are both on a remote computer.

When you try to use it for general remoting, you'll run into a few easily foreseen issues. If you have imported a module from a computer, you need to remember that the commands run on the remote computer. If you run Get commands, they will get data from the remote computer. If you run Set commands, they change data on the remote computer. The commands look and feel local, but they're remote commands.

PS C:\> Import-Module -PSSession $s -Name PSScheduledJob

PS C:\> Get-ScheduledJob

 

Id     Name      JobTriggers   Command                 Enabled

--     ----      -----------   -------                 -------

1     Update-Help   1        Update-Help               True

If you don't have an Update-Help scheduled job on the local computer, or if you have many scheduled jobs that are not returned, the result might surprise you.

Another potential "gotcha" is shadowing. If you try to import a module from a remote computer when the commands of that module are already in your session, the command fails. The result looks like a warning, but the remote module is not imported into the local session.

PS C:\> Import-Module -PSSession $s -Name Microsoft.PowerShell.Utility

WARNING: The 'Microsoft.PowerShell.Utility' module was not imported because the 'Microsoft.PowerShell.Utility' snap-in was already imported.

And the Force parameter will not help you.

PS C:\> Import-Module -PSSession $s -Name Microsoft.PowerShell.Utility -Force

WARNING: The 'Microsoft.PowerShell.Utility' module was not imported because the 'Microsoft.PowerShell.Utility' snap-in was already imported.

You can import a module from a remote computer if the module is installed, but not imported into the session. For example, if the local computer has the PSScheduledJob module, but it's not in the session, you can import it remotely. When you run commands in the module, they run on the remote computer.

PS C:\> Import-Module -PSSession $s -Name PSScheduledJob

PS C:\> Get-ScheduledJob

 

Id     Name      JobTriggers   Command                 Enabled

--     ----      -----------   -------                 -------

1     Update-Help   1        Update-Help               True

You can still import the local version of the same module into your session. The command doesn't fail, because the cmdlets in the local module don't override the functions from the remote version of the module.

PS C:\> Import-Module PSScheduledJob

PS C:\>

Now, you have two modules with the same name—one remote script module and one local binary module.

PS C:\ps-test> Get-Module PSScheduledJob

 

ModuleType Name                ExportedCommands

---------- ----                ----------------

Binary   PSScheduledJob           {Add-JobTrigger, Disable-JobTrigger, Disable-ScheduledJob, Enable-Job...

Script   PSScheduledJob           {Add-JobTrigger, Disable-JobTrigger, Disable-ScheduledJob, Enable-Job...

And, for each command in the module, you have a cmdlet and a (proxy command) function.

PS C:\ps-test> Get-Command Get-ScheduledJob -Module PSScheduledJob

 

CommandType   Name                        ModuleName

-----------   ----                        ----------

Function    Get-ScheduledJob                  PSScheduledJob

Cmdlet     Get-ScheduledJob                  PSScheduledJob

Because functions take precedence over cmdlets in Windows PowerShell, if you run the command, the proxy command function from the remote module runs. (For more information, see about_Command_Precedence.)

You can test this premise by running Get-Command.

PS C:\ps-test> Get-Command Get-ScheduledJob

 

CommandType   Name                        ModuleName

-----------   ----                        ----------

Function    Get-ScheduledJob                  PSScheduledJob

You can run a command from the local module. The module-qualified name of the cmdlet does not help, because the modules have the same name. But you can use the command type to distinguish the commands.

PS C:\> &(Get-Command Get-ScheduledJob -Module PSScheduledJob -CommandType Cmdlet)

However, this trick does not work when the module exports a function.

Using implicit remoting to manage non-Windows computers

I've saved the best part for last. In Windows PowerShell 3.0, you can create a CIM session on a computer that does not have Windows PowerShell or does not have Windows PowerShell remoting enabled. You can even create a CIM session on a computer that is not running Windows if it is standards-based and WMI-compatible.

After you have a CIM session, you can use the CIMSession parameter of Import-Module to import CIM modules from the remote computer to the local computer. When you run the commands from the module in the CIM session, it gets and sets data on the remote computer.

This is really a different topic, but you can see the potential.
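As a brief sketch (assuming the remote computer exposes the standard WS-Man endpoint and ships the NetAdapter CIM module), the pattern looks like this:

```powershell
# Create a CIM session over WS-Man; New-CimSessionOption can switch the
# protocol for down-level or non-Windows, standards-based devices.
$cim = New-CimSession -ComputerName Server01

# Import a CIM-based module from the remote computer, then run its
# commands locally; they get and set data on the remote computer.
Import-Module -CimSession $cim -Name NetAdapter
Get-NetAdapter
```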

~June

Thank you, June!

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

PowerTip: Use PowerShell to Run a Command on a Remote Server


Summary: Learn how to use Windows PowerShell to run a command on a remote server.

Hey, Scripting Guy! Question How can I run a command on a remote server by using Windows PowerShell Remoting?

Hey, Scripting Guy! Answer Use the Invoke-Command cmdlet, specify the computer name, and place the command in a script block:

Invoke-Command -ComputerName server1 -ScriptBlock {hostname}
