
BATCHman Writes a PowerShell Script to Automate Handle


Summary: Windows PowerShell superhero BATCHman writes a script to automate the Sysinternals Handle tool.

Microsoft Scripting Guy Ed Wilson here. Today, we continue the BATCHman series as the titular hero battles Tapeworm.

 

 

When the digital crash
In a blink and a splash
A gleam in the night
To make all wrongs right
Our heroes fly out
And there is no doubt
That evil will fall true
At the sight of the blue

The one and only .NET Duo—BATCHman and Cmdlet!

 

When last we saw our heroes, Tapeworm had taken control of the files in the Redmond Pocket Protector Recycling Plant, paralyzing the backup system. BATCHman and Cmdlet were working on a solution to automate Handle.exe. They were close to a solution as Cmdlet had discovered the output of Handle.exe was an object that could be manipulated in Windows PowerShell, an object that Select-String could work with.

No sooner had they closed in on a solution than the alarm system on the WinMobile began blaring.

Tearing down the stairs and bursting through the front door were BATCHman and Cmdlet. The sight that met their eyes was too much.

The WinMobile was getting a parking ticket. Cmdlet looked up.

“I thought you were taking your 40-speed carbon fiber, midnight blue special to the crime scene. ‘We’re a Green company,’ you said.”

BATCHman coughed. “Well, I did. From the top of the stairs to the WinMobile. I do after all have to arrive in style! I am…BATCHMAN!”

Cmdlet snickered. “Style apparently didn’t include the two dollars in quarters for the parking meter.”

BATCHman quickly dumped his change into the meter and collected his “prize,” a $140 ticket for an unpaid meter and for taking up two parking spots. “Must look into running compression on the WinMobile,” he mused.

Running back up the stairs, they headed back to the task: automating Handle.exe.

“So, Cmdlet, let’s see what we have now. With the following sequence in Windows PowerShell, we can grab the output from Handle.exe when searching for open DOCX files.”

$ScreenOutput=.\HANDLE.EXE DOCX

“And then with the following Select-String statements, identify the parts of the output that contain the text with ProcessID and FileHandle.”

$ProcessIDResults=$ScreenOutput | SELECT-STRING -pattern 'pid: [\w]*'
$FileHandleResults=$ScreenOutput | SELECT-STRING -pattern 'File [s\S]*?:'

“Looking at that Select-String output, we found that each match also gives us an index and a length to work with.”

Cmdlet thought. “So what we need now is to find a way to pull the information out of these matches. Maybe we should look at what data we now have. We know where pid: and type: File exist because of the properties in Matches.”

BATCHman could see the little hamster running in the wheel in Cmdlet’s brain. “Continue…”

“So we can use substring() and pull out based upon the index and length of that content’s position in the string.” Cmdlet quickly typed a line in Windows PowerShell.

$ProcessIDResults[0].tostring().substring(19,9)

They looked at the screen as a result appeared.

pid: 8520

“Cmdlet, excellent job. Now we need to do this via the properties instead, and for both values.” BATCHman quickly took over.

$ProcessIDIndex=$ProcessIDResults[0].matches[0].Index
$ProcessIDLength=$ProcessIDResults[0].matches[0].Length

$ProcessIDResults[0].tostring().substring($ProcessIDIndex,$ProcessIDLength)

$FileHandleIndex=$FileHandleResults[0].matches[0].Index
$FileHandleLength=$FileHandleResults[0].matches[0].Length

$FileHandleResults[0].tostring().substring($FileHandleIndex,$FileHandleLength)

BATCHman looked at the output on the screen.

pid: 8520

File 164:

“Now we need to clean this up,” mumbled BATCHman as he scratched his chin. “We’ll need to skip the first four characters of pid: by adjusting the starting point and the length of the substring accordingly. The same trick skips File, because it is also four characters long. But for the FileHandle, we’ll drop one extra character off the length to lose that colon on the end.”

$ProcessIDResults[0].tostring().substring($ProcessIDIndex+4,$ProcessIDLength-4)
$FileHandleResults[0].tostring().substring($FileHandleIndex+4,$FileHandleLength-5)

8520

164

“Now we’ve just got to store it in a variable and tack on a trim() method to remove any extraneous leading or trailing spaces.”

$ProcessID=$ProcessIDResults[0].tostring().substring($ProcessIDIndex+4,$ProcessIDLength-4).trim()
$FileHandle=$FileHandleResults[0].tostring().substring($FileHandleIndex+4,$FileHandleLength-5).trim()
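(As an aside, and not the route our heroes take here: if you put a capture group in each pattern, Select-String hands you the captured text directly through the Groups property of the match, which avoids the substring arithmetic altogether. The patterns below assume the same Handle.exe output seen above, such as “pid: 8520” and “File 164:”.)

$ProcessIDResults=$ScreenOutput | SELECT-STRING -pattern 'pid: (\w+)'
$FileHandleResults=$ScreenOutput | SELECT-STRING -pattern 'File\s+(\w+):'
$ProcessID=$ProcessIDResults[0].matches[0].Groups[1].Value
$FileHandle=$FileHandleResults[0].matches[0].Groups[1].Value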

Cmdlet looked and realized that at this point they could automate the application. “So, BATCHman, we could simply step through the results with a loop and call up Handle.exe!”

BATCHman quickly typed a line to verify this would work on the DOCX files.

$TotalResults=$ProcessIdResults.count

For ($Counter=0; $Counter -lt $TotalResults; $Counter++)

{

$ProcessIDIndex=$ProcessIDResults[$Counter].matches[0].Index
$ProcessIDLength=$ProcessIDResults[$Counter].matches[0].Length

$FileHandleIndex=$FileHandleResults[$Counter].matches[0].Index
$FileHandleLength=$FileHandleResults[$Counter].matches[0].Length

$ProcessID=$ProcessIDResults[$Counter].tostring().substring($ProcessIDIndex+4,$ProcessIDLength-4).trim()
$FileHandle=$FileHandleResults[$Counter].tostring().substring($FileHandleIndex+4,$FileHandleLength-5).trim()

(& '.\Handle Program\handle.exe' -c $FileHandle -p $ProcessID -y)

}

Smiling, BATCHman watched as files closed on the screen. He pressed Ctrl+C to stop the process.

“Now, Cmdlet, all we need to do to get this automated and running is turn this into a function so that Jane can run her backups properly and these old pocket protectors can get back to being recycled.”

BATCHman quickly rewrote it into a single Windows PowerShell script with a function to specify types of files to close that Jane could run automatically on the file server.

function global:CLOSE-HANDLE ($name) {
$ScreenOutput=(& '.\Handle Program\handle.exe' $name)
$ProcessIDResults=$ScreenOutput | SELECT-STRING -pattern 'pid: [\w]*'
$FileHandleResults=$ScreenOutput | SELECT-STRING -pattern 'File [s\S]*?:'

$TotalResults=$ProcessIdResults.count

For ($Counter=0; $Counter -lt $TotalResults; $Counter++)

{

$ProcessIDIndex=$ProcessIDResults[$Counter].matches[0].Index
$ProcessIDLength=$ProcessIDResults[$Counter].matches[0].Length

$FileHandleIndex=$FileHandleResults[$Counter].matches[0].Index
$FileHandleLength=$FileHandleResults[$Counter].matches[0].Length

$ProcessID=$ProcessIDResults[$Counter].tostring().substring($ProcessIDIndex+4,$ProcessIDLength-4).trim()
$FileHandle=$FileHandleResults[$Counter].tostring().substring($FileHandleIndex+4,$FileHandleLength-5).trim()

(& '.\Handle Program\handle.exe' -c $FileHandle -p $ProcessID -y)

}
}

CLOSE-HANDLE DOCX
CLOSE-HANDLE XLSX
CLOSE-HANDLE ACCDB
CLOSE-HANDLE PPTX

BATCHman handed the script to Jane and scheduled it to run every 10 minutes to thwart Tapeworm’s efforts. In moments, there was a shriek from the basement! “AIAGHIAGHI!!!! Curse you! Curse you!”

“Jane, quickly, let’s trigger that backup now!” commanded BATCHman.

While the backup ran, BATCHman and Cmdlet followed the shrieking sounds of Tapeworm to his hideaway, a forgotten IT storage room nicknamed the Pit of Eternal Sorrow.

“A-ha! We have you now Tapeworm! What have you got to say for yourself?”

“Bah! Curses! I would have gotten away with it if it hadn’t been for you meddling kids and Windows PowerShell! BAAAAAHAHH!!!” he cursed as BATCHman hauled him off. Scooby Doo was nowhere to be found.

 

I want to thank Sean for another exciting episode of BATCHman. Join us tomorrow for the exciting conclusion to the BATCHman series.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 


Restore NTFS Security Permissions by Using PowerShell


Summary: Superhero BATCHman restores NTFS security permissions using Windows PowerShell.

 

Microsoft Scripting Guy Ed Wilson here. Well, it is time once again for BATCHman. This time, it is Episode 12. Take it away, Sean.

When the digital crash
In a blink and a splash
A gleam in the night
To make all wrongs right
Our heroes fly out
And there is no doubt
That evil will fall true
At the sight of the blue

The one and only .NET duo—BATCHman and Cmdlet.

 

It’s a nice morning. BATCHman is playing with a new test module he has been encoding to stop criminals directly with Windows PowerShell. “BATCHpause” is what he is going to call it (patent pending patent pending). Cmdlet has just walked in from his early morning session of watching Electric Company reruns when off in the corner a large blue phone begins flashing.

“It’s the BATCHphone!” BATCHman leaps through the air pommelling poor Cmdlet in the process just to answer it.

Cmdlet staggers to his feet. “Mmmmmphphh…you know…*ugh*…there was an extension right beside you? *urrgghhh*”

“Wouldn’t have been as dramatic!” BATCHman responded as he picked up the phone. “Hello, BATCHman here!”

“BATCHman! I need your help now.” It was the principal at Redmond High School. “Nobody can access their home drives! Little Johnny was trying to ‘help’ the school administrator and somehow locked us out of our personal drives! Help us, BATCHman!”

BATCHman knew only ONE thing could stop a problem like this, and only one duo. Quickly tossing Cmdlet an ice pack and a box of aspirin, he began to announce, “Cmdlet to the…”

Cmdlet staggered to his feet, ice pack in hand. “Yes, to the WinMobile. I know, I know”

“Actually, my good chum, I was going to use the new vehicle, the BATCHmobile. It’s powered by WP7 with Mango! The WinMobile needed an update.”

“Cooool!” burst out Cmdlet, forgetting his concussion. Leaping into the air, he landed on the new BATCHmobile, doing a triple flip as he landed.

“New Gorilla Glass,” muttered BATCHman. “Very smooth,” as they fired off into the city.

Moments later, they arrived upon a scene of absolute chaos.

“I can’t access my homework!”

“My locker combination is locked away from me!”

“I can’t see who I was supposed to have on detention!”

The entire high school had been turned into a zoo! (Or more of a zoo than usual anyway.)

The principal, disheveled and worn, marched down the hall with a scraggly-haired youth held by the collar, a combination lock about his neck, and presented him to BATCHman.

“THIS! THIS caused all of the trouble! Little Johnny Loc…”

“Stand back, puny creature!” it barked out. “I am the Locksmith, the newest member of the V.I.T. program. The world will bow before me! Buah ha ha ha haaaa!”

BATCHman scratched his head. “V.I.T.?”

“Villains In Training!” announced The Locksmith. “Of course, I’m the first, but there will be more!” and he waved his “Official VIT” card written in crayon, feigning his invisible and useless villainy powers to the world.

Cmdlet shook his head. “A waste of good crayon, if you ask me.”

Undeterred, BATCHman pressed on. “So no access to home folders? Please lock our little friend here into detention for the moment while we examine the issue.”

Accessing a server with the home folder structure, our hero checked the folders but found he could access them. “Hmmmm,” a puzzled look crossed his face. He ran the following command on a user’s folder to check the NTFS rights.

$SecurityRights=GET-ACL E:\Home\Student542

Cmdlet looked over. “Whatcha’ doin’?”

BATCHman nearly forgot that his experience in Windows PowerShell was a little more advanced than his sidekick’s. “I am using Get-ACL on a folder to see what the NTFS rights are. I suspect our little fiend has removed the access for the individual users.”

Scratching his eyeball, the sidekick wondered. “How did you guess that so easily?”

“Well,” the Tilley-hatted one continued, “usually if the user cannot access the files but the administrator can, it’s a rights problem. We can verify this by running Get-Member against the $SecurityRights object we just created to see the available methods.”
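That check is nothing more than piping the object to Get-Member, along these lines (the figure that follows shows the result):

$SecurityRights | GET-MEMBER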

Image of available methods

“I see near the very top that there is a property called Access, so if I type this, I can see the various groups able to access this object.”

$SecurityRights.Access

Practicing origami on his arms, Cmdlet stretched and then blurted out. “Hang on one sec. How did you know to use that and without browsing, how did you come to that conclusion?”

“In answer to your first question, I could say I used msdn.com and pulled up details on SYSTEM.SECURITY.ACCESSCONTROL, but the honest answer is I just plugged in values in the BATCHcave to learn what they might contain.”

Reasonable, thought Cmdlet. You can use research but sometimes just “playing about” produces the answer. “And the second?”

“There’s a note here on the desk: ‘I can browse these folders as administrator. !?#$?!%%$ Why isn’t anything working? ARGGGGHHH!!! $$#@@@?’”

“And they bothered to write the asterisk and punctuation marks?”

“Meh,” shrugged BATCHman. “So if you look at the list presented for this folder, STUDENT542, under the HOME$ share, you’ll see it is in fact missing access for the student.”

$SecurityRights.Access

“Now what I need to see is the normal permissions assigned when a home folder is created on this system, to see what’s missing exactly on a normal home drive for NTFS permissions.”

BATCHman created a temporary student in Active Directory with an appropriate 752-character long password. He then applied the settings for a home folder and noted the differences when running Get-ACL again on the new home folder.

$NewUserSecurity=GET-ACL C:\Home\Student160

$NewUserSecurity.Access

“Aha! As I suspected, we merely need to add the rights that a UserID would normally have on its own home drive. Now, I just need to find a script to do this.” BATCHman quickly queries the TechNet Script Center.

Cmdlet paused. “You don’t know everything? But you’re…you’re BATCHman!” His jaw dropped to the ground like a child being told he couldn’t play Kinect for another three hours.

“True, little buddy. I am skilled, but sometimes the ability to say ‘I don’t know’ is an even greater power. In these cases, I do know how to leverage many online resources for Windows PowerShell such as the Hey, Scripting Guy! Blog, online MSDN articles, and community support.”

BATCHman searched and found something on the Script Repository.

“Aha! The power of online resources!” he exclaimed referencing a Windows PowerShell Tip of the Week on the TechNet Script Center.

Reading the instructions, he defined the various variables needed, explaining it all to Cmdlet as he went along.

“So our challenge is we must create a .NET object for access control. It comprises several additional objects. All of them are represented in the view of any of the access lists we just saw. We’ll use one of the access rules that is missing.”

“We start by getting the permissions on the folder in question.”

$Access=GET-ACL E:\Home\Student169

“First, we will specify the type of access for the file or folder.”

$FileSystemRights=[System.Security.AccessControl.FileSystemRights]"FullControl"

“Next, we are setting whether this right is allowed or denied.”

$AccessControlType =[System.Security.AccessControl.AccessControlType]"Allow"

“Next, we’re going to specify whether particular access control will inherit rights and how these rights propagate to child objects in a directory or files.”

$InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"None"
$PropagationFlags = [System.Security.AccessControl.PropagationFlags]"None"

“Finally, we need to specify the user or group the rights will apply to. In our case, this will be a UserID in Active Directory, in the format REDMONDHS\UserId.”

$IdentityReference='REDMONDHS\STUDENT169'

“Sounds like you’ve encountered this once before BATCHman,” Cmdlet noted.

He shrugged. “I goofed one day. Before I was the BATCHman I once worked in a large network. I made a mistake. I learned,” he said grinning sheepishly.

“Next we build the FileSystemAccessRule object.”

$FileSystemAccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule ($IdentityReference, $FileSystemRights, $InheritanceFlags, $PropagationFlags, $AccessControlType)

“Then finally we set the new Access rule in place.”

$Access.AddAccessRule($FileSystemAccessRule)

SET-ACL E:\HOME\Student169 $Access

Cmdlet thought. “So it looks like we’ll need to run a second Set-ACL because we have two permissions I see missing. Can I try setting those variables? I think I see the pattern they have to follow.”

Cmdlet looked at the other permission that would have been missing.

$FileSystemRights=[System.Security.AccessControl.FileSystemRights]"268435456"
$AccessControlType =[System.Security.AccessControl.AccessControlType]"Allow"
$InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
$PropagationFlags = [System.Security.AccessControl.PropagationFlags]"InheritOnly"
$IdentityReference='REDMONDHS\STUDENT169'

“Dead on the nose, Cmdlet! Now, quick! We have the answer. Because each folder name is the same as a UserID, we can leverage that. We’ll use Get-ChildItem, grab the FolderName, and build the username from that, applying the security as we go along.”

$HomeFolders=GET-CHILDITEM E:\HOME

Foreach ( $Folder in $HomeFolders )
{
$Username='REDMONDHS\'+$Folder.Name

$Access=GET-ACL $Folder.FullName

$FileSystemRights=[System.Security.AccessControl.FileSystemRights]"268435456"
$AccessControlType =[System.Security.AccessControl.AccessControlType]"Allow"
$InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
$PropagationFlags = [System.Security.AccessControl.PropagationFlags]"InheritOnly"
$IdentityReference=$Username

$FileSystemAccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule ($IdentityReference, $FileSystemRights, $InheritanceFlags, $PropagationFlags, $AccessControlType)

$Access.AddAccessRule($FileSystemAccessRule)

SET-ACL $Folder.FullName $Access

$FileSystemRights=[System.Security.AccessControl.FileSystemRights]"FullControl"
$AccessControlType =[System.Security.AccessControl.AccessControlType]"Allow"
$InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"None"
$PropagationFlags = [System.Security.AccessControl.PropagationFlags]"None"

$FileSystemAccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule ($IdentityReference, $FileSystemRights, $InheritanceFlags, $PropagationFlags, $AccessControlType)

$Access.AddAccessRule($FileSystemAccessRule)

SET-ACL $Folder.FullName $Access

}

 

They quickly ran the script on the thousands of user folders on the server. Moments later, peace was restored. Flustered and happy, the principal quickly thanked BATCHman and Cmdlet. Too busy with a zoo full of kids, he had to head straight back to work.

Moments later, they were back in the BATCHcave. After a long month of crime fighting, BATCHman sat back to relax. Cmdlet was playing on the computer with Windows PowerShell and a strange new toy.

“Hey, little buddy, that looks cool! What are you playing with?”

“A used teleporter I picked up on eBay. It came with a very early alpha version of a Windows PowerShell module. It’s pretty cool and I…”

BATCHman had only just looked over Cmdlet’s shoulder when he burst out “No. You didn’t—wait…”, seeing the one-liner before they vanished into the night.

GET-PERSON -name 'BATCHman','Cmdlet' | INVOKE-TELEPORT -random

 

BATCHman and Cmdlet have left the building. I want to thank Sean Kearney for writing a tremendous series of fun, entertaining, and educational articles. Tomorrow, I begin a week of random topics that should be quite interesting.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Avoid Blank Lines at End of a Text File with PowerShell


Summary: Microsoft Scripting Guy Ed Wilson teaches how to avoid writing blank lines at the end of a file by using Windows PowerShell.

 

Hey, Scripting Guy! I have a problem with my script. For example, in my script, I build up a variable. Then when I write the variable to a file, I always end up with an extra blank line at the end of the text file. Though many times this is not an issue, it is, on occasion, extremely annoying. These are the times when I am creating a file that will be used to drive another script. For example, if I have a script that collects the names of computers that I need to connect to, and I then write those computer names to a text file so that I can use it with other scripts, I always end up with a blank line at the end of the file. This causes the subsequent scripts to fail when they try to connect to the blank line, and it therefore causes errors. So far, I have been dealing with this problem by doing a bit of error checking, but that is cumbersome, and it would be easier to avoid the problem altogether. You are a wizard when it comes to this sort of stuff, so I know you will be able to solve the problem with something elegant. I remain your biggest fan,

—JN

 

Hello JN,

Microsoft Scripting Guy Ed Wilson here. Wow, you are my biggest fan! Really? I mean, like the president of the Official Scripting Guys Fan Club? Cool! (Note: One advantage of being a member of the Official Scripting Guys Fan Club is that you do not have to wear round mouselike ear hats. Of course, there is nothing inherently wrong with wearing round mouselike ear hats.)

JN, your problem is that you end up with an extra blank line at the bottom of your text file or at the end of your variable. This problem is shown in the following code:

$count = "count"

for ($i = 0; $i -le 4; $i++)

{

 $count += "`r`n" + $i

}

$count > c:\fso\count.txt ; c:\fso\count.txt

When the preceding script runs, the text file contains an additional line at the end of the file. This is shown in the following figure.

Image of text file with additional blank line at end of file

One way to solve this problem is to use .NET Framework classes, instead of using redirection arrows or the Out-File cmdlet. To make the change to using the .NET Framework class, you need to make a minimal change to the previous code. The class that I need to use is found in the System.IO namespace, and it is the File class. Therefore, the complete class name is System.IO.File. This class contains a static method called WriteAllText. When writing code, it is permissible to leave off the System portion of the namespace name, because System is the root .NET Framework namespace, and it is assumed that IO lives under it. When I write a script, I will generally use the complete namespace and class name; when working interactively in the Windows PowerShell console, I will generally use the shorter version of the name and leave off the System portion. The modified script that will not produce an extra blank line at the end of the file is shown here:

$count = "count"

for ($i = 0; $i -le 4; $i++)

{

 $count += "`r`n" + $i

}

 

[system.io.file]::WriteAllText("C:\fso\io.txt", $count)

 

The text file created by the preceding code is shown in the following figure.

Image of text file with no extra blank line

The System.IO.File .NET Framework class documentation is found on MSDN, but I did not need to consult it while writing the above code, not because I have everything memorized, but because I can easily use the Get-Member cmdlet to retrieve information about the WriteAllText static method. There are two ways to use the WriteAllText method. The first way is the way I used in my script: provide the path for the output file, and then provide the text. Here is the line in my script where I did just that:

[system.io.file]::WriteAllText("C:\fso\io.txt", $count)

The double colon indicates that I am calling a static method (one that is always available and does not require an instance of the class upon which to work). The first parameter is the path to the file I want to create. The c:\fso\io.txt path refers to a folder (c:\fso) on my local computer. The content from the script is stored in the $count variable, and this is the text that I write to the file.

The second way to use the WriteAllText static method from the file class in the System.IO namespace is to provide the path, the contents, and the encoding to the method call. The following figure shows the output from the Get-Member cmdlet.
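That figure presumably comes from pointing Get-Member at the class itself with the Static switch, along these lines:

[system.io.file] | Get-Member -Static -Name WriteAllText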

Image of output from Get-Member cmdlet

To supply the encoding value to the WriteAllText static method, the third position needs an instance of the System.Text.Encoding enumeration value. The encoding enumerations are all contained within the System.Text.Encoding class as static properties. It is easy to retrieve them by using the Get-Member cmdlet as shown here:

PS C:\Users\edwils> [system.text.encoding] | get-member -Static -MemberType property

 

   TypeName: System.Text.Encoding

 

Name             MemberType Definition
----             ---------- ----------
ASCII            Property   static System.Text.Encoding ASCII {get;}
BigEndianUnicode Property   static System.Text.Encoding BigEndianUnicode {get;}
Default          Property   static System.Text.Encoding Default {get;}
Unicode          Property   static System.Text.Encoding Unicode {get;}
UTF32            Property   static System.Text.Encoding UTF32 {get;}
UTF7             Property   static System.Text.Encoding UTF7 {get;}
UTF8             Property   static System.Text.Encoding UTF8 {get;}

 

Armed with this information, I can revise the script so that I output an ASCII-encoded file. This is shown here:

$count = "count"

for ($i = 0; $i -le 4; $i++)

{

 $count += "`r`n" + $i

}

 

[system.io.file]::WriteAllText("C:\fso\ioascii.txt", $count,[text.encoding]::ascii)

The only change from the previous version is the last line, which now supplies the third parameter.
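If you want to double-check the result, one quick test (a sketch that reuses the ioascii.txt file created above) is to read the raw text back and confirm that it ends with the final digit rather than a newline:

PS C:\> [system.io.file]::ReadAllText("C:\fso\ioascii.txt").EndsWith("4")

True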

 

Well, JN, that is about all there is to writing to a text file and ensuring that the output file does not contain any additional spaces or blank lines at the end. Join me tomorrow for more exciting Windows PowerShell goodness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Solve Problems with External Command Lines in PowerShell


Summary: Microsoft Scripting Guy Ed Wilson discusses problems creating external command arguments using Windows PowerShell.

 

Hey, Scripting Guy! I am using an external program that takes a -o command-line parameter followed by a path location. The program permits no space between the -o parameter and the supplied path. For example, Windows PowerShell sees an unclosed quote when I type the following command:

 .\7za.exe x -o"C:\TARGET\P F" C:\TEMP\ProgramFiles.zip

I need to escape the second (but not the first) double quote for Windows PowerShell to parse the line correctly. Here is the revised command; the escape character is the backtick before the second double quote:

 .\7za.exe x -o"C:\TARGET\P F`" C:\TEMP\ProgramFiles.zip

This command looks strange, and I’m confused as to why this is. 

—AM

 

Hello AM,

Microsoft Scripting Guy Ed Wilson here. One of the things I do not enjoy about Windows PowerShell is the problem inherent in attempting to create a workable command line for complex external utilities. On occasion, I have spent several hours attempting to derive a workable command line for a single program. It is not just a Windows PowerShell problem; I have spent several hours attempting to figure out acceptable syntax for command-line utilities in the old command-prompt days as well. A quick survey of some of my friends reveals it is not just me.

Windows PowerShell exacerbates the problem of command-line argument parsing because Windows PowerShell has its own syntax parser that must run before it passes control over to the external program.

One thing to keep in mind is that external programs, such as 7-Zip, can define their own rules for parsing arguments, including their own quoting rules and escape characters. (In fact, 7-Zip defines its own rules for interpreting wildcard characters as well.) To make matters worse, at times we expect Windows PowerShell to execute commands, parse Windows PowerShell paths, update variables, and do a host of other things before actually passing the resulting hodgepodge to the external utility for parsing.
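Worth noting as an aside: Windows PowerShell 3.0 and later add a stop-parsing token, --%, which was not available when this article was written. Everything after the token is handed to the external program verbatim (only cmd-style %variable% expansion is performed), which sidesteps most of these quoting battles. For example:

.\7za.exe --% x -o"C:\TARGET\P F" C:\TEMP\ProgramFiles.zip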

Not all is lost. A couple of things can help make sense on the Windows PowerShell side. One of my favorite tools comes with the PowerShell Community Extension Project (PSCX). I have written several times about the PSCX, and recently, I helped the Scripting Wife add the command to import the PSCX into her Windows PowerShell profile. One tool from that project that is useful to me when dealing with complex command lines is the EchoArgs.exe program. When the PSCX is installed, the installer copies the EchoArgs.exe utility to the installation directory. The tool is extremely easy to use. A quick example illustrates how to use EchoArgs.exe to see how Windows PowerShell will parse a command line.

Suppose I want to use the nbtstat.exe program, but I am not sure how Windows PowerShell will parse the command line and determine the arguments and the values supplied to them. The command line is shown here:

nbtstat -S 2

To use the EchoArgs.exe program, I simply copy the arguments from the command line and paste them after the EchoArgs program. This is shown here, along with the associated output:

PS C:\> EchoArgs.exe -S 2

Arg 0 is <-S>

Arg 1 is <2>

In the preceding example, it is clear that Windows PowerShell sees the first argument as -S and the second argument as 2.

If I have a situation where a value needs to be calculated and I am not certain how Windows PowerShell will handle the argument, I can use EchoArgs to demystify the command line as well. This is shown here, where I assign a value of 5 to the $a variable and then perform a calculation to compute the value I wish to pass to the -S argument:

PS C:\> $a = 5

PS C:\> EchoArgs.exe -S ($a-3)

Arg 0 is <-S>

Arg 1 is <2>

Based upon the preceding, I feel confident that the following command would work (and it does):

PS C:\> $a = 5

PS C:\> nbtstat -S ($a-3)

AM, the figure that follows illustrates using the EchoArgs utility to parse the arguments from the command line you supplied.

Image of using EchoArgs to parse arguments from command line

As you can see, the EchoArgs utility is pretty slick and easy to use. Another way to see how Windows PowerShell will parse a command line is to ask it. That is right—by using the Windows PowerShell tokenizer (I wrote several articles about using the tokenizer), it is possible to see exactly how Windows PowerShell will interpret a command line.

To begin with, I will parse the same easy nbtstat command I used with the EchoArgs utility. This will allow for a comparison between the two methods. The following command uses the tokenize static method from the psparser .NET Framework class. The first parameter is the command to parse. The [ref]$null is a requirement of the Tokenize command.

[management.automation.psparser]::Tokenize('nbtstat -S 3', [ref]$null)

The command and associated output are shown in the following figure.

Image of command and associated output

The output from the tokenize static method is rather extensive. The good thing is that it breaks everything into its role within the command—and not simply arg 0 and arg 1 as returned from the EchoArgs utility. The downside is that it is a bit more typing.

But wait! This is Windows PowerShell, and a quick function can save lots of typing. I created the Get-Args function to make it easy to use the tokenize static method. The complete Get-Args function is shown here:

Function Get-Args

{

 Param(

  [string]$command

 )

 [management.automation.psparser]::Tokenize($command,[ref]$null)

} #end function Get-Args

To use the Get-Args function, I run the function to load it into memory, and then I supply the command line to the function. This technique is shown in the following figure.

Image of running function and supplying command line to function

When I call the function, I pass the entire command inside of literal quotation marks. The output is the same as the output received earlier when I used the tokenize method directly. The command and associated output are shown in the following figure.

Image of command and associated output

One cool thing is that I can parse your original command line, the one that was interpreted as incomplete, by using this method. As shown in the following figure, the Get-Args function easily parses the command. The other thing I like about this method is that I do not have to separate the arguments from the command. I simply copy the entire line.

Image of Get-Args function easily parsing command
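The call in that figure is essentially the original command wrapped in single quotation marks, which keeps the embedded double quotes away from the Windows PowerShell parser; something along these lines:

Get-Args '.\7za.exe x -o"C:\TARGET\P F" C:\TEMP\ProgramFiles.zip'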

Well, AM, I hope this discussion helps you solve the mystery of the funky command line. I invite you to join me tomorrow for more cool Windows PowerShell stuff.

 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Two Simple PowerShell Methods to Remove the Last Letter of a String


Summary: Learn two simple Windows PowerShell methods to remove the last letter of a string.

 

Hey, Scripting Guy! I have what should be a simple question. I need to be able to remove only the last character from a string. For example, if I have the string the scripts, I want the string to be the script instead. If I have the number 12345, I want the number to be 1234. I have tried everything, including split, trim, and other commands, but nothing seems to work. I thought that substring would work, but it requires me to know the length of the string, and that is something I will not always know. Can you help me?

—DM

 

Hello DM,

Microsoft Scripting Guy Ed Wilson here. In the VBScript days, I used to hate string manipulation. It was not that the power of VBScript’s string manipulation language was so weak; it was simply that I did not like to monkey around with strings. Therefore, one day, I spent an entire day doing string manipulation. After that, I was a lot better, and I did not hate it so much. The reason I did not like string manipulation was that I was not that good at it. In Windows PowerShell, string manipulation is still string manipulation. The tools and the commands are similar to what I had in the VBScript days, and indeed to what is available in other languages.

Nearly everything has a concept of split, trim, substring, and other similar-sounding commands. The commands may not be called exactly the same thing, but they will offer similar functionality. In Windows PowerShell, we are actually using the methods from the string class that is found in the System .NET Framework namespace. This is good news, because the string class is very robust and complete. If you long for the actual commands used back in the VBScript days, those are available via the Microsoft.VisualBasic namespace.

DM, you mentioned the need to know the length of a string. As a matter of a fact, all strings have a length property associated with them. To find the properties of a string, I can use the Get-Member cmdlet:

"a string" | get-member -membertype property

The command and associated output are shown in the following figure.

Image of command and associated output

Because I have found a length property on the string class, I can use it to obtain the length of a string. If I store a string in a variable, I can access the length property directly, as shown here:

PS C:\> $string = "the scripts"

PS C:\> $string.Length

11

One thing to keep in mind when working with the length property is that it reports the actual length of the string. That is, it is not zero based. The following code illustrates this:

PS C:\> "123456789".length

9

Though it is common to store a string in a variable and use dotted notation to retrieve the length property, the length property is also directly accessible from a string. Tab expansion does not pick it up, but it is available nonetheless. This technique is shown here:

PS C:\> "The scripts".length

11

The preceding commands and associated output are shown in the following figure.

Image of preceding commands and associated output

DM, there are two ways to use the substring method from the string class. The first way to use the substring method defines the start location, and the method returns the remainder of the string. The second way is to specify the start location and to tell the substring method how many letters to return. I found this information by using the Get-Member cmdlet, and choosing the definition of the substring method. The command and returned information are shown here:

PS C:\> $string | gm substring | select definition

 

Definition

----------

string Substring(int startIndex), string Substring(int startIndex, int length)

 

First, I want to show what happens when I specify a value for only the startindex parameter of the substring method. In the first example, I tell substring to begin at position 1 (the second character, because the index is zero-based) and return the remainder of the string. The command and output are shown here:

PS C:\> $string = "the scripts"

PS C:\> $string.Substring(1)

he scripts

 

In the second command, I tell the substring method to begin at position 10 and to return the remainder of the string. Because the string is 11 characters long, the command only returns the letter s. The command and associated output are shown here:

PS C:\> $string.Substring(10)

s

PS C:\>

Because the length property always exists on a string, I can use it with the substring method. This is useful with the second position of the command, because I do not always know how long the string is. To begin with the first position of the string, I need to specify location 0 (remember when I used location 1, the first character returned was the letter h). If I simply use the length property, I will return the entire string. This is shown here:

PS C:\> $string = "the scripts"

PS C:\> $string.Substring(0,$string.Length)

DM, you need to return all of a string but the last letter. To do this, I simply subtract 1 from the length of the string, as shown here:

PS C:\> $string = "the scripts"

PS C:\> $string.Substring(0,$string.Length-1)

the script

This is simply displaying the string minus the last letter of the string. To actually remove the last letter from the string, it is necessary to write the results back to the $string variable. This technique is shown here:

PS C:\> $string = "the scripts"

PS C:\> $string = $string.Substring(0,$string.Length-1)

PS C:\> $string

the script

PS C:\>

Another way to accomplish this, which might actually be a bit easier to do, is to use the replace operator and not supply a replacement value. The replace operator will accept regular expressions. A simple regular expression of “.$” will match the last character in a string. The dollar sign means match from the end of the string, and a period (or dot) means any single character. Therefore, the following command will replace the last character with nothing, and effectively remove the last letter of the string:

PS C:\> $string = "the scripts"

PS C:\> $string -replace ".$"

the script 

Once again, if I want to actually remove one character from the end of a string, I need to write the returned string back to the $string variable, as shown here:

PS C:\> $string = "the scripts"

PS C:\> $string = $string -replace ".$"

PS C:\> $string

the script 

One of the cool things about Windows PowerShell is there are multiple ways of doing the same thing. I have illustrated two methods of removing the last character of a string. For comparison’s sake, both techniques and associated output are shown in the following figure.

Image of both techniques and associated output
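For what it is worth, the String class offers at least one more one-liner that does the same job (a third option, beyond the two shown above): the Remove method with a single argument drops every character from that index to the end of the string.

PS C:\> $string = "the scripts"

PS C:\> $string.Remove($string.Length - 1)

the script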

 

DM, that is all there is to removing the last letter of a string. I invite you to join me tomorrow for more exciting Windows PowerShell goodness.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Format Multilevel Arrays in PowerShell


Summary: Microsoft Scripting Guy Ed Wilson talks about formatting multilevel arrays in Windows PowerShell.

 

Hey, Scripting Guy! I am having a problem with arrays. I have two-level arrays and they work perfectly when I have the arrays defined on a single line. But when I try to format my script so that it is easier to read, they seem to get messed up. Can you help me?

—DF

 

Hello DF,

Microsoft Scripting Guy Ed Wilson here. Knowing how to work with arrays is fundamental to nearly all programming languages—at least the ones I know about. In Windows PowerShell, we have removed much of the mystery surrounding arrays and basically hidden the complexity. It is common to use @() to create an array, as shown here:

$b = @(1,2,3)

But using @() is not required. For example, I can easily create an array by assigning more than one item to a variable. This technique is shown here:

$a = 1,2,3

To display the contents of a variable that contains an array, call that variable. To access a specific element in an array, use square brackets to reference the element. These techniques are shown here:

PS C:\> $a

1

2

3

PS C:\> $a[0]

1

PS C:\> $a[2]

3

If I want to add an extra element to an array, I use the += operator (think of it as taking the variable that contains the array, adding the new item, and assigning the result back to that same variable). The following code illustrates this technique:

PS C:\> $a += 6

PS C:\> $a

1

2

3

6

All of these techniques are illustrated in the following figure.

Image of techniques illustrated

I can easily create an array that contains an additional array inside one of the elements. For instance, if I want to store the array that is contained in the variable $a along with another array in a variable $b, I can use the following technique:

$a = 1,2,3

$b = $a,@(11,12,13)

If I look at what is stored in $b, it is not at first obvious that it comprises two different arrays. This is shown here:

PS C:\> $b

1

2

3

11

12

13

However, I can use square bracket notation and view the array stored in element 1 of the $b variable:

PS C:\> $b[1]

11

12

13

I can also access each element of the array:

PS C:\> $b[1][0]

11

Storing arrays inside arrays in Windows PowerShell is both powerful and very easy to do. When writing a script, however, formatting this can become a problem. The following is one long command to store multiple arrays in the various elements of an array.

$a = @(0,1,2,3),@(10,11,12,13),@(20,21,22,23),@(30,31,32,33),@(40,41,42,43),@(50,51,52,53)

As shown in the following figure, the multiple dimensions of the array are accessed via square brackets.

Image of multiple dimensions of array accessed via square brackets

DF, in the code you sent to me, you attempt to create another array around the various elements of arrays you have. But when the code runs, it does not work. Here is the code you sent:

$a = @(@(0,1,2,3)

     ,@(10,11,12,13)

     ,@(20,21,22,23)

     ,@(30,31,32,33)

     ,@(40,41,42,43)

     ,@(50,51,52,53))

When the code runs and I attempt to index into the various elements of the array, the results are munged.

Image of results munged when attempting to index into array elements 

It is not necessary to surround the array with another @(). The change is rather simple. Just move the commas to the end of each line, and it will work. This is shown in the revised code here:

$a = @(0,1,2,3),

     @(10,11,12,13),

     @(20,21,22,23),

     @(30,31,32,33),

     @(40,41,42,43),

     @(50,51,52,53)

When I run the code, I am able to index into the arrays as shown in the following figure.

Image of successfully indexing into arrays

When you need to store multiple arrays in a single array, one pretty good approach is to store those arrays in variables, and then use the variables to build up the new array. This approach is shown here:

$b = 0,1,2,3

$c = 10,11,12,13

$d = 20,21,22,23

$e = 30,31,32,33

$f = 40,41,42,43

$g = 50,51,52,53

$a =  $b,$c,$d,$e,$f,$g

When the script runs, the output shown in the following figure is displayed.

Image of output displayed when script is run
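Indexing into the array built from those variables works exactly the same way as before. For example, based on the assignments above, element 2 of $a holds the $d array, so the second element of that inner array is 21:

PS C:\> $a[2][1]

21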

 

Well, DF, that is about all there is to working with and formatting multilevel arrays.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Use PowerShell to Work with CSV Formatted Text


Summary: See how to use Windows PowerShell to create CSV files from formatted and unformatted text.

 

Hey, Scripting Guy! I have begun to play around with Windows PowerShell, and it is absolutely the most confusing thing Microsoft has ever created. Simple things are easy. I can use Get-Process and Get-Service with no problem, but the moment I begin to think I can use this tool, I get kicked in the teeth. A case in point is the Export-CSV cmdlet. When I first saw this, I thought, “Well, now, this is cool!” But I have yet to see how cool it really is. The thing is nearly impossible to use. I just don’t get it. All I want to do is save data in a CSV file, so I can open it up in Microsoft Excel. Is the cmdlet broken?

—BB

 

Hello BB,

Microsoft Scripting Guy Ed Wilson here. I can certainly sympathize with you. I get this question quite a bit, unfortunately. Part of the problem is that the cmdlet does not really do what you think it will. For example, if I have a string with a CSV listing, and I write it to a CSV file by using the Export-CSV cmdlet, I might use code that looks like the following:

$Outputstring = "dog","Cat","Mouse"

$OutputString | Export-Csv C:\fso\csvTest.csv

However, when I look at the csvtest.csv file, the results are disappointing. The file created by the preceding code is shown in the following figure.

Image of file created by preceding code

The first time I saw this, I could not believe my eyes. I actually deleted the file and ran the command a second time to make sure of the results. To my chagrin, the second file appeared as the first. Neither was a CSV file.

There are two Windows PowerShell cmdlets that work with comma-separated values: ConvertTo-CSV and Export-CSV. The two cmdlets are basically the same; the difference is that Export-CSV will save to a text file, and ConvertTo-CSV does not. The cmdlets are useful for working with deserialized objects. For example, if I want to be able to analyze process information at a later date, I can use the Get-Process cmdlet to store the objects in a text file. I can then use Import-CSV to reconstitute the process objects. This is shown here:

PS C:\> Get-Process winword | Export-Csv c:\fso\procWord.csv

PS C:\> $a = Import-Csv C:\fso\procWord.csv

PS C:\> $a.Name

WINWORD

The complete text of the procWord.csv file is shown in the following figure.

Image of complete text of procWord.csv file

As shown in the preceding figure, the CSV file created by Export-CSV consists of three parts. The first is the type of object stored in the file. The second is the column headings, and the third contains the property values. If more than one object were stored in the file, the remaining lines would contain additional property values. If a property did not exist on the object, the file would be padded by commas. When the object is reconstituted via the Import-CSV cmdlet, all the properties stored in the file—but none of the methods—return to the object. A reconstituted object is devoid of any methods.
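A quick way to see this for yourself is to run Get-Member against the imported object; all of the saved properties come back as NoteProperty entries, and none of the original Process methods survive the round trip (this continues the procWord.csv example from above):

PS C:\> $a | Get-Member -MemberType NoteProperty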

If I want to save process information as a CSV file because I am planning to open the file in Excel, I use the NoTypeInformation switched parameter of the Export-CSV cmdlet. This technique is shown here (GPS is an alias for the Get-Process cmdlet):

GPS winword,Excel,Outlook | Export-Csv c:\fso\procoff.csv -NoTypeInformation

When I open the CSV file in Microsoft Excel, each object appears on its own line. The properties are in the first line as column headers. This is shown in the following figure.

Image of Excel file with each object on its own line

BB, as shown so far, the Export-CSV cmdlet is great at taking objects and storing them in an offline format so that they can be reconstituted for later analysis and comparison. In addition, I can use the Export-CSV cmdlet to save objects and then view the properties in Microsoft Excel. If I do not want all of the properties, I can create a custom object by piping to the Select-Object cmdlet first. In the following command, I use gps (the alias for Get-Process) to return information about each process on the machine. I then choose only three properties from the objects: id, processName, and CPU. This information is exported into a CSV file. This technique is shown here:

gps | Select-Object id, processName, CPU | Export-Csv c:\fso\co.csv -NoTypeInformation

The saved data is shown in the following figure when viewed in Microsoft Excel.

Image of saved data viewed in Excel

If you want to pipe your array of strings to the Export-CSV cmdlet, you will need to first convert them into objects. This is because you need a custom object with multiple properties, instead of a series of single property strings. This is the problem you were wrestling with earlier—you were not providing the Export-CSV cmdlet with a nice object upon which to work.

Export-CSV treats each object as a new row of data. The columns used with the CSV file are determined by the properties of the object. To work with Export-CSV, it is necessary to create an object for each row of data to be stored. This technique is shown here:

$Outputstring = "dog","Cat","Mouse"

$psObject = $null

$psObject = New-Object psobject

foreach($o in $outputString)

{

 Add-Member -InputObject $psobject -MemberType noteproperty `

    -Name $o -Value $o

}

$psObject | Export-Csv c:\fso\psobject.csv -NoTypeInformation

The resulting CSV file is shown in the following figure.

Image of resulting CSV file

Most of the time, if I need to create a CSV file from unformatted text, I tend to use manual string techniques, as shown here:

$Outputstring = "dog","Cat","Mouse"

$Outputstring -join "," >> c:\fso\joinCSV.csv

The output from this approach is shown in the following figure.

Image of output from this approach

 

BB, those are several ways of working with CSV data and the Export-CSV cmdlet.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Send Email from Exchange Online by Using PowerShell


Summary: Guest blogger Mike Pfeiffer shows how to send email messages using Windows PowerShell and Exchange online.

 

Microsoft Scripting Guy Ed Wilson here. Guest Blogger Mike Pfeiffer recently published a book called Microsoft Exchange 2010 PowerShell Cookbook. Mike has been in the IT field for over 13 years, spending most of his time as an enterprise consultant focused on Active Directory and Exchange implementation and migration projects. He is a Microsoft Certified Master on Exchange 2010, and a Microsoft Exchange MVP. You can find his writings online at mikepfeiffer.net, where he blogs regularly about Exchange Server and Windows PowerShell-related topics. 

 

Sending the output of a script in an email message is simple with Windows PowerShell 2.0, thanks to the Send-MailMessage cmdlet. I have seen some great solutions people have created with this cmdlet to generate and deliver automated reports, notifications, and monitoring alerts. Even before Windows PowerShell 2.0, it was easy to use the classes in the System.Net.Mail namespace to accomplish the same goal. In either case, as long as you have access to an SMTP server, you can easily automate the transmission of email messages with Windows PowerShell.

Messaging is getting a little more complicated these days, though. Now we have hosted solutions such as Exchange Online offered through Office 365. Imagine that your organization has decided to go with a fully hosted deployment of Exchange Online, meaning that all of your organization’s mailboxes are hosted exclusively in the cloud. What do you do with your scripts that need to send email messages?

Out of the box, SMTP Relay with Exchange Online requires a Transport Layer Security (TLS) connection, and you must connect on SMTP port 587. The good news is that there is an easier way to send email messages via Exchange Online using Windows PowerShell. The answer is the Exchange Web Services (EWS) Managed API, which is a fully object-oriented .NET Framework wrapper for the EWS XML protocol.

Here are a few of the benefits to using EWS to send email messages:

  • Messages are sent through the web services endpoint on port 443, which is firewall friendly.
  • The API has a built-in autodiscover client that will determine the web service endpoint for you automatically. You don’t need to provide a server name.
  • You don’t need to worry about connector settings, mail relay, and TLS.
  • Messages sent through the API can be saved in the sender’s Sent Items folder, which can provide tracking and reporting information.


In order to use the EWS Managed API and the code provided in this article, you’ll need a machine running Windows Server 2008 R2, Windows Server 2008, Windows 7, or Windows Vista. You’ll also need Windows PowerShell 2.0 with .NET Framework 3.5 installed. As long as these requirements are met, head over to the Microsoft Download Center and grab EWS Managed API 1.1. You will want to download the appropriate MSI package for your machine, either x86 or x64, and run through the installation. The installer simply extracts the EWS assembly to a folder on your hard drive, which by default will be under C:\Program Files\Microsoft\Exchange\Web Services\1.1.

After this is complete, we are ready to write some code. Before we can start working with the classes in the EWS Managed API, the assembly must be loaded so that the .NET Framework types are available:

Add-Type -Path 'C:\Program Files\Microsoft\Exchange\Web Services\1.1\Microsoft.Exchange.WebServices.dll'

Next, we need to create an instance of the ExchangeService class that can be used to send SOAP messages to an Exchange server using the API. This class basically defines the connection information for the web service. It provides several properties and methods, some of which can be used to specify our credentials and set the web services endpoint URL using the built-in autodiscover client:

$service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService -ArgumentList Exchange2010_SP1

Notice that we are passing the Exchange version to the ExchangeService class constructor. This is actually optional in this case because the 1.1 version of the API will automatically set this to Exchange2010_SP1, which is the same version running in the cloud. However, it's good to know if you ever want to write code that targets an Exchange server running on premise. The ExchangeVersion Enumeration contains all of the supported versions that can be used.

Since our goal is to send a message from Exchange Online, we will need to authenticate to the web service. This can be set on the existing $service object's Credentials property:

$service.Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials -ArgumentList 'user@yourdomain.onmicrosoft.com', 'P@ssw0rd'

The credentials used here should be the mailbox from which you want to send the email message. This will be a valid user name and password for an existing Exchange Online user account.

At this point, we can use the AutoDiscoverUrl method of the $service object to automatically set the EWS endpoint:

$service.AutodiscoverUrl('user@yourdomain.onmicrosoft.com', {$true})

The first argument provided for the AutoDiscoverUrl method is the email address for your Exchange Online user account. The API will take the domain portion of the address and query DNS for an autodiscover record. Once it is resolved, the API will hit the autodiscover endpoint and determine the Exchange web services URL in the cloud. The second argument passed to the method is a scriptblock that returns $true. This allows the server to redirect the API to the appropriate Exchange Online server during the autodiscover process.

If you take a look at the $service object after invoking the AutodiscoverUrl method, you’ll notice that the EWS URL is automatically set to an Exchange Online server running in the cloud.
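For example, after AutodiscoverUrl completes, you can inspect the Url property of the service object yourself; the exact value will vary by tenant:

$service.Url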

Image of EWS URL automatically set to Exchange Online server running in cloud

Now that we have an authenticated connection established to Exchange Online, we can create an instance of the EmailMessage class and send a message:

$message = New-Object Microsoft.Exchange.WebServices.Data.EmailMessage -ArgumentList $service
$message.Subject = 'This is a test'
$message.Body = 'This message is being sent through EWS with PowerShell'
$message.ToRecipients.Add('sysadmin@contoso.com')
$message.SendAndSaveCopy()

Setting the Subject and Body properties of an EmailMessage object is pretty straightforward. Adding recipients requires that we use the Add method of the ToRecipients property. When adding multiple recipients, you can call this method for each one. There are also CcRecipients and BccRecipients properties.
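For example, here is a minimal sketch that adds a couple of extra recipients to the $message object created above (the addresses are made up for illustration):

'user1@contoso.com','user2@contoso.com' | ForEach-Object { $null = $message.ToRecipients.Add($_) }
$null = $message.CcRecipients.Add('manager@contoso.com')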

When it comes to actually sending the message, there are two methods that can be used. The SendAndSaveCopy method sends the message and retains a copy in the sender's Sent Items folder. Alternatively, the Send method will send the message without saving a copy. Because we passed the $service object to the EmailMessage class constructor when creating the message, our Exchange Online credentials and web services endpoint located through autodiscover will be used to transmit the message.

While the code we have looked at so far is useful, we can automate things even further. Let us wrap the code up into a reusable function to make this a little easier.

function Send-O365MailMessage {
    [CmdletBinding()]
    param(
      [Parameter(Position=1, Mandatory=$true)]
      [String[]]
      $To,

      [Parameter(Position=2, Mandatory=$false)]
      [String[]]
      $CcRecipients,

      [Parameter(Position=3, Mandatory=$false)]
      [String[]]
      $BccRecipients,

      [Parameter(Position=4, Mandatory=$true)]
      [String]
      $Subject,

      [Parameter(Position=5, Mandatory=$true)]
      [String]
      $Body,

      [Parameter(Position=6, Mandatory=$false)]
      [Switch]
      $BodyAsHtml,

      [Parameter(Position=7, Mandatory=$true)]
      [System.Management.Automation.PSCredential]
      $Credential
      )

    begin {
      #Load the EWS Managed API assembly
      Add-Type -Path 'C:\Program Files\Microsoft\Exchange\Web Services\1.1\Microsoft.Exchange.WebServices.dll'
    }

    process {
      #Instantiate the EWS service object
      $service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService -ArgumentList Exchange2010_SP1

      #Set the credentials for Exchange Online
      $service.Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials -ArgumentList `
      $Credential.UserName, $Credential.GetNetworkCredential().Password

      #Determine the EWS endpoint using autodiscover
      $service.AutodiscoverUrl($Credential.UserName, {$true})

      #Create the email message and set the Subject and Body
      $message = New-Object Microsoft.Exchange.WebServices.Data.EmailMessage -ArgumentList $service
      $message.Subject = $Subject
      $message.Body = $Body

      #If the -BodyAsHtml parameter is not used, send the message as plain text
      if(!$BodyAsHtml) {
        $message.Body.BodyType = 'Text'
      }

      #Add each specified recipient
      $To | ForEach-Object{
        $null = $message.ToRecipients.Add($_)
      }

      #Add each specified carbon copy recipient
      if($CcRecipients) {
        $CcRecipients | ForEach-Object{
          $null = $message.CcRecipients.Add($_)
        }
      }

      #Add each specified blind copy recipient
      if($BccRecipients) {
        $BccRecipients | ForEach-Object{
          $null = $message.BccRecipients.Add($_)
        }
      }

      #Send the message and save a copy in the Sent Items folder
      $message.SendAndSaveCopy()
    }
}

 

This function provides several parameters used to set the subject, body, and recipients for the message. The To, CcRecipients, and BccRecipients parameters each accept multiple addresses, so you can specify one or more addresses for each recipient type when sending a message. I’ve also included a BodyAsHtml switch parameter, which can be used to send the message in HTML format, or in plain text when the parameter is not used.

After you have added this function to your shell session, you can call it just like a cmdlet and send a message through the web service:

$creds = Get-Credential

Send-O365MailMessage -To user@domain.com -Subject 'test' -Body 'this is a test' -BodyAsHtml -Credential $creds

This code first uses the Get-Credential cmdlet to store your Exchange Online credentials. We then call the Send-O365MailMessage function to send an email message through Exchange Online.

As a side note, EWS works the same on-premises as it does in the cloud. If you have an existing Exchange 2010 SP1 deployment onsite, you can use the code samples in this article to send messages through the web service on your on-premises servers.

 

Thank you, Mike, for sharing your knowledge and experience with us today.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 


Configure PowerShell Cmdlet Default Values by Using Splatting


Summary: Learn how to use the Windows PowerShell splatting feature to avoid typing repetitive cmdlet parameters.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! I am wondering if you can help me. There are some cmdlets that have a lot of parameters. I need to fill these parameters out in order to cause the cmdlet to work properly, but once I have figured out the parameters, I would like to keep those parameters filled out, and not have to reenter them every time I want to use the cmdlet. For example, I like to configure the ping to be a certain size and generate a specific number of pings, but I hate to reenter all that data every time I want to ping a different machine. Is there something that can be done? I have thought about creating a custom function on the fly, but that seems like a lot of extra work.

—CC

 

Hey, Scripting Guy! AnswerHello CC,

Microsoft Scripting Guy, Ed Wilson, is here. Yeah, I know that it is the weekend, but it is the Scripting Wife’s birthday and we are heading out today. I thought I would answer a question really quick before we leave. CC, I know what you are talking about, and I like to use the up arrow and edit the previous command when doing repetitive operations. It works out really great, if I can remember to put the thing I need to change at the end. This is shown here:

Test-Connection -Count 1 -BufferSize 15 -Delay 1 -ComputerName localhost

Test-Connection -Count 1 -BufferSize 15 -Delay 1 -ComputerName loopback

The use of these commands and the associated results appear in the following figure.

Image of commands and associated results

This works okay, but I often forget to type the commands in exactly the right order, so I have to waste time editing the command line. Because of my font size, this also means the command wraps the line, and I have to scroll back and forth as I attempt to change the computer name. It is not a great solution.

Certainly, I can create a custom function, and make my desired parameters default values, but that is a bit of extra work. Unless I store the custom function in my profile or startup module, I will have to search for my function. Generally, I would just go ahead and fight it out at the command line before I could search to find my custom function.

A better approach is to use splatting. In splatting, I create a hash table of parameters and values, and then I supply that hash table to the cmdlet. The Windows PowerShell cmdlet is smart enough to look inside the hash table for the values to its parameters.

$pingConfig = @{
    "count" = 1
    "bufferSize" = 15
    "delay" = 1 }
Test-Connection localhost @pingConfig

When typing at the Windows PowerShell console prompt, I can put the entire hash table on a single line by separating the key/value pairs with a semicolon, as shown here:

$pingConfig = @{"count" = 1;"bufferSize" = 15;"delay" = 1}

The command and associated output are shown in the following figure.

Image of command and associated output

The nice thing about using splatting is that the order is not fixed. It is possible to put the hash table first, which makes it easy to change the target computer at the end of the command. This is shown in the following figure.

Image of changing target computer 
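Here is a minimal sketch of that pattern, using the same parameter values as before; only the computer name changes from one command to the next:

$pingConfig = @{"count" = 1;"bufferSize" = 15;"delay" = 1}
Test-Connection @pingConfig -ComputerName localhost
Test-Connection @pingConfig -ComputerName loopback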

CC, that is it. Using splatting, it is easy to configure your Windows PowerShell cmdlets to behave the way you want them to. Keep in mind that the keys for the hash table need to match the parameter names. The Get-Help cmdlet is invaluable for showing the parameters and detailing their use.

 

The Scripting Wife is ready to go, so I must run. Talk to you tomorrow when I begin a brand new week on the TechNet Script Center.  

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Use PowerShell and WMI to Get Processor Information


Summary: Learn how to get the number of processor cores via WMI and Windows PowerShell.

Hey, Scripting Guy! QuestionHey, Scripting Guy! I need to perform an audit of computers on our network. Specifically, I am tasked with obtaining CPU information. I need the processor speed, number of cores, and number of logical processors. I feel like I should be able to use Windows PowerShell to do this, but I am not certain. Can you help?

—RS

 

Hey, Scripting Guy! AnswerHello RS,

Microsoft Scripting Guy Ed Wilson here. This has been a rather crazy time. This week I am in Seattle, Washington, talking to customers about Windows PowerShell. Later in the week, I will be talking to Windows PowerShell writers on campus at our Microsoft Office in Redmond. I fly back to Charlotte, and then I head north to Canada for a couple of weeks. I really enjoy the opportunity to meet with people who are using Windows PowerShell to solve real world problems. It is cool.

RS, to find out information about the CPU, I use the Windows Management Instrumentation (WMI) class Win32_Processor. In Windows PowerShell, a single line of code that uses the Get-WmiObject cmdlet to do the heavy lifting is all that is required. The syntax of a command to query WMI and return CPU information is shown here:

Get-WmiObject Win32_Processor

And I can shorten that command by using the gwmi alias:

gwmi win32_Processor

In the following figure, I illustrate using the Get-WmiObject command and the default output from the command.

Image of using Get-WmiObject and default output

The Win32_Processor WMI class is documented on MSDN, and the article describes what all of the properties and coded values mean. But RS, for your requirements, I do not need that article. What I do need is a good way to select only the information you require. To do this, I am going to choose which properties I need. I then pipe the returned object to the Select-Object cmdlet. The reason for this is to remove the system properties that are automatically included with the returned WMI object. To avoid typing the properties twice (once for the Get-WmiObject cmdlet and once for the Select-Object cmdlet), I store the array of properties in the $property variable. The revised command is shown here:

$property = "systemname","maxclockspeed","addressWidth",

            "numberOfCores", "NumberOfLogicalProcessors"

Get-WmiObject -class win32_processor -Property  $property |

Select-Object -Property $property 
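On a computer with more than one processor socket, Win32_Processor returns one object per socket. If you want totals rather than per-socket values, a quick sketch that uses Measure-Object looks like this:

Get-WmiObject -Class win32_processor |
  Measure-Object -Property numberOfCores, NumberOfLogicalProcessors -Sum |
  Select-Object -Property Property, Sum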

RS, you mentioned wanting to query computers on your network. The easy way to do this is to use the Active Directory cmdlets. I have an entire series of articles that talk about how to get the Active Directory cmdlets, and how to load and use them. You should refer to that series if you have questions about using Active Directory cmdlets.

RS, I wrote a script called GetAdComputersAndWMIinfo.ps1. The complete text of this script appears here.

GetAdComputersAndWMIinfo.ps1

Import-Module ActiveDirectory
$pingConfig = @{
    "count" = 1
    "bufferSize" = 15
    "delay" = 1
    "EA" = 0 }
$computer = $cn = $null
$cred = Get-Credential
Get-ADComputer -filter * -Credential $cred |
 ForEach-Object {
                 if(Test-Connection -ComputerName $_.dnshostname @pingconfig)
                   { $computer += $_.dnshostname + "`r`n"} }
$computer = $computer -split "`r`n"
$property = "systemname","maxclockspeed","addressWidth",
            "numberOfCores", "NumberOfLogicalProcessors"
foreach($cn in $computer)
{
 if($cn -match $env:COMPUTERNAME)
   {
   Get-WmiObject -class win32_processor -Property $property |
   Select-Object -Property $property }
 elseif($cn.Length -gt 0)
  {
   Get-WmiObject -class win32_processor -Property $property -cn $cn -cred $cred |
   Select-Object -Property $property } }

The first thing to do is to import the ActiveDirectory module. In a script, I recommend using the complete name for the ActiveDirectory module, instead of using a wildcard character pattern such as *AD*. This is because there are many modules available for download from the Internet that would match the *AD* pattern. If this is the case, you cannot be certain you have actually loaded the ActiveDirectory module. To load the ActiveDirectory module, use the Import-Module cmdlet as shown here:

Import-Module ActiveDirectory

Next, I intend to use splatting to simplify using the Test-Connection cmdlet. I wrote an article about splatting last week. Splatting uses a hash table for the parameters and associated values. This hash table is shown here:

$pingConfig = @{

    "count" = 1

    "bufferSize" = 15

    "delay" = 1

    "EA" = 0 }

I then initialize a couple of variables. This helps when running the command multiple times inside the Windows PowerShell ISE. I also retrieve credentials via the Get-Credential cmdlet. These two commands are shown here:

$computer = $cn = $null

$cred = Get-Credential

Now, I use the Get-ADComputer cmdlet to retrieve a listing of computers from Active Directory Domain Services. I use the ForEach-Object cmdlet and pass the host names to the Test-Connection cmdlet to ensure each computer is online. I then append each online computer's name, followed by a carriage return and line feed, to the $computer variable. This is shown here:

Get-ADComputer -filter * -Credential $cred |

 ForEach-Object {

                 if(Test-Connection -ComputerName $_.dnshostname @pingconfig)

                   { $computer += $_.dnshostname + "`r`n"} }

What gets built this way is a single string rather than an array. I split the string based on the carriage return and line feed characters “`r`n” and create an array that contains the name of each computer in its own element. This process leaves an empty element at the end of the array; that empty element will be dealt with later in the script. Here is the code that creates the new array of computer names:

$computer = $computer -split "`r`n"
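As an alternative sketch, the string building and splitting can be skipped entirely by letting Windows PowerShell collect the names into an array; this is not what the script above does, but it produces the same list of online computers:

$computer = Get-ADComputer -Filter * -Credential $cred |
  Where-Object { Test-Connection -ComputerName $_.dnshostname @pingConfig } |
  ForEach-Object { $_.dnshostname }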

I now define an array of property names that are to be collected from WMI. This is a straightforward value assignment:

$property = "systemname","maxclockspeed","addressWidth",

            "numberOfCores", "NumberOfLogicalProcessors"

The online computers are stored in the $computer variable. I use the foreach statement to walk through the array of computer names. If the computer name matches the local computer name, I do not pass credentials, because WMI does not accept alternate credentials for a local connection and the command would fail. In addition, I check that the computer name is greater than 0 in length, which takes care of the empty element at the end of the array. This portion of the code is shown here:

foreach($cn in $computer)

{

 if($cn -match $env:COMPUTERNAME)

   {

   Get-WmiObject -class win32_processor -Property  $property |

   Select-Object -Property $property }

 elseif($cn.Length -gt 0)

  {

   Get-WmiObject -class win32_processor -Property $property -cn $cn -cred $cred |

   Select-Object -Property $property } }

When the script runs, output similar to that shown in the following figure is displayed.

 Image of output when script runs

RS, that is all there is to using the Active Directory module to retrieve computer names, and to use WMI to query for the processor information.

 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Create a Custom Object from WMI by Using PowerShell


Summary: Create a custom object from WMI to display processor and operating system information using Windows PowerShell.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! Your script yesterday was pretty cool. However, in addition to obtaining information about the processor, I also need to know whether the operating system is 32-bit or 64-bit, the version of Windows that is installed, and the service pack that is installed. I hate to be picky, but this is the information I need in order to plan for our upgrade project.

—BB

 

Hey, Scripting Guy! AnswerHello BB,

Microsoft Scripting Guy Ed Wilson here. First, I need to apologize to the Scripting Wife. Yesterday was her birthday, and I did not mention it in the Hey, Scripting Guy! Blog. Sorry!

Beyond the disappointed Scripting Wife, BB, I am glad you enjoyed yesterday’s article. It is one of those things that sort of gets out of control. I begin working on a script and keep adding to it. So I will simply continue adding to it for today.

First, make sure you have the script from yesterday’s Hey, Scripting Guy! article. The script, as it stood at the end of yesterday, is shown here:

Import-Module ActiveDirectory

$pingConfig = @{

    "count" = 1

    "bufferSize" = 15

    "delay" = 1

    "EA" = 0 }

$computer = $cn = $null

$cred = Get-Credential

 Get-ADComputer -filter * -Credential $cred |

 ForEach-Object {

                 if(Test-Connection -ComputerName $_.dnshostname @pingconfig)

                   { $computer += $_.dnshostname + "`r`n"} }

$computer = $computer -split "`r`n"

$property = "systemname","maxclockspeed","addressWidth",

            "numberOfCores", "NumberOfLogicalProcessors"

foreach($cn in $computer)

{

 if($cn -match $env:COMPUTERNAME)

   {

   Get-WmiObject -class win32_processor -Property  $property |

   Select-Object -Property $property }

 elseif($cn.Length -gt 0)

  {

   Get-WmiObject -class win32_processor -Property $property -cn $cn -cred $cred |

   Select-Object -Property $property } }

 

Now, let’s see if I can find the additional information you need for your upgrade project. Actually, it is pretty simple because everything you need is in the Win32_OperatingSystem WMI class. The Win32_OperatingSystem WMI class is documented on MSDN, but the properties needed for today’s script are rather straightforward and do not need extensive documentation. A quick check of the Win32_OperatingSystem WMI class reveals everything I need. The command I use to perform this check is shown here (gwmi is an alias for Get-WmiObject and fl is an alias for Format-List):

gwmi win32_operatingsystem | fl *

The command and associated output are shown in the following figure.

Image of command and associated output

The revised script is named GetADComputersAndWMiProcessorAndOSInfo.ps1.

GetADComputersAndWMiProcessorAndOSInfo

Import-Module ActiveDirectory
$pingConfig = @{
    "count" = 1
    "bufferSize" = 15
    "delay" = 1
    "EA" = 0 }
$computer = $cn = $null
$cred = Get-Credential
Get-ADComputer -filter * -Credential $cred |
 ForEach-Object {
                 if(Test-Connection -ComputerName $_.dnshostname @pingconfig)
                   { $computer += $_.dnshostname + "`r`n"} }
$computer = $computer -split "`r`n"
$property = "systemname","maxclockspeed","addressWidth",
            "numberOfCores", "NumberOfLogicalProcessors"
$osProperty = "Caption", "OSArchitecture","ServicePackMajorVersion"
foreach($cn in $computer)
{
 if($cn -match $env:COMPUTERNAME)
   {
   $obj = Get-WmiObject -class win32_processor -Property $property |
          Select-Object -Property $property
   $os =  Get-WmiObject -class win32_OperatingSystem -Property $osproperty |
          Select-Object -Property $osproperty
   } #end if
 elseif($cn.Length -gt 0)
  {
   $obj = Get-WmiObject -class win32_processor -Property $property -cn $cn -cred $cred |
   Select-Object -Property $property
   $os = Get-WmiObject -class win32_OperatingSystem -Property $osproperty -cn $cn -cred $cred |
   Select-Object -Property $osproperty
  } #end elseif
  New-Object psobject -Property @{
   "name" = $obj.systemname
   "speed" = $obj.maxclockspeed
   "addressWidth" = $obj.addressWidth
   "numberCores" = $obj.numberOfCores
   "numberLogicalProcessors" = $obj.NumberOfLogicalProcessors
   "OSname" = $os.Caption
   "OSArchitecture" = $os.OSArchitecture
   "ServicePack" = $os.ServicePackMajorVersion
   }
  #$os
 } #END FOREACH

When the script runs, the first thing that happens is that a credential dialog box is displayed. This is because I request credentials for connecting to remote systems. The credentials are not used to connect to the local computer. The credential dialog box is shown in the following figure.

Image of credential dialog box

I added an array of properties to store the operating system information:

$osProperty = "Caption", "OSArchitecture","ServicePackMajorVersion"

This change is shown in the following figure.

Image of the change

In addition, I changed the portion of the script inside the foreach loop so that I store the processor object instead of emitting it directly. I also added a WMI query to retrieve the operating system information. The revised section is shown here:

foreach($cn in $computer)

{

 if($cn -match $env:COMPUTERNAME)

   {

   $obj = Get-WmiObject -class win32_processor -Property  $property |

          Select-Object -Property $property

   $os =  Get-WmiObject -class win32_OperatingSystem -Property  $osproperty |

          Select-Object -Property $osproperty

   } #end if     

 elseif($cn.Length -gt 0)

  {

   $obj = Get-WmiObject -class win32_processor -Property $property -cn $cn -cred $cred |

   Select-Object -Property $property

   $os = Get-WmiObject -class win32_OperatingSystem -Property $osproperty -cn $cn -cred $cred |

   Select-Object -Property $osproperty

  } #end elseif

This portion of the revised code is shown in the following figure.

Image of this portion of revised code

To make it easier to work with the output, I decided to return a custom object. This object contains information from both WMI classes. In addition, I changed some of the column headings to make it easier to read. This section of the script that creates the new object is shown here:

New-Object psobject -Property @{

   "name" = $obj.systemname

   "speed" = $obj.maxclockspeed

   "addressWidth" = $obj.addressWidth

   "numberCores" = $obj.numberOfCores

   "numberLogicalProcessors" = $obj.NumberOfLogicalProcessors

   "OSname" = $os.Caption

   "OSArchitecture" = $os.OSArchitecture

   "ServicePack" = $os.ServicePackMajorVersion

   } #end new object

This portion of the new code is shown in the following figure.

Image of this portion of new code

When the script runs, the following output is displayed.

Image of output displayed when script is run
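Because the script emits objects, the output is easy to reuse. Here is a minimal sketch (the file paths are hypothetical) that captures the results and writes them to a CSV file for the upgrade project:

$report = & 'C:\Scripts\GetADComputersAndWMiProcessorAndOSInfo.ps1'
$report | Sort-Object OSname | Export-Csv -Path 'C:\Reports\UpgradeAudit.csv' -NoTypeInformation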

 

BB, that is all there is to using Windows PowerShell to query for both processor and operating system information. 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Simplify Creating Items with PowerShell Providers


Summary: Learn how to use the same syntax to create different types of items via Windows PowerShell providers.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! I am wondering about the Windows PowerShell idea of providers. I have seen you mention them before, but not recently. I am not even sure that the Scripting Wife has talked about Windows PowerShell providers. Is this something that is not very important? Have you just overlooked something? What is the deal?

—GB
 

Hey, Scripting Guy! AnswerHello GB,

Microsoft Scripting Guy Ed Wilson here. This week I am onsite with customers talking to them about Windows PowerShell. In fact, I had a number of questions about providers today, so the ideas are fresh in my mind. I cannot check to see in how many articles I have actually mentioned Windows PowerShell providers, because the search is not working very well on the Hey, Scripting Guy! Blog right now. We are in the process of transitioning to a new search mechanism, and it is busy rebuilding its indexes.

The topic of providers has not been overlooked, but maybe just not highlighted too much. For example, in getting ready for the 2010 Scripting Games, the Scripting Wife had a lesson using the registry provider.

The cool thing about Windows PowerShell providers is they provide a single way to access different types of data. For example, the provider cmdlets all have the word Item in the noun (either in part or completely). These cmdlets are shown here:

PS C:\> Get-Command -Noun *item* | Select-Object name

 

Name

Clear-Item

Clear-ItemProperty

Copy-Item

Copy-ItemProperty

Get-ChildItem

Get-Item

Get-ItemProperty

Invoke-Item

Move-Item

Move-ItemProperty

New-Item

New-ItemProperty

Remove-Item

Remove-ItemProperty

Rename-Item

Rename-ItemProperty

Set-Item

Set-ItemProperty

The reason they use the word Item is that they can work against different data sources, so an item could be one of many different types. For example, if I am on my C:\ drive and I use the New-Item cmdlet, I can create either a new file or a new folder. The command uses the file system provider, and I must tell the provider what I want to create: either a file or a folder. This is shown here:

PS C:\> New-Item -Name example1 -Path c: -ItemType directory

    Directory: Microsoft.PowerShell.Core\FileSystem::C:\

Mode               LastWriteTime                            Length Name

d----                 9/27/2011  8:51 PM                    <DIR> example1 

 

PS C:\> New-Item -Name example.txt -Path C:\example1 -ItemType file

    Directory: Microsoft.PowerShell.Core\FileSystem::C:\example1

Mode               LastWriteTime                Length Name

-a---                 9/27/2011  8:52 PM        0 example.txt

 

If I change to the variable drive and use the New-Item cmdlet, I will create a new variable. Only one type of item exists on a variable drive (a variable), so there is no need to use the itemtype parameter with the command. This technique is shown here where I use the New-Item cmdlet to create a new variable. I then call the variable to illustrate that it is created, and that it contains the string value “example variable.”

PS C:\> Set-Location variable:

PS Variable:\> New-Item -Name example -Value "example variable"

 

Name                           Value

Example                        example variable

 

PS Variable:\> $example

example variable

PS Variable:\>

 

When I change my working drive to the env (environmental variable drive) drive, the exact same command I used to create an example variable creates an example environmental variable. In the following code, I first change my working drive to the env: drive. Next, I create an environmental variable named example, and I assign the string value “example variable” to this environmental variable. I then retrieve the value of the new environmental variable by accessing it from the $env: drive. This code is shown here:

PS Variable:\> Set-Location env:

PS Env:\> New-Item -Name example -Value "example variable"

 

Name               Value

Example            example variable 

 

PS Env:\> $env:example

example variable

PS Env:\>

Once again, I use the Set-Location cmdlet to change to a new drive. This time, it is the alias: drive. I again use the New-Item cmdlet in exactly the same way I used it earlier: to create a new alias for the Get-PSDrive cmdlet. To create the new alias, I give it a name, psd, and I assign a value, Get-PSDrive. The command and associated output are shown here:

PS Env:\> Set-Location alias:

PS Alias:\> New-Item -Name psd -Value Get-PSDrive

 

CommandType              Name               Definition

Alias                              psd                   Get-PSDrive

To see if the new alias works, I type psd on the Windows PowerShell console. The output is shown here:

PS Alias:\> psd

 

Name               Used (GB)          Free (GB)           Provider                        Root

Alias                                                                  Alias

C                      99.85                47.66                FileSystem                     C:\

Cert                                                                  Certificate                      \

D                                                                      FileSystem                     D:\

E                                                                      FileSystem                     E:\

Env                                                                   Environment

Feed                                                                 FeedStore

Function                                                            Function

Gac                                                                   AssemblyCache              Gac

HKCU                                                                Registry                         HKEY_CURRENT_USER

HKLM                                                               Registry                         HKEY_LOCAL_MACHINE

Pscx                                                                  PscxSettings

Variable                                                                        Variable

WSMan                                                             WSMan

I can even use the function drive to create a new function. This is COOL! In the following example, I create a new function that returns information about the winword process (the process name used by Microsoft Word). Normally, I would need to write the function as is shown here:

Function get-word

{

 Get-process winword

}

The key items to creating the function are the function keyword, the name of the function (get-word), the script block {}, and the code itself: Get-process winword.

As shown in the code that follows, by using the function drive and the New-Item cmdlet, I leave off the braces that mark the script block and the function keyword:

PS Alias:\> sl function:

PS Function:\> New-Item -Name get-word -Value "get-process winword"

 

CommandType              Name                           Definition

Function                        get-word                       get-process winword

But does the new Get-Word function actually work? As shown here, it does work:

PS Function:\> get-word

 

Handles             NPM(K)             PM(K)   WS(K)   VM(M)  CPU(s)               Id         ProcessName

746                   74                     37872   87000   404       31.72                1280     WINWORD

If I pop over to the HKCU registry drive, I can even use the New-Item cmdlet to create a new registry key:

PS Function:\> Set-Location hkcu:

PS HKCU:\> New-Item -Path HKCU:\Software -Name example

    Hive: HKEY_CURRENT_USER\Software

SKC      VC        Name               Property

0          0          example            {}
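The same provider-based syntax extends to registry values. Here is a minimal sketch (the value name and data are made up) that uses the New-ItemProperty cmdlet to add a value to the key just created:

New-ItemProperty -Path HKCU:\Software\example -Name build -Value 1 -PropertyType DWord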

 

So, GB, I hope this quick tour of the various Windows PowerShell drives that the Windows PowerShell providers create will inspire you to experiment with this powerful tool. The really revolutionary thing is using exactly the same command—or at least the same syntax—to create a new alias, file, folder, variable, environmental variable, function, and registry key. Exploring the Windows PowerShell provider subsystem will pay great dividends.

 

Tomorrow, I will continue talking about Windows PowerShell providers.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Creating PowerShell Drives for Fun and Profit


Summary: Learn different ways to work with Windows PowerShell drives, including using WMI to find the root.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! I have been trying to grasp this idea of Windows PowerShell drives. I am not certain how they are useful, or how I can find information about them. Can you help?

—BP

 

Hey, Scripting Guy! AnswerHello BP,

Microsoft Scripting Guy Ed Wilson here. Of course, I can help. I wrote about PSDrives three weeks ago, so you may want to refer to that article for additional information about PSDrives.

The first thing to understand about PSDrives is that PSDrives are used as a way to abstract the complexity of accessing different types of information. As I mentioned yesterday, Windows PowerShell providers are used to perform the abstraction; PSDrives provide the way to interact with that data. The cool thing about Windows PowerShell providers is they do not need to be written by the Windows PowerShell team; using the software development kit (SDK), anyone can write their own Windows PowerShell provider and expose data sources to the Windows PowerShell user. For example, someone could write a Windows PowerShell provider for an XML document. You could then use Set-Location to the XML document drive, use Get-ChildItem, and return information from that document. The methodology mimics the same methodology used with the file system.

In my article about PSDrives from three weeks ago, I illustrate creating a new PSDrive that is centered on a particular folder. This makes it easier for me to work with all of my Hey, Scripting Guy! Blog posts.

Oh! By the way, I am rapidly approaching a major milestone on the Hey, Scripting Guy! Blog. To date, the Hey Scripting Guy! Blog has 1,049 posts written about VBScript; I only wrote a few of those articles. Most of those were written by the previous Scripting Guys. One of the first things I did when becoming the Microsoft Scripting Guy was shift the blog’s emphasis to Windows PowerShell. The Hey, Scripting Guy! Blog now has 1,004 posts about Windows PowerShell. I did not write all of those articles, because we have had 184 blog articles written by guest bloggers, including posts written by Honorary Scripting Guys. So after I write another 230 blog posts about Windows PowerShell, the blog will officially be weighted in favor of Windows PowerShell.

I am not limited to creating new PSDrives from the filesystem provider. I can create new drives that expose data from the other providers as well. For example, if I am interested in working with the registry, I might want a new PSDrive. The first thing to do is to see which registry drives are available. One way to find this information is to use the Get-PSDrive cmdlet, as shown here:

PS C:\> Get-PSDrive -PSProvider registry

 

Name               Used (GB)          Free (GB)           Provider            Root

HKCU                                                                Registry             HKEY_CURRENT_USER

HKLM                                                               Registry             HKEY_LOCAL_MACHINE

The previous command reveals two registry drives: HKCU and HKLM. To create a new PSDrive, I use the New-PSDrive Windows PowerShell cmdlet. When using the New-PSDrive cmdlet, I need to specify the provider (the registry provider for this example), a name, and the root location for the drive. The command and associated output are shown here:

PS C:\> New-PSDrive -PSProvider registry -Name HKCR -Root HKEY_CLASSES_ROOT

 

Name               Used (GB)          Free (GB)           Provider            Root

HKCR                                                                Registry             HKEY_CLASSES_ROOT

To use the drive, I can use it like any other drive. If I do not put a colon at the end of the drive name, an error is displayed. This is illustrated here:

PS C:\> Get-ChildItem HKCR

Get-ChildItem : Cannot find path 'C:\HKCR' because it does not exist.

At line:1 char:14

+ Get-ChildItem <<<<  HKCR

    + CategoryInfo          : ObjectNotFound: (C:\HKCR:String) [Get-ChildItem], ItemNotFoundExcept

   ion

    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand

When I add the colon at the end of the drive name, the command works as shown here:

PS C:\> Get-ChildItem HKCR:

    Hive: HKEY_CLASSES_ROOT

 

SKC      VC Name                      Property

3          16 *                              {ContentViewModeLayoutPatternForBrowse, ContentViewModeFo...

1          2 .386                           {(default), PerceivedType}

2          3 .3g2                           {(default), PerceivedType, Content Type}

2          3 .3gp                           {(default), PerceivedType, Content Type}

<<OUTPUT TRUNCATED>>

I do not have to create a PSDrive at a root location. I can specify a different location. For example, I might want a PSDrive that exposes the HKEY_LOCAL_MACHINE\SOFTWARE hive. To do this, I use the New-PSDrive cmdlet, specify the registry provider, give it a name, and identify the root of the drive. The resultant command is shown here:

PS C:\> New-PSDrive -PSProvider registry -Name sw -Root HKLM:\SOFTWARE

 

Name               Used (GB)          Free (GB)           Provider            Root

Sw                                                                    Registry             HKEY_LOCAL_MACHINE\SOFTWARE

I can then change my working location to the new software drive, and use the Get-ChildItem cmdlet to explore the drive. These two commands are shown here:

PS C:\> Set-Location sw:

PS sw:\> Get-ChildItem

 

 

    Hive: HKEY_LOCAL_MACHINE\SOFTWARE

 

                                                                       

SKC                  VC Name                                  Property

0                      1 7-Zip                                      {Path}

1                      0 Analog Devices                       {}

1                      0 ATI Technologies                     {}

756                   1 Classes                                   {(default)}

8                      0 Clients                                    {}

1                      1 CXT                                        {IsDriverLoaded}

1                      0 Hewlett-Packard                      {}

3                      0 IBM                                        {}

1                      0 IM Providers                           {}

2                      0 InstalledOptions                      {}

1                      0 Intel                                       {}

3                      1 LENOVO                                 {(default)}

198                   0 Microsoft                                {}

3                      0 MozillaPlugins                         {}

1                      0 MSIT TPM Crypto Provider       {}

1                      0 nsoftware                               {}

4                      1 NVIDIA Corporation                {nvDelFiles}

2                      0 ODBC                                     {}

4                      0 Policies                                   {}

5                      0 PowerPivot                             {}

0                      16 RegisteredApplications           {Windows Address Book, Paint, Windows Search,

1                      0 Sonic                                      {}

4                      0 Synaptics                                {}

1                      1 tdbg_trace                              {(default)}

35                     1 Wow6432Node                       {(default)}

1                      0 Xerox                                     {}

Some companies, such as Microsoft, make extensive use of certificates. A useful PSDrive to create is one that exposes the My certificate store for the current user. In the following command, I use the certificate provider to create a new certificate drive named mycerts.

PS C:\> New-PSDrive -Name mycerts -PSProvider certificate -Root cert:\CurrentUser\My

 

Name               Used (GB)          Free (GB)           Provider            Root

Mycerts                                                             Certificate          \CurrentUser\My

After I have the mycerts: drive, I can easily query the drive to find certificates that are going to expire in the next month:

PS C:\> Get-ChildItem mycerts: | where { $_.notafter -le "11/1/2011" } | select thumbprint, notafter

 

Thumbprint                                                                   NotAfter

4D43DC0CDFE1FDF31857FFA03120ACF4DB5C3CE6           10/1/2011 2:01:33 PM
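The hard-coded date works, but the same query can be written against the current date so that it always looks one month ahead. Here is a minimal sketch:

Get-ChildItem mycerts: | Where-Object { $_.NotAfter -le (Get-Date).AddMonths(1) } |
  Select-Object -Property Thumbprint, NotAfter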

I do not have to use a specific location when creating a new PSDrive. For example, I can use an environmental variable if I want to. In the following example, I create a new PSDrive called tmp that points to the temp folder in my profile. To get at this location, I use the $env:temp variable:

PS C:\> New-PSDrive -Name tmp -PSProvider filesystem -Root $env:temp

 

Name               Used (GB)          Free (GB)           Provider            Root

Tmp                                          47.49                FileSystem         C:\Users\edwils\AppData\Local\Temp

Well, if I can use an environmental variable when creating a new PSDrive, can I use a WMI query? In the following command, I use a WMI query to find the drive on my machine that has the greatest amount of free space. I then create a new PSDrive called data: that is located on that drive. This is a really cool way to have immediate access to the drive on your machine that has the greatest amount of free space. Here is the command:

PS C:\> New-PSDrive -Name data -PSProvider filesystem -Root (gwmi win32_logicaldisk | sort freespace -Descending | select deviceID -First 1).deviceID

 

Name               Used (GB)          Free (GB)           Provider            Root     CurrentLocation

Data                 100.02               47.49                FileSystem         C:\

 

After the drive is created, I can easily set my working location to that drive:

PS C:\> Set-Location data:

PS data:\>

 

By the way, the WMI query I used to find the drive with the most free space is shown here (it is useful in and of itself):

(gwmi win32_logicaldisk | sort freespace -Descending | select deviceID -First 1).deviceID 
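Keep in mind that drives created with New-PSDrive exist only for the current session unless you recreate them, for example in your profile. When you are finished with the data: drive, a quick cleanup sketch looks like this (move off the drive first, because a drive cannot be removed while it is your current location):

Set-Location C:\
Remove-PSDrive -Name data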

 

Well, BP, that is all there is to playing around with new PSDrives. Join me tomorrow when I will have a guest blog article written by Boe Prox. It is cool, and you will not want to miss it.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Avoid Overload by Scaling and Queuing PowerShell Background Jobs


Summary: Use scaling and queuing Windows PowerShell background jobs to avoid system overload.

 

Microsoft Scripting Guy Ed Wilson here. Today I am proud to announce the return of Boe Prox to the blog.

 

Photo of Boe Prox

Boe Prox is currently a senior systems administrator with BAE Systems. He has been in the IT industry since 2003 and has spent the past three years working with VBScript and Windows PowerShell. Boe looks to script whatever he can, whenever he can. He is also a moderator on the Hey, Scripting Guy! Forum. You can check out his blog at http://boeprox.wordpress.com and also see his current project, the WSUS Administrator module, published on CodePlex.

Take it away Boe!

 

Ed asked us (the Hey, Scripting Guys! Forum moderators) what one of our favorite Windows PowerShell tricks is. After a little bit of time thinking, I decided that scaling and queuing background jobs to accomplish a task is one of my favorite tricks (and something that I have used quite a bit in the past few months).

The Windows PowerShell team blogged about this topic earlier this year and explained what needs to be done to configure a script or function to process a number of items within a certain threshold.

I have used this technique (modified for my own use) on several occasions and it has worked like a champ each time. In fact, this is one of the key components in my project, PoshPAIG. By using this, I was able to eliminate most of the UI freeze-up that occurs when you attempt to run a Windows PowerShell command under the same thread as the UI.

One way I have personally used this is for a data migration where I only want a limited number of copy jobs running at a time. As one job finishes, another job begins copying the next set of folders that I have queued. The example that I will show you uses this technique to perform a monitored reboot of a number of systems, with a specific threshold for how many systems can be rebooted at a time. In this case, I will track five systems at a time, and a warning will appear if a machine does not come back up within five minutes of being rebooted. The script I am using is available on the TechNet Script Gallery, and I will go through it in chunks to show what is going on.

I start by running my script, named Restart-ComputerJob:

.\Restart-ComputerJob -MaxJobs 5 -InputObject (Get-Content hosts.txt)

#Define report
$Data = @()
$Start = Get-Date
#Queue the items up
$queue = [System.Collections.Queue]::Synchronized( (New-Object System.Collections.Queue) )
foreach($item in $InputObject) {
    Write-Verbose "Adding $item to queue"
    $queue.Enqueue($item)
}

 

Here, I am defining an empty collection that will be used later to store the data from each job that has finished. I have my collection of computers defined from the $InputObject variable. Each item is added to the $Queue, which was created using the System.Collections.Queue class. Using the Synchronized method allows only one job to access the queue at a time:

# Start up to the max number of concurrent jobs
# Each job will take care of running the rest
For( $i = 0; $i -lt $MaxJobs; $i++ ) {
    Restart-ServerFromQueue
}

Now that we have the collection queued, we can begin creating jobs to start rebooting the systems in the Restart-ServerFromQueue function. I use a For statement with the $MaxJobs variable, which is either supplied by the user or left at the default value of five, to limit the number of jobs that run at any given time.

Function Global:Restart-ServerFromQueue {
    $server = $queue.Dequeue()
    $j = Start-Job -Name $server -ScriptBlock {
            param($server,$location)
            $i=0
            If (Test-Connection -Computer $server -count 1 -Quiet) {
                Try {
                    Restart-Computer -ComputerName $server -Force -ea stop
                    Do {
                        Start-Sleep -Seconds 2
                        Write-Verbose "Waiting for $server to shutdown..."
                        }
                    While ((Test-Connection -ComputerName $server -Count 1 -Quiet))
                    Do {
                        Start-Sleep -Seconds 5
                        $i++
                        Write-Verbose "$server down...$($i)"
                        #5 minute threshold (5*60)
                        If($i -eq 60) {
                            Write-Warning "$server did not come back online from reboot!"
                            Write-Output $False
                            }
                        }
                    While (-NOT(Test-Connection -ComputerName $server -Count 1 -Quiet))
                    Write-Verbose "$Server is back up"
                    Write-Output $True
                } Catch {
                    Write-Warning "$($Error[0])"
                    Write-Output $False
                }
            } Else {
                Write-Output $False
            }
    } -ArgumentList $server

 

In the beginning part of the Restart-ServerFromQueue function, I first get the system name by using the $Queue.Dequeue() method, which removes the system from the queue, and I save the name in the $Server variable. From there, I create the new job and save the job object to a variable that will be used later. The job performs a reboot of the system and then goes into a monitoring phase until the system comes back online. If it doesn’t come back online after five minutes, the system is deemed offline and a Boolean value of $False is returned; otherwise, if the system is online, $True is returned.

    Register-ObjectEvent -InputObject $j -EventName StateChanged -Action {
        #Set verbose to continue to see the output on the screen
        $VerbosePreference = 'continue'
        $serverupdate = $eventsubscriber.sourceobject.name
        $results = Receive-Job -Job $eventsubscriber.sourceobject
        Write-Verbose "[$(Get-Date)]::Removing Job: $($eventsubscriber.sourceobject.Name)"
        Remove-Job -Job $eventsubscriber.sourceobject
        Write-Verbose "[$(Get-Date)]::Unregistering Event: $($eventsubscriber.SourceIdentifier)"
        Unregister-Event $eventsubscriber.SourceIdentifier
        Write-Verbose "[$(Get-Date)]::Removing Event Job: $($eventsubscriber.SourceIdentifier)"
        Remove-Job -Name $eventsubscriber.SourceIdentifier
        If ($results) {
            Write-Verbose "[$(Get-Date)]::$serverupdate is online"
            $temp = "" | Select Computer, IsOnline
            $temp.computer = $serverupdate
            $temp.IsOnline = $True
            } Else {
            Write-Verbose "[$(Get-Date)]::$serverupdate is offline"
            $temp = "" | Select Computer, IsOnline
            $temp.computer = $serverupdate
            $temp.IsOnline = $False
            }
        $Global:Data += $temp
        If ($queue.count -gt 0 -OR (Get-Job)) {
            Write-Verbose "[$(Get-Date)]::Running Restart-ServerFromQueue"
            Restart-ServerFromQueue
        } ElseIf (@(Get-Job).count -eq 0) {
            $End = New-Timespan $Start (Get-Date)
            Write-Host "$('Completed in: {0}' -f $end)"
            Write-Host "Check the `$Data variable for report of online/offline systems"
            Remove-Variable Queue -Scope Global
            Remove-Variable Start -Scope Global
        }
    } | Out-Null
    Write-Verbose "[$(Get-Date)]::Created Event for $($J.Name)"
}

The last piece of the function holds the event information that is used to track each job. I pass the $j variable, which holds the job object for the most recently started job, to the Register-ObjectEvent cmdlet and watch for the StateChanged event of the job. When the job changes its state from Running to anything else, the registered event performs the action defined in the -Action parameter. Because this parameter takes a script block, I can set up a series of commands that gather the results of the job and save them to a report. I have also added to this action block some commands to perform cleanup on both the job that finished and the associated event subscription. By default, I have $VerbosePreference set to Continue, which displays some extra messages after each job finishes. You can set this to SilentlyContinue if you do not wish to see these messages.

The following figure shows this in action.

Image of script in action

As you can see, most of the systems are offline. DC1 is the one server that does get rebooted, and the job continues to monitor the server until it comes back online. By the way, did I mention I like to use Write-Verbose? (Another nice tip is to use Write-Verbose in your code to track your script in action.) Here you see where each system is added to the queue and where the first five are started as jobs while the sixth system patiently waits until the first job has completed. You can also see where each job finishes and another begins, which includes removal of the job, the event job, and the event subscription itself. After the last job is finished, a message is displayed showing how long it took to complete all of the jobs and reminding you to check the $Data variable for a report of systems that are either offline or came back up after the reboot.
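When the run completes, the $Data collection can be filtered like any other set of objects. Here is a minimal sketch that lists only the systems that never came back online:

$Data | Where-Object { -not $_.IsOnline } | Format-Table Computer, IsOnline -AutoSize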

So there you have it. You can harness the power of background jobs and events to create a set of jobs that updates itself in the background without any user interaction. And it also frees up your console to perform other work while the jobs run in the background. I hope everyone enjoyed this article and can use this technique in their daily tasks or for some other project. Thanks again also to the Windows PowerShell team for their excellent article that helped pave the way in making this work!

 

I want to thank Boe Prox for taking the time to share this really cool Windows PowerShell tip.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Use PowerShell to Clean Up Active Directory After a Stalled Migration


Summary: Use Windows PowerShell to clean up extended user attributes following a stalled Active Directory migration.

 

Microsoft Scripting Guy Ed Wilson here. Today I am proud to announce the return of Microsoft MVP Nicolas Blank to the Hey, Scripting Guy! Blog.

Photo of Nicolas Blank

Nicolas has more than 14 years’ experience with various versions of Exchange. He is a messaging architect specializing in consulting on and building on-premises and cloud-based Exchange messaging systems, as well as their interoperation with various vendor ecosystems.

Nicolas currently holds the status of Microsoft Certified Master Exchange 2010 and Microsoft Most Valuable Professional for Microsoft Exchange since March 2007.

Nicolas writes regularly about Exchange and messaging topics on blankmanblog.com, and he provides content for the IT Pro Africa project, which contributes toward building IT in Africa.

Now, without further ado, here is Nicolas to talk to us about writing self-executing code in Windows PowerShell.

 

For this post, I’m using the free (I like free) Active Directory cmdlets from Quest, simply because for the scenario I am addressing I am processing extended attributes on user and group objects, which aren’t necessarily mail or mailbox enabled. As a rule, if your users and groups are mail or mailbox enabled, the Exchange Management Shell cmdlets could also be used, albeit with different syntax.

I am also leveraging another feature of Windows PowerShell: the ability to execute strings as code. You can do this with a cmdlet known as Invoke-Expression, but before I give too much away, let me state my case.
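As a quick aside (this two-line snippet is mine, for illustration only, and is not part of the script that follows), the core idea looks like this:

# Build a command as plain text, then execute the text
$code = 'Get-Date | Select-Object -ExpandProperty Year'
Invoke-Expression $code   # runs the text as a pipeline and returns the current year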

So here’s my premise. Active Directory Domain Services has been populated extensively during a migration, which had stalled and needed to be restarted. This means that I have a boatload of populated extended attributes that I need to preserve; however, I have others that I need to remove. This means it isn’t as simple as setting the value of every attribute to $null. I need to examine every attribute value in turn, and then decide if I should remove it or preserve it.

So simply stated, I need to examine the 15 extended attributes on the user object for one of two possible values. If an attribute has a value and the value matches a predetermined pattern, set the attribute to $null. If it has a value, but the value “makes sense” in business terms, leave it alone.

So why write code that writes code? I am glad you asked!

I will admit it: I am lazy, and I do not like doing the same thing more than once if I can avoid it. In the past, I would have been content with writing large amounts of code, but in this case, because the language I am coding in is Windows PowerShell, I am able to write Windows PowerShell that writes and executes Windows PowerShell as it is needed. The other reason is the ease of debugging one line of code instead of 15 lines of code. I will paste the complete code below, but first I will discuss the pieces we need. There is a ton of potential debugging to be done here, so I have built in output and debugging as I have gone along.

We start off by populating $Objects with the complete set of objects we want to process, which is all users and groups in the domain. Note that I am only choosing to return the attributes I need in order to be as efficient as possible; otherwise, the query may take a while and be expensive in Active Directory terms.

Note that $sizelimit is defined as a variable, so I can quickly crank up or down the numbers of objects as I am testing:

#if Sizelimit = 0 then no limits apply, otherwise limit  the number of objects to build the AD Query

$sizelimit = 0

#Only return user AND group objects without discriminating if they are mail enabled or not

$Objects = get-qadobject  -LdapFilter "(|(&(objectCategory=person)(objectclass=user))(objectclass=group))"  -sizelimit $sizelimit -includedproperties userprincipalname, ExtensionAttribute1, ExtensionAttribute2,ExtensionAttribute3, ExtensionAttribute4,ExtensionAttribute5,ExtensionAttribute6,ExtensionAttribute7,ExtensionAttribute8,ExtensionAttribute9,ExtensionAttribute10,ExtensionAttribute11,ExtensionAttribute12,ExtensionAttribute13,ExtensionAttribute14,ExtensionAttribute15 

Now that $Objects is populated, let’s generate our first bit of self-executing code. The first thing I want to do is see if there is a value to act upon. We start a loop counter called $i and give it a value of one, which we increment later:

   $i = 1

    do { #this loop builds the code to form the required 15 Extension attributes and executes it

    #build the string to form the extension attribute

       $str="$"+"object.ExtensionAttribute$i";

       $AttribValue = Invoke-Expression $str

We build the string called $str:

 $str="$"+"object.ExtensionAttribute$i";

We then execute the line using Invoke-Expression and pass the value back to $AttribValue. As the loop iterates this line becomes:

$object.ExtensionAttribute1,

$object.ExtensionAttribute2,

$object.ExtensionAttribute3 … and so forth, up to $object.ExtensionAttribute15.

Each of these execute and populate $AttribValue in the next line:

$AttribValue = Invoke-Expression $str

This allows us to move to the next line, which effectively checks if there is a value to act upon or not:

if ($AttribValue) #If there IS a value to consider

       { Do Stuff}

Moving on—and speaking of free—this is where using the Windows PowerShell ISE comes into its own as a free editor. I am pasting a screenshot of a line of code that shows where strings start and end, which was quite critical in writing this line of code.

$Exec= "(("+$str+".StartsWith("+'"ID:"'+")) -or ("+$str+" -match("+'"^[0-9A-F]{32}$"'+'))) ';

           #$exec

        #if the next line returns true, then a match for one of the conditions is found

          if  (Invoke-Expression $Exec)

{ Do Stuff}

 

Remembering that $str contains $object.ExtensionAttribute1,2,3, we get 15 iterations of this:

(($object.ExtensionAttribute1.StartsWith("ID:")) -or ($object.ExtensionAttribute1 -match ("^[0-9A-F]{32}$")))

That is quite a mouthful. In a nutshell, look for a string that starts with “ID:” or matches a regular expression pattern of 32 characters that are all uppercase hexadecimal digits (0-9 and A-F). Things get quite tricky when you need to build a string that has quotes in it. This is where we use single quotation marks to frame a double quotation mark (in other words, '"'), which returns one literal double quotation mark.

Then, iterate that for extension attribute 1,2,3,4,5,6,7…up to 15.
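If the quoting is hard to follow, here is a tiny sketch of just that trick, typed at the prompt (my illustration, not part of the script):

'"ID:"'                 # five characters: a double quote, I, D, a colon, and a closing double quote
'(' + '"ID:"' + ')'     # concatenation builds the text ("ID:")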

Again you can see how the color-coding in ISE helps to see at a glance what is quoted, and what is code vs. what is commented.

Finally, we reach the line of code that wipes the attribute. I had a challenge here of wanting to suppress errors and write the output to a log at a later date, so I used the following syntax to build a string and execute it:

$Killstring = "get-QADObject "+'"'+$object+'"'+" | set-qadobject -ObjectAttributes @{ExtensionAttribute$i="+"$"+"null}"

invoke-expression $Killstring

 

It builds code that looks like this (here for ExtensionAttribute1; the number changes with each iteration):

get-QADObject $object | set-qadobject -ObjectAttributes @{ExtensionAttribute1=$null}

I could have done two things here: write 15 sets of code with 15 points of error and debug potentially, or make an investment in code that executes itself and modifies itself according to the context of the loop. I chose the latter.

Full code follows:

#Add the Active Roles Snapin

Add-PSSnapin Quest.ActiveRoles.ADManagement

 

#if Sizelimit = 0 then no limits apply, otherwise limit the number of objects to build the AD Query

$sizelimit = 0

 

#Only return user AND group objects without discriminating if they are mail enabled or not

$Objects = get-qadobject  -LdapFilter "(|(&(objectCategory=person)(objectclass=user))(objectclass=group))"  -sizelimit $sizelimit -includedproperties userprincipalname, ExtensionAttribute1, ExtensionAttribute2,ExtensionAttribute3, ExtensionAttribute4,ExtensionAttribute5,ExtensionAttribute6,ExtensionAttribute7,ExtensionAttribute8,ExtensionAttribute9,ExtensionAttribute10,ExtensionAttribute11,ExtensionAttribute12,ExtensionAttribute13,ExtensionAttribute14,ExtensionAttribute15 

 

 

Write-host "Found this many objects: "$objects.count

 

 

#Loop through all the objects and evaluate all 15 Extension attributes

$count=1 #the first attribute starts at one so start the loop at 1

Foreach ($object in $Objects)

{

$complete = (($count / $Objects.count)  * 100) # Grab a quick and dirty percentage count and output via the progress bar

Write-Progress -activity "Evaluating Objects" -status "Percent Complete: $complete" -percentComplete $complete

$count++

 

#I like to build strings like this to output the status as it's only a step away to add it to a log output

$tmpstr = "Evaluating Object Type: " + $object.classname + " Object: " + $object.ntaccountname

write-host  $tmpstr

 

    $i = 1

    do { #this loop builds the code to form the required 15 Extension attributes and executes it

    #build the string to form the extension attribute

       $str="$"+"object.ExtensionAttribute$i";

       $AttribValue = Invoke-Expression $str

       #execute it and examine it for a value. If it's $null there's nothing to do and the condition returns $false

       if ($AttribValue) #If there IS a value to consider continue

       {

        #$exec contains the string which compares for two possible values, a string starting with ID: or a regex for a 32 char GUID        

           $Exec= "(("+$str+".StartsWith("+'"ID:"'+")) -or ("+$str+" -match("+'"^[0-9A-F]{32}$"'+'))) ';

           #uncomment the next line to see what string is built, for debug purposes

           #$exec

        #if the next line returns true, then a match for one of the conditions is found

          if (Invoke-Expression $Exec) #Check if the extension attribute has a qualifying value, i.e. it's not null

          {

              $tmpstr= $object.NTAccountName+" Found Qualifying value in Attribute number "+$i

              write-host $tmpstr

             

              $tmpstr=  "Removing Value:" + $AttribValue

              write-host $tmpstr

             

              #build the command to wipe the affected attribute and execute it

              $Killstring = "get-QADObject "+'"'+$object+'"'+" | set-qadobject -ObjectAttributes @{ExtensionAttribute$i="+"$"+"null}"

            

              #uncomment the next line to see what string is built, for debug purposes

              #$Killstring

             

              #Execute the line of code we built above.

              invoke-expression $Killstring

          }

       } 

      

        $i++}

        while ($i -le 15)

    }   

     

$dt = Get-Date -format "ddMMyyyy_hhmm"

$tmpstr  = "Script Finished " + $dt

write-host $tmpstr

 

I want to thank Nicolas for this great article. I always love seeing how IT pros use Windows PowerShell in the field to solve real-world problems.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 


Use Cool PowerShell Tricks to Simplify Your Scripts


Summary: Learn how to use a collection of cool Windows PowerShell tricks to simplify writing scripts.

 

Microsoft Scripting Guy Ed Wilson here. Today I am proud to announce that Microsoft's newest Windows PowerShell MVP, Bartek Bielawski, returns to the Hey, Scripting Guy! Blog. Congratulations Bartek!

Photo of Bartek Bielawski

Bartek Bielawski has been working in IT for more than 10 years at one company, PAREXEL, a global organization with headquarters in the U.S.

In Bartek’s opinion, Windows PowerShell is the best product from Microsoft. Bartek can be found at his blog and as a moderator on the Scripting Guys Forum.

Here is Bartek!

 

My Favorite Windows PowerShell Tricks

Working interactively with Windows PowerShell can be more convenient and effective if you know and use some tricks that the Windows PowerShell team added here and there. I would like to mention a few that I like the most.


Using scriptblocks as values for parameters that take ValueFromPipelineByPropertyName

Very often, people use Foreach constructs in places where the Foreach-Object cmdlet is not needed. This trick helps to keep code brief and logical without losing the flexibility that Foreach-Object would normally give us. The syntax, as you can see, is not very different from what you would use with Foreach-Object. This simple example shows what I mean:

ls *.ps1 | Rename-Item -NewName { $_.Name -replace 'Untitled(\d)', '$1_NoName' }

Short, simple, and sweet.


Using New-Module –AsCustomObject to create more mature custom objects on-the-fly

Creating custom objects is something decent scripters usually can’t avoid. My favorite way is to use New-Object with the Property parameter and a hash table. There is also a method that gives you two important (in some cases) capabilities that New-Object won’t: adding ScriptMethods and making object properties type-constrained. Sample code:

$Time = New-Module -AsCustomObject -ScriptBlock {

    [TimeSpan]$Span = 0

    function Since {

    param (

        [datetime]$Start

    )

        $Script:Span = New-TimeSpan -Start $Start

        $Span

    }

   

    function Till {

    param (

        [datetime]$End

    )

        $Script:Span = New-TimeSpan -End $End

        $Span

    }

    Export-ModuleMember -Function * -Variable *

}

 

Later, you can’t modify $Time.Span to be of type string; it will end up with exceptions (unless the string can be converted into TimeSpan).


Creating your own type accelerators.

Do you use any .NET Framework type very often? Wouldn’t you like to have it served in a similar way that some types are handled already, with friendly, short names? Well, there is a not-so-complicated way to get there. Joel Bennett (Jaykul) created a great module that includes an Add-Accelerator function (and much, much more); I took the most important part of the code from it:

$xlr8r = [type]::gettype("System.Management.Automation.TypeAccelerators")

$xlr8r::Add('Parser',[System.Management.Automation.PSParser])

[Parser]::Tokenize('Write-Host Foo',[ref]$null)
 

It may even work as a kind of “Using” substitute. If you create accelerators for all types in a given namespace, it will work as it would in C#. There is a command to do exactly that in Jaykul’s module.


Using descriptive errors for enums to get correct argument values

Making errors usually doesn’t help you get closer to solutions. Windows PowerShell is different, though, because many of the errors you see there will actually explain what you can do to fix them. There is one trick that requires making errors on purpose:

Set-ExecutionPolicy -ExecutionPolicy Some -Scope Any

# Error with possible ExecutionPolicy values.

Set-ExecutionPolicy -ExecutionPolicy Restricted -Scope Any

# Error with possible Scope values.

Set-ExecutionPolicy -ExecutionPolicy Restricted -Scope Process

# And now we are locked in the scriptless abyss.

Getting information about correct enumeration values in Windows PowerShell is pretty difficult when you try an “elegant” approach. Passing parameters that simply can’t work will get you there quicker. The error messages for wrong enum values are descriptive enough to help you quickly find the right answer.


Using [scriptblock]::Create() to create executable code on the fly

Another issue you may run into is how to use values passed by users to generate something you can later execute. Invoke-Expression is usually the first cmdlet people choose, but the script block’s static Create method gives you much more control over the code produced:

function Where-ObjectSimple {

param (

    [Parameter(Mandatory = $true)]

    [string]$Property,

    [Parameter(Mandatory = $true)]

    [string]$Operator,

    [Parameter(Mandatory = $true)]

    [string]$Pattern,

    [Parameter(ValueFromPipeline = $true,

        Mandatory = $true)]

    [PSObject]$InputObject

)

 

begin {

    $Where = @{

        FilterScript = [scriptblock]::Create("`$_.$Property -$Operator '$Pattern'")

       

    }

}

 

process {

    $InputObject | where @where

}

}

ls | Where-ObjectSimple Name like '*.ps1'

I prefer this solution over other options because it allows me to select which variables I would like to expand during creation of a script block and which should be used when a script block is used.
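To make that concrete, for the usage example above (Property = Name, Operator = like, Pattern = *.ps1), the Create call expands the parameter values immediately while the backtick keeps $_ for later, so it produces the equivalent of this (my illustration):

[scriptblock]::Create("`$_.Name -like '*.ps1'")   # yields the filter script block { $_.Name -like '*.ps1' }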

There are also other tiny tricks I like:

  • Using #<id><tab> and #<pattern from command><tab> to use history as a base for new commands.
  • Adding # at the start of an almost-complete command to keep it in history without actually running it (for example, to make sure we got all elements right).
  • Using Regex named/positional captures together with -match and -replace (a quick sketch follows).
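Here is the quick sketch of that last item, using a made-up server name:

# -match fills the automatic $Matches hash table with the named captures
'Server01-Prod' -match '^(?<Name>\w+)-(?<Role>\w+)$'   # True
$Matches.Name                                          # Server01
$Matches.Role                                          # Prod

# -replace can reference the same named groups in the replacement string
'Server01-Prod' -replace '^(?<Name>\w+)-(?<Role>\w+)$', '${Role}/${Name}'   # Prod/Server01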


Conclusion

Working with Windows PowerShell is fun. It helps you get your job done without a huge amount of effort. With Windows PowerShell, you can be both entertained and surgically effective.

 

Wow! When I requested favorite Windows PowerShell tricks from the forum moderators, I was not expecting such coolness. Bartek, you rock!

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Use PowerShell to Document Your Network Configuration


Summary: Learn how to use Windows PowerShell and Active Directory cmdlets to document your Active Directory configuration.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! I recently inherited an Active Directory. By this, I mean the network administrator quit. He did not give any notice, and it appears he took any documentation he may have created with him. He may have been abducted by Martians (there seems to be quite a bit of this going on at work) for all I know. Anyway, I need a good way to easily discover information about the domain and the forest. If I could easily print it out, it would be even better. I know how to use Active Directory Users and Computers, and I have been making screen shots, but there should be a better way of doing things. Help!

—BV

 

Hey, Scripting Guy! AnswerHello BV,

Microsoft Scripting Guy Ed Wilson here. I am sorry Martians abducted your network administrator. You did not specify which version of Windows you are running, and you did not say which version of domain controllers you have. I am guessing that perhaps you do not know. To find information about your operating system, you can use the following command in Windows PowerShell:

Get-WmiObject win32_operatingsystem

Using the Active Directory Windows PowerShell cmdlets and remoting, I can easily discover information about the forest and the domain. The first thing I need to do is to enter a PSSession on the remote computer. To do this, I use the Enter-PSSession cmdlet. Next, I import the active directory module, and set my working location to the root of the C drive. These commands are shown here:

PS C:\Users\Administrator.NWTRADERS> Enter-PSSession dc1

[dc1]: PS C:\Users\Administrator\Documents> Import-Module activedirectory

[dc1]: PS C:\Users\Administrator\Documents> Set-Location c:\

After I have connected to the remote domain controller, I can use the Get-WmiObject cmdlet to verify my operating system on that computer. This command and associated output are shown here:

[dc1]: PS C:\> Get-WmiObject win32_operatingsystem

SystemDirectory : C:\Windows\system32

Organization    :

BuildNumber     : 7601

RegisteredUser  : Windows User

SerialNumber    : 55041-507-0212466-84005

Version         : 6.1.7601

Now, I want to get information about the forest. To do this, I use the Get-ADForest cmdlet. The output from Get-ADForest includes lots of great information such as the domain naming master, forest mode, schema master, and domain controllers. This command and associated output are shown here:

[dc1]: PS C:\> Get-ADForest 

 

ApplicationPartitions : {DC=DomainDnsZones,DC=nwtraders,DC=com, DC=ForestDnsZones,DC=nwtraders,DC=com}

CrossForestReferences : {}

DomainNamingMaster    : DC1.nwtraders.com

Domains               : {nwtraders.com}

ForestMode            : Windows2008Forest

GlobalCatalogs        : {DC1.nwtraders.com}

Name                  : nwtraders.com

PartitionsContainer   : CN=Partitions,CN=Configuration,DC=nwtraders,DC=com

RootDomain            : nwtraders.com

SchemaMaster          : DC1.nwtraders.com

Sites                 : {Default-First-Site-Name}

SPNSuffixes           : {}

UPNSuffixes           : {}

The above commands and output are shown in the following figure.

Image of commands and output

Now I am interested in obtaining information about the domain. To do this, I use the Get-ADDomain cmdlet. The command returns important information such as the location of the default domain controller organizational unit, the PDC emulator, and the RID master. The command and associated output are shown here:

[dc1]: PS C:\> Get-ADDomain

 

AllowedDNSSuffixes                 : {}

ChildDomains                       : {}

ComputersContainer                 : CN=Computers,DC=nwtraders,DC=com

DeletedObjectsContainer            : CN=Deleted Objects,DC=nwtraders,DC=com

DistinguishedName                  : DC=nwtraders,DC=com

DNSRoot                            : nwtraders.com

DomainControllersContainer         : OU=Domain Controllers,DC=nwtraders,DC=com

DomainMode                         : Windows2008Domain

DomainSID                          : S-1-5-21-909705514-2746778377-2082649206

ForeignSecurityPrincipalsContainer : CN=ForeignSecurityPrincipals,DC=nwtraders,DC=com

Forest                             : nwtraders.com

InfrastructureMaster               : DC1.nwtraders.com

LastLogonReplicationInterval       :

LinkedGroupPolicyObjects           : {CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=nwtraders,DC=com}

LostAndFoundContainer              : CN=LostAndFound,DC=nwtraders,DC=com

ManagedBy                          :

Name                               : nwtraders

NetBIOSName                        : NWTRADERS

ObjectClass                        : domainDNS

ObjectGUID                         : 0026d1fc-2e4d-4c35-96ce-b900e9d67e7c

ParentDomain                       :

PDCEmulator                        : DC1.nwtraders.com

QuotasContainer                    : CN=NTDS Quotas,DC=nwtraders,DC=com

ReadOnlyReplicaDirectoryServers    : {}

ReplicaDirectoryServers            : {DC1.nwtraders.com}

RIDMaster                          : DC1.nwtraders.com

SubordinateReferences              : {DC=ForestDnsZones,DC=nwtraders,DC=com, DC=DomainDnsZones,DC=nwtraders,DC=com, CN=Configuration,DC=nwtraders,DC=com}

SystemsContainer                   : CN=System,DC=nwtraders,DC=com

UsersContainer                     : CN=Users,DC=nwtraders,DC=com

From a security perspective, you should always check the domain password policy. To do this, use Get-ADDefaultDomainPasswordPolicy. Things you want to especially pay attention to are the use of complex passwords, minimum password length, password age, and password history. Of course, you need to check the lockout policy, too. This one is important to review closely when inheriting a new network. Here are the command and associated output:

[dc1]: PS C:\> Get-ADDefaultDomainPasswordPolicy

 

ComplexityEnabled           : True

DistinguishedName           : DC=nwtraders,DC=com

LockoutDuration             : 00:30:00

LockoutObservationWindow    : 00:30:00

LockoutThreshold            : 0

MaxPasswordAge              : 42.00:00:00

MinPasswordAge              : 1.00:00:00

MinPasswordLength           : 7

objectClass                 : {domainDNS}

objectGuid                  : 0026d1fc-2e4d-4c35-96ce-b900e9d67e7c

PasswordHistoryCount        : 24

ReversibleEncryptionEnabled : False
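As a small aside of my own (not part of BV's question): if you want to pull out just the settings called out above rather than the full listing, you could select them from the same cmdlet.

Get-ADDefaultDomainPasswordPolicy |
    Select-Object ComplexityEnabled, MinPasswordLength, MaxPasswordAge, PasswordHistoryCount, LockoutThreshold, LockoutDuration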

The last major thing to check is the domain controllers themselves. To do this, use the Get-ADDomainController cmdlet. This command returns important information such as whether the domain controller is read-only or a global catalog server, the operations master roles it holds, and operating system information. Here are the command and associated output:

 [dc1]: PS C:\> Get-ADDomainController -Identity dc1 

 

ComputerObjectDN           : CN=DC1,OU=Domain Controllers,DC=nwtraders,DC=com

DefaultPartition           : DC=nwtraders,DC=com

Domain                     : nwtraders.com

Enabled                    : True

Forest                     : nwtraders.com

HostName                   : DC1.nwtraders.com

InvocationId               : b51f625f-3f60-44e7-8577-8918f7396c2a

IPv4Address                : 10.0.0.1

IPv6Address                :

IsGlobalCatalog            : True

IsReadOnly                 : False

LdapPort                   : 389

Name                       : DC1

NTDSSettingsObjectDN       : CN=NTDS Settings,CN=DC1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=nwtraders,DC=com

OperatingSystem            : Windows Server 2008 R2 Enterprise

OperatingSystemHotfix      :

OperatingSystemServicePack : Service Pack 1

OperatingSystemVersion     : 6.1 (7601)

OperationMasterRoles       : {SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster...}

Partitions                 : {DC=ForestDnsZones,DC=nwtraders,DC=com, DC=DomainDnsZones,DC=nwtraders,DC=com, CN=Schema,CN=Configuration,DC=nwtraders,DC=com, CN=Configuration,DC=nwtraders,DC=com...}

ServerObjectDN             : CN=DC1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=nwtraders,DC=com

ServerObjectGuid           : 5ae1fd0e-bc2f-42a7-af62-24377114e03d

Site                       : Default-First-Site-Name

SslPort                    : 636

BV, you asked for a report. Now that we know what type of information to expect and how to obtain it, the report is as easy as redirecting the output to a text file. The associated commands are shown here.

Get-ADForest >> \\dc1\shared\AD_Doc.txt

Get-ADDomain >> \\dc1\shared\AD_Doc.txt

Get-ADDefaultDomainPasswordPolicy >> \\dc1\shared\AD_Doc.txt

Get-ADDomainController -Identity dc1 >>\\dc1\shared\AD_Doc.txt

The file as viewed in Notepad is shown here.

Image of file viewed in Notepad

 

Well, that is all there is to quickly documenting a new domain and forest. Join me tomorrow for the quick way to create and manipulate user objects in Active Directory.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Use PowerShell Active Directory Cmdlets Without Installing Any Software


Summary: Learn how to use Windows PowerShell remoting to manage user objects without installing software on the client.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! I was reading your article about using the Microsoft Active Directory Windows PowerShell cmdlets, and it looks really cool. The problem is that I do not want to install the Windows Remote Server Administration tools just to be able to use the Microsoft cmdlets. I guess what I am asking is this: “Is there a way to use the Microsoft Active Directory Windows PowerShell cmdlets without having to install anything extra?” If it helps you any, I am using Windows 7 Professional (64-bit) and our domain controllers are all running Windows 2008 R2. We even promoted our Active Directory so that it is in Windows 2008 R2 mode.

—KL

 

Hey, Scripting Guy! AnswerHello KL,

Microsoft Scripting Guy Ed Wilson here. Last week’s Windows PowerShell workshop in Seattle was a lot of fun. The students were really engaged and asked some great questions. My friend from Philadelphia, Pennsylvania, is out there this week doing an Exchange workshop. Anyway, during the class, I decided it would be a good idea to use Windows PowerShell remoting to perform Active Directory administration. In this way, I avoided the need to install the Remote Server Administration Tools (RSAT) on the client machine.

The first thing to do is to enter a remote Windows PowerShell session. To do this I use the Enter-PSSession cmdlet. I specify the computer name and the credentials for the remote session. The credential is an account that has administrator rights on the remote machine. This command is shown here:

Enter-PSSession -ComputerName dc1 –credential nwtraders\administrator

If the account I am using on my client computer also has administrator rights on the remote machine, I can leave off the credential parameter. After I have entered the session, I generally set my working location to the root of the drive so that I have more space for my commands. I then import the ActiveDirectory module. These commands are shown here:

Set-Location c:\

Import-Module activedirectory

The commands and the associated output are shown in the following image. Note how I use the aliases for the commands because it makes it easier to type.

Image of commands and associated output

Now I will create a new user in Active Directory. I think I will name the user ed. The command to create a new user is simple; it is New-ADUser and the user name. The command to create a disabled user account in the users container in the default domain is shown here:

new-aduser -name ed

When the preceding command that creates a new user has completed, nothing is returned to the Windows PowerShell console. To check to ensure the user is created, use the Get-ADUser cmdlet to retrieve the user object:

Get-aduser ed

When I am certain my new user is created, I decide to create an organizational unit (OU) to store the user account. The command to create a new OU off the root of the domain is shown here:

new-ADOrganizationalUnit scripting

Just as with the previously used New-ADUser cmdlet, nothing is returned to the Windows PowerShell console. If I use the Get-ADOrganizationalUnit cmdlet, I must use a different methodology. A simple Get-ADOrganizationalUnit command returns an error; therefore, I use an LDAPFilter parameter to find the OU. The command using the LDAPFilter parameter to find my newly created OU is shown here:

Get-ADOrganizationalUnit –LDAPFilter "(name=scripting)"

The commands and associated output to create the user, get the user, create the OU, and get the OU are shown in the following figure.

Image of commands and associated output

Now that I have a new user and a new OU, I need to move the user from the users container to the newly created scripting OU. To do that, I use the Move-ADObject cmdlet. I first get the distinguishedname attribute for the scripting OU, and store it in a variable called $oupath. Next, I use the Move-ADObject cmdlet to move the ed user to the new OU. The trick here is that, whereas the Get-ADUser cmdlet is able to find a user with the name of ed, the Move-ADObject must have the distinguishedname of the ed user object in order to move it. The error that occurs when not supplying the distinguishedname appears in the following figure. I could have used the Get-ADUser cmdlet to retrieve the distinguishedname in a similar method as I did with the scripting OU, but I wanted to illustrate what the distinguishedname would look like.

Image of error shown when not supplying distinguishedname
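Because the working commands appear only in the figures, here is a sketch of the two lines the preceding paragraph describes ($oupath is the variable name from the text; the exact syntax is my reconstruction, not lifted from the post):

# Get the distinguished name of the scripting OU, then move the ed user into it
$oupath = (Get-ADOrganizationalUnit -LDAPFilter "(name=scripting)").DistinguishedName
Move-ADObject -Identity (Get-ADUser -Identity ed).DistinguishedName -TargetPath $oupath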

The next thing I need to do is enable the user account. To do this, I need to first assign a password to the user account. The password must be a secure string. To do this, I can use the ConvertTo-SecureString cmdlet. By default, warnings are displayed about converting text to a secure string, but these prompts are suppressible by using the force parameter. Here is the command I use to create a secure string for a password:

$pwd = ConvertTo-SecureString -String "P@ssword1" -AsPlainText –Force

Now that I have created a secure string to use for a password for my user account, I call the Set-ADAccountPassword cmdlet to set the password. Because this is a new password, I need to use the newpassword parameter. In addition, because I do not have a previous password, I use the reset parameter. This command is shown here:

Set-ADAccountPassword -Identity ed -NewPassword $pwd –Reset

When the account has a password, I can enable the account. To do this, I use the Enable-ADAccount cmdlet and specify the user name to enable. This command is shown here:

Enable-ADAccount -Identity ed

As with the previous commands, none of the cmdlets returns any information. To ensure I have actually enabled the ed user account, I use the Get-ADUser cmdlet. In the output, I am looking for the value of the enabled property, which is a Boolean, so I am expecting the value to be true. The commands to create the secure string for a password, set the password, enable the account, and get the account are shown in the following figure along with associated output.

Image of commands and associated output

 

Well, KL, that is all there is to connecting to a domain controller, creating a user and OU, moving a user, and enabling the account. Join me tomorrow when I will continue talking about remote Active Directory management techniques.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Search Active Directory for User and Office Locations


Summary: Search Active Directory for user and office locations by using Windows PowerShell and no scripting.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! I am looking for an easy way to search Active Directory. For example, I need to find information out about users. We are moving offices in our city, and I need to find users who are in specific offices in specific cities. As soon as I know this information, I will be able to update the users to reflect the new office locations. I know this is probably too much to be asking, but hey, I am busy.

—AC

 

Hey, Scripting Guy! AnswerHello AC,

Microsoft Scripting Guy Ed Wilson here. I am completely at home with busy. Luckily, the Scripting Wife is very understanding, and shares my interest in Windows PowerShell and community. Windows PowerShell is a very powerful tool, but it can also be extremely fun to use.

When I am working from my local computer, I like to use Windows PowerShell remoting to make a connection to the domain controller. In this way, I do not have to install the Remote Server Administrator Tools (RSAT) on my Windows 7 desktop. The first thing I do is enter a Windows PowerShell session on my remote computer by using the Enter-PSSession cmdlet. Next, I set my location to the C: drive so that I free up some command-line space. The final thing to do is to import the ActiveDirectory module. This is done using the Import-Module cmdlet, and using a wildcard character to shorten the module name. The commands I type to do all this are shown here:

PS C:\Users\Administrator.NWTRADERS> Enter-PSSession dc1

[dc1]: PS C:\Users\Administrator\Documents> sl c:\

[dc1]: PS C:\> Import-Module act*

After I have connected to my domain controller and loaded the ActiveDirectory module, I decide I want to find users who have an office in Charlotte. To do this, I use the Get-ADUser cmdlet, and I specify the LDAPFilter parameter and provide an LDAP dialect query to the cmdlet. The advantage of the LDAP dialect is that it was commonly used with VBScript and Windows PowerShell 1.0 scripts to search Active Directory, so there are thousands of scripts on the Internet providing samples of the query syntax. After I type my query, however, nothing is returned. The command is shown here:

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(office=charlotte)"

The command and associated output are shown in the following figure.

Image of command and associated output

I use ADSI Edit to check the attribute name. The actual name of the attribute is PhysicalDeliveryOfficeName, not office. The attribute page from ADSI Edit is shown in the following figure.

Image of attribute page from ADSI Edit

When I have the actual user attribute name, the LDAPFilter is easy to use. I use the syntax attribute = value. The key is that there must be no space between the attribute name and the equal sign. It is okay to have a space between the equal sign and the value, but I do not do this because it looks funny, and it is a bad habit to add spaces when not needed in LDAPFilters. The command and associated output are shown here:

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(PhysicalDeliveryOfficeName=charlotte)"

 

DistinguishedName : CN=Test1User1,OU=test1,DC=nwtraders,DC=com

Enabled           : False

GivenName         : test1

Name              : Test1User1

ObjectClass       : user

ObjectGUID        : 21c7a4d1-74a7-4f4e-a132-8f2b3e2f91ca

SamAccountName    : Test1User1

SID               : S-1-5-21-909705514-2746778377-2082649206-3717

Surname           : user1

UserPrincipalName :

 

DistinguishedName : CN=Test1User2,OU=test1,DC=nwtraders,DC=com

Enabled           : False

GivenName         :

Name              : Test1User2

ObjectClass       : user

ObjectGUID        : a4818269-9630-42f3-83a8-4052b7630b01

SamAccountName    : Test1User2

SID               : S-1-5-21-909705514-2746778377-2082649206-3718

Surname           :

UserPrincipalName :

 

Suppose I want to find users in the city of Atlanta. The ADSI attribute for city is the letter L. The following command shows how to find users in the city of Atlanta:

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(L=Atlanta)"

 

DistinguishedName : CN=Test1User7,OU=test1,DC=nwtraders,DC=com

Enabled           : False

GivenName         : test1

Name              : Test1User7

ObjectClass       : user

ObjectGUID        : ef4ef5b1-21c9-46ec-82fa-5754e11db1e5

SamAccountName    : Test1User7

SID               : S-1-5-21-909705514-2746778377-2082649206-3723

Surname           : user7

UserPrincipalName :

 

DistinguishedName : CN=Test1User8,OU=test1,DC=nwtraders,DC=com

Enabled           : False

GivenName         : Test1

Name              : Test1User8

ObjectClass       : user

ObjectGUID        : f45123e7-9cad-49cc-830f-138c6d5b3c02

SamAccountName    : Test1User8

SID               : S-1-5-21-909705514-2746778377-2082649206-3724

Surname           : User8

UserPrincipalName :

 

DistinguishedName : CN=Test1User9,OU=test1,DC=nwtraders,DC=com

Enabled           : False

GivenName         : Test1

Name              : Test1User9

ObjectClass       : user

ObjectGUID        : b688c597-3b1b-4a73-9e67-36b671db9774

SamAccountName    : Test1User9

SID               : S-1-5-21-909705514-2746778377-2082649206-3725

Surname           : User9

UserPrincipalName :

 

I do not want all the output from my command to find users in Atlanta. I only want to see user names, the city, and their physical office locations. To do this, I decide to pipe the command output to the Select-Object cmdlet and choose the desired attributes. As shown in the following output, the city is not shown:

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(L=Atlanta)" -Properties PhysicalDeliveryOfficeName | select name, PhysicalDeliveryOfficeName, L

 

Name                           PhysicalDeliveryOfficeName                    L

Test1User7                    alpharetta

Test1User8                    Duluth

Test1User9                    Alpharetta

The problem is I did not add the L attribute to the list of properties I wanted to return from Active Directory. I must add the attribute to the properties I choose to return, even if I use the attribute in my LDAPFilter.

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(L=Atlanta)" -Properties PhysicalDeliveryOfficeName, l | select name, PhysicalDeliveryOfficeName, l

 

Name                                       PhysicalDeliveryOfficeName                    l

Test1User7                                alpharetta                                             Atlanta

Test1User8                                Duluth                                                   Atlanta

Test1User9                                Alpharetta                                             Atlanta

I can change my query to return only users who have the city of Atlanta and their office in Duluth. To do this, I need to create a compound LDAP filter. As shown earlier, each attribute equals value pairing is placed inside a pair of parentheses. When I add another attribute equals value pair, I must also put that inside a pair of parentheses. To state I want to look for one attribute value pair and another attribute value pair, I use an ampersand before the first pair, and surround the entire set with another pair of parentheses. The following query illustrates this technique:

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(&(L=Atlanta)(PhysicalDeliveryOfficeName=Duluth))"

DistinguishedName : CN=Test1User8,OU=test1,DC=nwtraders,DC=com

Enabled           : False

GivenName         : Test1

Name              : Test1User8

ObjectClass       : user

ObjectGUID        : f45123e7-9cad-49cc-830f-138c6d5b3c02

SamAccountName    : Test1User8

SID               : S-1-5-21-909705514-2746778377-2082649206-3724

Surname           : User8

UserPrincipalName :

 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 

Search AD for Missing Email Addresses Using PowerShell


Summary: Learn how to use Windows PowerShell and the Active Directory cmdlets to find and replace missing email addresses.

 

Hey, Scripting Guy! QuestionHey, Scripting Guy! I am hoping you can help me. We recently decommissioned a domain and moved all the users from that domain into an organizational unit (OU) under our corporate domain. This has simplified maintenance, and is generally a good thing. The problem is I now need to create mail addresses for all of my users. I do not want to do this manually. Now here is the thing. We have already begun creating email addresses for some of the users in this new domain. I am hoping I can create a Windows PowerShell script that will search the OU for users that do not have an email address, and then I would like to take that list of users and create new email addresses for each of the users.

I would like to keep their user account name and append it with our top-level domain name. After the list of new email addresses has been created, I would like to apply the email addresses to the user accounts in the OU. The reason I cannot just blast through the OU and add/replace email addresses is because some of the email addresses in the OU do not conform to the username@mydomain.com format. My boss has given me a week to come up with the script; after that time, he is going to hire a consulting company to do this. It would be really great if I could get this done before that time. Can you help me?

—AN

 

Hey, Scripting Guy! AnswerHello AN,

Microsoft Scripting Guy Ed Wilson here. About three years ago, I wrote a pretty cool Windows PowerShell script that searched through Active Directory and found missing attributes. The script was not super easy, and took a little time to write (not weeks, however). That was before we had the Microsoft Active Directory cmdlets. Now, I can run a single command that locates users with missing mail attributes. In fact, I can also pipe the results of that query, and set the missing attribute in the same command. Pretty cool! You should be able to run the command, go lie on the beach for a week, and return tanned, relaxed, and a hero to your boss because you avoided hiring a consultant.

I do not have the Remote Server Administration Tools (RSAT) installed on my laptop, but with Windows PowerShell, that is not a problem. I use Windows PowerShell remoting to connect to a remote domain controller. I chose one that I knew was near me, but I could have allowed Windows PowerShell to connect to any domain controller that was not busy. I connect to the domain controller by using the Enter-PSSession command and specifying the name of the domain controller. This command is shown here:

PS C:\Users\Administrator.NWTRADERS> Enter-PSSession dc1

The next thing I do is import the ActiveDirectory module and change my working directory (to give myself a bit of extra room on the command line). These commands are shown here:

[dc1]: PS C:\Users\Administrator\Documents> Import-Module act*

[dc1]: PS C:\Users\Administrator\Documents> sl c:\

The next thing I do is create a query that returns all of the users that do not have an email address. I pipe the results to the Measure-Object cmdlet, which counts how many users do not have an email address. There are a couple of things to notice in the query. The first is that I set the resultsetsize parameter to $null. This causes the command to return all the objects. If I wanted to return only one object, the command would be resultsetsize 1. The second thing is that the exclamation point (!) is used for the not operator. Therefore, the LDAPFilter means show me all users that do not have the mail attribute set to anything (the asterisk is the wildcard character for anything):

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(!(mail=*))" -resultSetSize $null | Measure-Object

 

Count    : 2536

Average  :

Sum      :

Maximum  :

Minimum  :

Property :

I now decide to limit my query to only the organizational unit (OU) that contains the users with the missing email addresses. To do this, I use the searchbase parameter. The command is shown here:

[dc1]: PS C:\> Get-ADUser -LDAPFilter "(!(mail=*))" -resultSetSize $null -searchbase "ou=test,dc=nwtraders,dc=com"

After I see that the query returns the appropriate user objects, I send the results to the ForEach-Object cmdlet (the alias is %). Inside the ForEach-Object cmdlet, I call the Set-ADUser cmdlet to modify each Active Directory account that the query returns with a newly created email address. The Set-ADUser cmdlet needs to know which user to connect to, so I pass the distinguishedname attribute to the identity parameter. The Set-ADUser cmdlet contains an email parameter, and nothing special is required to set an email value (note that the email address in Active Directory is called mail, but the cmdlet uses email to help avoid confusion). I create the email address by taking the samaccountname attribute and concatenating it with “@nwtraders.com”. The command is one logical line (I did not use any line continuation characters so as to avoid extra clutter, and I removed the PS> prompt so that only the command remains; obviously, you would need to modify the OU and email suffix for your environment):

Get-ADUser -LDAPFilter "(!(mail=*))" -resultSetSize $null -searchbase "ou=test,dc=nwtraders,dc=com"| % {set-aduser -identity $_.distinguishedname -email ($_.samaccountname + "@nwtraders.com")}

As shown in the following figure, the command created a new email address for the user; the command worked like a champ.

Image showing command created new email address for user

Well, AN, that is it. Thanks for an interesting question. I invite you to join me tomorrow for more Windows PowerShell goodness.

 

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy

 

 
