Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Archive for the ‘Powershell’ Category

List of new features in PowerShell V3

Posted by Alin D on April 11, 2012

Windows PowerShell v3 is right around the corner. It’s likely to ship with Windows 8, and it’s likely that releases for “downlevel” versions of Windows (Windows Vista, Windows 7, Windows Server 2008, and Windows Server 2008 R2 — but not Windows Server 2003 or Windows XP) will be released within a month or two, if not on the same day.

PowerShell’s becoming pretty hard to ignore. Windows Server 8 is building the vast majority of its administration on PowerShell, providing the option for GUI administration as well as command-line automation. Version 3 of the shell introduces some pretty important new features; here are the top five.

Better Remoting

PowerShell’s Remoting is becoming more important, as it gradually transitions into being the primary channel for administrative communications on the network. More and more GUI admin consoles will rely on Remoting, so it was important to Microsoft to make it more robust. Remoting sessions can now be disconnected, enabling you to later re-connect to the same session from the same or from a different computer. Currently, the Community Technology Preview (CTP) of v3 doesn’t disconnect a session if your client computer crashes; instead, that session is permanently closed — so it’s not quite like Remote Desktop, which could be configured to hold the session open if that happened.
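
Disconnected sessions are driven by a pair of new cmdlets. A minimal sketch follows; the server name SRV01 and the session name are placeholders, and cmdlet behaviour may still change between the CTP and release:

```powershell
# Start a named, persistent session on a remote server
$s = New-PSSession -ComputerName SRV01 -Name Maintenance
Invoke-Command -Session $s -ScriptBlock { Get-EventLog -LogName System -Newest 50 }

# Drop the connection but leave the session (and its state) alive on SRV01
Disconnect-PSSession -Session $s

# Later - even from a different client machine - pick the same session back up
$s = Connect-PSSession -ComputerName SRV01 -Name Maintenance
```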

Workflows

This is a big, big, big deal of a feature. Essentially, PowerShell’s new workflow construct enables you to write something very similar to a function, with PowerShell translating your commands and script code into Windows Workflow Foundation (WWF) processes. WWF then manages your entire task — which can include surviving things like network outages, computer reboots and more. It’s a way of orchestrating long-running, complex, multi-step tasks more efficiently and reliably. And don’t be surprised if the feature gets deeply integrated into the next release of System Center Orchestrator, as well as becoming a foundation feature that’s leveraged by any number of other server products.
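
As a rough illustration of the construct (syntax as of the v3 CTP; the workflow name, server names and the task itself are invented for the example), a workflow reads like a function but runs as a WWF job:

```powershell
workflow Update-LabServers {
    param([string[]]$ComputerName)
    foreach -parallel ($computer in $ComputerName) {
        Checkpoint-Workflow                   # persist progress so the job can resume after interruption
        Get-Process -PSComputerName $computer # each command is translated into a WWF activity
    }
}
# Run it as a background job that WWF can suspend and resume:
# Update-LabServers -ComputerName 'SRV01','SRV02' -AsJob
```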

Updatable Help

PowerShell has always struggled with errors in the help files. Not that there were a ton of them, but it was tough for Microsoft to fix them, since doing so would basically entail issuing an OS patch. Nobody wants patches for help files! The presence of online help, hosted on the TechNet website, mitigated the problem — but only a little. In v3, help files can be updated on-demand, with new XML files downloaded from Microsoft servers whenever you like. So Microsoft can fix errors as they find them, and get you the fixes without requiring an OS service pack or patch.
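
In practice, refreshing help is a single command, and a companion cmdlet stages the files for machines without internet access (the share path below is just an example):

```powershell
Update-Help -Force                              # download current help XML for installed modules
Save-Help -DestinationPath \\fileserver\PSHelp  # stage the help files on an internal share
Update-Help -SourcePath \\fileserver\PSHelp     # update offline machines from that share
```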

Scheduled Jobs

PowerShell v2 introduced jobs, with the idea that the concept of “job” could be extended over time. In v3, a new type of job — a scheduled job — can be created to run on a schedule, or in response to an event. This isn’t all that different from Windows’ Task Scheduler, but you’ll finally have a way of accessing the capability from within PowerShell. Unlike the v2 jobs, which still exist, the scheduled jobs exist outside PowerShell, meaning they’ll continue to work even if PowerShell isn’t running.
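
A sketch of what registering one looks like; the job name, script block and 2:00 AM trigger are arbitrary examples:

```powershell
# Create a job that runs every night and persists its results to disk
$trigger = New-JobTrigger -Daily -At '2:00 AM'
Register-ScheduledJob -Name NightlyEventSweep -Trigger $trigger `
    -ScriptBlock { Get-EventLog -LogName System -EntryType Error -Newest 100 }

# Results can be collected from within PowerShell after any run:
# Get-Job -Name NightlyEventSweep | Receive-Job
```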

Better Discovery

One tough part about any command-line shell is figuring out how to use it. PowerShell’s help system tackles that problem quite well, provided you know the name of the command you want, and provided you know which add-in the command lives in, and that you remembered to load the add-in into memory. Yeah, that’s a lot of caveats. PowerShell v3 addresses the problem by automatically including all commands from all installed modules when it searches for commands; try to run a command that isn’t loaded, and the shell implicitly loads it in the background for you. Ahhh… much easier. This only works with modules that are stored in one of the folder paths listed in the PSModulePath environment variable, but that variable can be modified anytime you like to include additional paths.
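
For example, to make a custom module folder discoverable (the path below is an assumption for illustration), just extend the variable:

```powershell
# Append an extra module folder for the current session
$env:PSModulePath += ';C:\AdminScripts\Modules'

# Commands in modules under that folder are now auto-discoverable
($env:PSModulePath -split ';') -contains 'C:\AdminScripts\Modules'   # True
```

To make the change permanent you would set the variable at machine or user scope rather than per session.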

CIM Cmdlets

PowerShell has always been great for working with Windows Management Instrumentation (WMI), a Microsoft technology that builds on the more-or-less industry-standard Common Information Model (CIM). In PowerShell v3, the WMI cmdlets continue to do a great job, but they’re joined by a new set of CIM cmdlets. At first it might seem like pointless overlap in functionality, but there’s a reason: The CIM cmdlets use WS-MAN, the protocol behind PowerShell’s Remoting feature, and Microsoft’s new standard for administrative communications. WMI, which uses harder-to-pass-through-firewalls DCOM, is officially deprecated, meaning it won’t be developed further — although it’ll continue to be available. CIM will be the “way forward,” with not only additional development to what we’ve always known as “WMI,” but with the possibility for cross-platform management in the future.
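
Side by side, the old and new styles look like this (SRV01 is a placeholder server name):

```powershell
# v1/v2 style - DCOM under the hood:
Get-WmiObject -Class Win32_OperatingSystem -ComputerName SRV01

# v3 CIM style - WS-MAN under the hood, with an explicit, reusable session:
$cim = New-CimSession -ComputerName SRV01
Get-CimInstance -CimSession $cim -ClassName Win32_OperatingSystem
Remove-CimSession $cim
```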

Overall, PowerShell v3 is shaping up to be pretty incredible. Get your hands on CTP No. 2 here and start trying it out today.

Posted in Powershell

How to manage Hyper-V with PowerShell

Posted by Alin D on June 10, 2011

Many admins use PowerShell to automate components like user creation and folder permissions, but virtualization technologies can also be managed from the command line, including Microsoft Hyper-V.

While there are several ways to manage Hyper-V with PowerShell, this article will focus on the free approaches using Windows Management Instrumentation (WMI) scripting and an open source tool from CodePlex.

Before using WMI scripting to manage Hyper-V, it’s important to understand which classes are available. Microsoft’s list includes a significant number of classes and, while it is fairly complete, the classes are not necessarily easy to use and are certainly not intuitive. Therefore, using WMI to manage Hyper-V is not for the faint of heart.
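
To see what you’re up against, you can enumerate the Hyper-V classes straight from the root\virtualization namespace on the host (this must run locally on, or remotely against, a Hyper-V server):

```powershell
# List every Hyper-V WMI class exposed by the host
Get-WmiObject -Namespace root\virtualization -List |
    Where-Object { $_.Name -like 'Msvm_*' } | Sort-Object Name

# Raw class access, e.g. enumerating VMs without any helper library:
Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem |
    Select-Object ElementName, EnabledState
```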

One of the more popular methods for managing Hyper-V with PowerShell is the PowerShell Management Library for Hyper-V (PSHyperV), a free, open source CodePlex project written by James O’Neil. This is by far the best free option out there and gives administrators a very thorough collection of cmdlets that do everything from virtual machine inventory to virtual network management. Let’s touch on a few of them:

Get-VM — returns all the virtual machines on a given Hyper-V server

The following code demonstrates the Get-VM command:
Function Get-VM
{# .ExternalHelp MAML-VM.XML
    param(
        [parameter(ValueFromPipeLine = $true)]
        $Name = "%",

        $Server = ".",    # May need to look for VM(s) on multiple servers
        [switch]$Running,
        [switch]$Stopped,
        [switch]$Suspended
    )
    process {
        # In case people are used to the * as a wildcard...
        if ($Name.count -gt 1) {
            [void]$PSBoundParameters.Remove("Name")
            $Name | ForEach-Object {Get-VM -Name $_ @PSBoundParameters}
        }
        if ($Name -is [string]) {
            $Name = $Name.Replace("*", "%")
            # Note: in V1 the test was for caption like "Virtual%", which
            # did not work in languages other than English.
            # Thanks to Ronald Beekelaar - we now test for a process ID;
            # the host has a null process ID, stopped VMs have an ID of 0.
            $WQL = "SELECT * FROM Msvm_ComputerSystem WHERE ElementName LIKE '$Name' AND ProcessID >= 0"
            if ($Running -or $Stopped -or $Suspended) {
                $State = ""
                if ($Running)   {$State += " or enabledState = " + [int][VMState]::Running}
                if ($Stopped)   {$State += " or enabledState = " + [int][VMState]::Stopped}
                if ($Suspended) {$State += " or enabledState = " + [int][VMState]::Suspended}
                $State = $State.Substring(4)
                $WQL  += " AND ($State)"
            }
            Get-WmiObject -ComputerName $Server -Namespace $HyperVNamespace -Query $WQL |
                Add-Member -MemberType AliasProperty -Name "VMElementName" -Value "ElementName" -PassThru
        }
        elseif ($Name.__CLASS) {
            switch ($Name.__CLASS) {
                "Msvm_ComputerSystem"           {$Name}
                "Msvm_VirtualSystemSettingData" {Get-WmiObject -ComputerName $Name.__SERVER -Namespace $HyperVNamespace `
                                                     -Query "associators of {$($Name.__PATH)} where resultclass=Msvm_ComputerSystem"}
                Default                         {Get-WmiObject -ComputerName $Name.__SERVER -Namespace $HyperVNamespace `
                                                     -Query "associators of {$($Name.__PATH)} where resultclass=Msvm_VirtualSystemSettingData" |
                                                     ForEach-Object {$_.GetRelated("Msvm_ComputerSystem")} | Select-Object -Unique}
            }
        }
    }
}

As you can see, the code basically wraps the WMI class with some helper logic and reports the results.

Get-VMSwitch — Returns all the virtual switches on the Hyper-V server

Function Get-VMSwitch
{# .ExternalHelp MAML-VMNetwork.XML
    param(
        [parameter(ValueFromPipeline = $true)][Alias("Name")]
        $VirtualSwitchName = "%",

        $Server = "."    # Can query multiple servers for switches
    )
    process {
        Get-WmiObject -ComputerName $Server -Namespace $HyperVNamespace `
            -Query "Select * From Msvm_VirtualSwitch Where ElementName like '$VirtualSwitchName' "
    }
}

Get-VMSnapShot — Provides all the snapshots on the Hyper-V server

The following command demonstrates the Get-VMSnapShot command:

Function Get-VMSnapshot
{# .ExternalHelp MAML-VMSnapshot.XML
    param(
        [parameter(Position=0, ValueFromPipeline = $true)]
        $VM = "%",

        $Name = "%",

        $Server = ".",
        [switch]$Current,
        [switch]$Newest,
        [switch]$Root
    )
    process {
        if ($VM -is [string]) {$VM = Get-VM -Name $VM -Server $Server}
        if ($VM.count -gt 1) {
            [void]$PSBoundParameters.Remove("VM")
            $VM | ForEach-Object {Get-VMSnapshot -VM $_ @PSBoundParameters}
        }
        if ($VM.__CLASS -eq 'Msvm_ComputerSystem') {
            if ($Current) {
                Get-WmiObject -ComputerName $VM.__SERVER -Namespace $HyperVNamespace `
                    -Query "associators of {$($VM.__PATH)} where assocClass=Msvm_PreviousSettingData"
            }
            else {
                $Snaps = Get-WmiObject -ComputerName $VM.__SERVER -Namespace $HyperVNamespace `
                    -Query "Select * From Msvm_VirtualSystemSettingData Where SystemName='$($VM.Name)' and InstanceID <> 'Microsoft:$($VM.Name)' and ElementName like '$Name' "
                if     ($Newest) {$Snaps | Sort-Object -Property CreationTime | Select-Object -Last 1}
                elseif ($Root)   {$Snaps | Where-Object {$_.Parent -eq $null}}
                else             {$Snaps}
            }
        }
    }
}

PSHyperV includes several additional functions to help admins perform related tasks, including finding, manipulating and configuring different components of the hypervisor; all of these can be found on the CodePlex website.

Writing WMI wrappers and using PSHyperV are just a few of the ways admins can manage Hyper-V using PowerShell. Note that the latest release of PSHyperV isn’t a complete version, and thus, isn’t as stable as other options may be.



Posted in Powershell

New mobile functionality for Windows PowerShell explained

Posted by Alin D on May 31, 2011

Those familiar with Windows PowerShell might also recognize PowerGUI Pro from Quest Software, a graphical front-end for PowerShell that automates common tasks for the command-line system. What you might not know is that there is new functionality that expands on this concept: PowerGUI Pro – MobileShell.

MobileShell runs the PowerGUI Pro command engine on a remote server through a Web browser. Internet Explorer 8 and Mozilla Firefox are both supported out of the box, and the programmers are working on adding support for many other browsers, including Google Chrome and Opera.

MobileShell installs on a Windows Server running Internet Information Services (IIS). It will install by default in a subdirectory named /MobileShell within the default website. All connections to MobileShell are SSL-encrypted by default, so snooping the traffic on the connection is no easier than it would be for any other SSL-protected transaction. Note that you can run MobileShell without HTTPS, but it is not recommended since (among other things) you’ll have to pass credentials in plain sight. Also, if you are disconnected in the middle of a session by a browser crash or network disruption, you can reconnect to the previously spawned session in much the same manner as with a Remote Desktop session.

When you connect to MobileShell, you’ll see a three-pane display: an output window at the top, a command window at the bottom, and a pair of panels labeled Recent Commands and Favorites on the right. When you begin typing in a command in the bottom window, MobileShell will provide an auto-completion prompt for the command—a big timesaver since PowerShell commands can be a bit wordy.

The Recent Commands and Favorites panels are more or less what they sound like. The former maintains a history of the commands submitted through MobileShell. Click an item in the list and you can repopulate the command window with the same text. The Favorites panel is a list of commonly-used commands which you can customize by adjusting the settings. Among other things that can be controlled in the settings window is the output buffer size, which is set to 1,000 lines by default.

Finally, when using PowerGUI Pro – MobileShell it is important to avoid clicking the back button in your browser, as you risk closing the current session and losing your work; a minor tradeoff for another strong innovation.


Posted in Powershell

How to Import PST into Exchange 2010 with Powershell

Posted by Alin D on February 17, 2011

The process of importing multiple users’ PST files into Exchange 2010 is not as simple as you might expect, and certainly not as simple as it probably should be, given how common this particular task is. To spread some knowledge about wrestling with this task, this article is aimed at SysAdmins wanting to migrate their users’ personal PST files into those users’ main Exchange mailboxes. Even better, to make this as easy as possible, I’ll walk through the entire process involved, as well as creating the appropriate PowerShell scripts to semi-automate the process. Finally, to keep everything clear, I’ve split the material into three parts:

  • Importing a single PST File into Exchange 2010 RTM and SP1
  • Finding PST files on your network, and then…
  • …Importing these into Exchange (i.e. not one-by-one!)

While my solution is not necessarily best practice, it’s one of the best solutions I could research, and so it’s likely that many SysAdmins will come up with something similar. What you should bear in mind is that these three components I’ve described are not actually the easiest way of handling this importing process, as they require a non-trivial amount of tweaking, and come with their fair share of pitfalls and gotchas.


Just so we’re all on the same page, this guide focuses on using Windows Management Instrumentation (WMI) to identify files on remote users’ machines, and then import these files into their mailboxes. There are of course several options:

  1. You could ask users to manually drag mail across from within Outlook.
  2. You could have group policy enforce a logon script that copies user PST files to a shared network drive and then removes said files from their system (or prevents Outlook from mounting them).
    A script running on a server can then poll for new PSTs in this folder and automatically add them with the –IsArchive parameter so that the contents of the users’ local archive PSTs are available in their archive mailboxes (almost) immediately. The advantages of this approach are that you don’t need to worry about locked files (as the files can be copied before Outlook has had a chance to start/lock them), or about enabling WMI firewall access on client machines. However, it does still require that the user log off and on…
  3. The third (and, I think, easiest) approach is to use WMI to remotely search for files. This can generate a list of PSTs on all machines, and highlight the machines which couldn’t be searched (and which would require further attention). However, it’s highly likely that Outlook will be running on your users’ machines, making this process trickier. Naturally, WMI can be used to terminate the Outlook process remotely, but this is not ideal, and there are other ways around this problem. The advantage of this approach is that it does not require individual users to log in and out (useful if a user is on holiday, for instance) – merely that the machine is on (which could be managed via Wake On LAN).
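
The copy-and-rename half of the logon-script approach in option 2 might be sketched as below; the function name, the share layout and the “.imported” rename convention are all assumptions for illustration:

```powershell
function Copy-UserPst {
    param($SourceDir, $DropShare)
    # Ensure the per-user drop folder exists
    New-Item -ItemType Directory -Path $DropShare -Force | Out-Null
    Get-ChildItem -Path $SourceDir -Filter *.pst -ErrorAction SilentlyContinue |
        ForEach-Object {
            Copy-Item -Path $_.FullName -Destination $DropShare
            # Rename the original so Outlook no longer mounts it
            Rename-Item -Path $_.FullName -NewName "$($_.Name).imported"
        }
}

# e.g. in a logon script:
# Copy-UserPst -SourceDir "$env:LOCALAPPDATA\Microsoft\Outlook" -DropShare "\\fileserver\PSTDrop\$env:USERNAME"
```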

As stated, this guide focuses on the WMI-based solution and just covers the basics – more advanced scripts could be created to deal with a greater variety of error cases and configurations (e.g. shutting down Outlook, making separate lists of machines to try again, detailing which PSTs had passwords and could not be imported).

How to import a PST file into Exchange

Importing a PST file into Exchange 2010 requires the use of the Exchange Management Shell (EMS), although (somewhat confusingly) this functionality was originally included in the Exchange Management Console in the Beta release of Exchange 2010.

A PowerShell cmdlet in the EMS is used to perform the action, and the use of this cmdlet requires that the current user has Import Export roles enabled on their profile. In order to run the import and export cmdlets, Exchange 2010 RTM also requires that Outlook 2010 x64 is installed on the machine being used to run said cmdlets, although this is no longer a requirement of Exchange 2010 SP1.

So, to import a single PST file into an Exchange 2010 mailbox:

  1. Install Outlook 2010 x64 on the Exchange server. Bear in mind that, by default, on a machine with no pre-existing Office installation, the DVD’s autorun setup will try to install the x86 applications. Be sure to manually run the installer from the x64 directory to install the x64 version of Outlook, although this step is not necessary for Exchange 2010 SP1, as this now (re)includes a MAPI provider.
  2. Enable Import Permissions for a security group which your user belongs to – In this case, ‘Mailbox Support’ – with the following command:

    New-ManagementRoleAssignment -Name "Import Export Mailbox Admins" `
        -SecurityGroup "Mailbox Support" `
        -Role "Mailbox Import Export"

  3. Import the desired PST file into the appropriate user’s mailbox with the following command:

    Import-Mailbox -PSTFolderPath pstfilepath -Identity exchangealias

Exchange 2010 SP1 differs a little, in that you don’t need to install Outlook 2010 x64, and rather than the synchronous Import-Mailbox cmdlet, the asynchronous New-MailBoxImportRequest can be used, which takes the form of:

New-MailboxImportRequest -FilePath pstfilepath -Mailbox mailbox

The status of current requests can be viewed with the Get-MailboxImportRequest cmdlet, and completed requests can be cleared (or pending/in-progress requests cancelled) with the Remove-MailboxImportRequest cmdlet. One of the advantages of this new cmdlet, other than it being asynchronous, is that you can specify an additional –IsArchive parameter, which will import the PST directly into the user’s archive mailbox.

I did experience a few problems using these cmdlets during the brief time I spent doing research for this guide. The Exchange 2010 RTM Import-Mailbox on one system simply refused to play nicely, and kept throwing the following error:

Error was found for <Mailbox Name> because: Error occurred in the step: Approving object. An unknown error
has occurred., error code: -2147221219
    + CategoryInfo          : InvalidOperation: (0:Int32) [Import-Mailbox], RecipientTaskException
    + FullyQualifiedErrorId : CFFD629B,Microsoft.Exchange.Management.RecipientTasks.ImportMailbox

Not a lot of help in itself, although a little Googling and experimentation revealed four main potential causes for this error:

  1. The appropriate role permissions have not been added to the user’s security profile.
  2. The version of MAPI being used has somehow got confused, which can be fixed by running the fixmapi command from the command prompt.
  3. The PST file is password protected.
  4. There is a bug in Exchange.

In my case, I’d unfortunately hit the fourth problem, and the workaround proved to be pretty horrific – it may simply be worth waiting for a fix from Microsoft. To complete my task, I had to temporarily add a new domain controller to my network to host a new Exchange 2010 server. I then moved the target mailboxes (i.e. the ones for which I had PSTs to import) across to this new server, performed the import, and then moved the mailboxes back to their original Exchange server and removed the temporary server from the network (Like I said, pretty horrific).

Upgrading the system to 2010 SP1 and using the New-MailBoxImportRequest cmdlet on the same system yielded the following error:

Couldn’t connect to the target mailbox.

+ CategoryInfo : NotSpecified: (0:Int32) [New-MailboxImportRequest], RemoteTransientException

    + FullyQualifiedErrorId : 1B0DDEBA,Microsoft.Exchange.Management.RecipientTasks.NewMailboxImportRequest

Again, this appears to be a known issue, and apparently one which is scheduled to be fixed before the final release of SP1.

Finding PST files on the network

So, we’ve seen the process for importing a local PST file into an Exchange server. In reality, however, it’s likely that these PST files are scattered liberally around your network on the hard drives of your users’ machines as a result of Outlook’s new personal archiving functionality. Ideally, so that this mass-import process is transparent to your users, you’d like some way of finding all of these PST files, pairing them up with their users, and then simply importing them into the appropriate mailbox.

There are a few steps required to set up something like that. First, we can query Active Directory for a list of all the machines attached to your domain. We can then use WMI to search each of these machines for PST files, and the file paths for these PSTs should hopefully give us a clue as to which user they belong to (by default, they will be created in a directory path containing the username.) We can also grab the file owner file attribute, which should correlate with the details in the file path.

Naturally, this technique requires that all of the machines in your network are switched on and accessible by WMI, although a list of the machines which could not be queried can be provided as an output.

Notes about WMI

By default, WMI is blocked by the Windows Firewall in Windows 7 and Windows Server 2008 R2, so you’ll probably need to open up the ports on all of your users’ machines. This can be done with the netsh command, or through a change to group policy.
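
One way to open that rule group from a script (this uses the standard built-in firewall rule group name; run it from an elevated prompt on each client):

```powershell
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
```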

You might quite rightly be asking yourself “What are the implications of this?” WMI is a powerful beast which allows remote access to many aspects of a user’s machine. As such, it could be considered a significant security vulnerability. In addition, it’s typically accessed though port 135, which not only permits access to WMI, but also to any other DCOM components which may be installed on a machine, thus opening the way for exploitation by Trojans and the like. Needless to say, the ports are blocked by default for a reason, so carefully consider all of the implications when opening them.

WMI will also not help you if the machines you wish to tinker with are subject to NAT (Network Address Translation) – You’ll simply be unable to reach these machines.

Nevertheless, let’s assume a situation without any NAT, and where the security risks have been minimised. The following script generates a txt file (the filename defined on line 2) of all the computers on your domain to be searched. This file can then be manually edited with notepad to remove any machines you don’t wish to search:

$strCategory = "computer"
$strOutput = "c:\computernames.txt"

$objDomain = New-Object System.DirectoryServices.DirectoryEntry

$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = $objDomain
$objSearcher.Filter = ("(objectCategory=$strCategory)")

$colProplist = "name"
foreach ($i in $colPropList) {$objSearcher.PropertiesToLoad.Add($i)}

$colResults = $objSearcher.FindAll()

[bool]$firstOutput = $true
foreach ($objResult in $colResults)
{
    $objComputer = $objResult.Properties
    if ($firstOutput)
    {
        Write-Output $objComputer.name | Out-File -FilePath $strOutput
        $firstOutput = $false
    }
    else
    {
        Write-Output $objComputer.name | Out-File -FilePath $strOutput -Append
    }
}

Listing 1 – A PowerShell script to generate a list of all machines on your domain which are to be searched for PST files.

The next script will generate a CSV (Comma separated values) file detailing the network paths of the PST files you need to import:

$strComputers = Get-Content -Path "c:\computernames.txt"

[bool]$firstOutput = $true
foreach ($strComputer in $strComputers)
{
    $colFiles = Get-WmiObject -Namespace "root\CIMV2" `
                              -ComputerName $strComputer `
                              -Query "Select * from CIM_DataFile Where Extension = 'pst'"
    foreach ($objFile in $colFiles)
    {
        if ($objFile.FileName -ne $null)
        {
            $filepath = $objFile.Drive + $objFile.Path + $objFile.FileName + "." + $objFile.Extension

            $query = "ASSOCIATORS OF {Win32_LogicalFileSecuritySetting='" `
                   + $filepath `
                   + "'} WHERE AssocClass=Win32_LogicalFileOwner ResultRole=Owner"
            $colOwners = Get-WmiObject -Namespace "root\CIMV2" `
                                       -ComputerName $strComputer `
                                       -Query $query
            $objOwner = $colOwners[0]

            $user = $objOwner.ReferencedDomainName + "\" + $objOwner.AccountName
            $output = $strComputer + "," + $filepath + "," + $user

            if ($firstOutput)
            {
                Write-Output $output | Out-File -FilePath c:\pstdetails.csv
                $firstOutput = $false
            }
            else
            {
                Write-Output $output | Out-File -FilePath c:\pstdetails.csv -Append
            }
        }
    }
}

Listing 2 – A PowerShell script to find and list network paths for PST files to be imported.

This script will take as input a text file containing a list of machine names, which is, conveniently, the output of the first script. It will then generate a .csv file of all the PST files found on those machines, and the owners associated with them. So far, so painless.

Importing the remote PSTs into Exchange

Now that we’ve seen how to gain a list of machines and their respective PST files, we now need to import these files into Exchange. The following script does just that:

# Read in pst file locations and users
$strPSTFiles = Get-Content -Path "c:\pstdetails.csv"
foreach ($strPSTFile in $strPSTFiles)
{
    $strMachine = $strPSTFile.Split(',')[0]
    $strPath    = $strPSTFile.Split(',')[1]
    $strOwner   = $strPSTFile.Split(',')[2]

    # Get network path for pst file
    $source = "\\" + $strMachine + "\" + $strPath.Replace(':', '$')

    # Import pst to mailbox - Exchange 2010 RTM:
    Import-Mailbox -PSTFolderPath $source -Identity $strOwner
    # ...or Exchange 2010 SP1:
    New-MailboxImportRequest -FilePath $source -Mailbox $strOwner
}

Listing 3 – PowerShell to import a list of PST files into Exchange from their respective machines.

Of the two import commands at the end of the script, the first (Import-Mailbox) is the Exchange 2010 RTM cmdlet, and the second (New-MailboxImportRequest) is the Exchange 2010 SP1 equivalent (delete as appropriate).

The Exchange 2010 SP1 version of the script will execute in far less time than the original RTM version due to the asynchronous nature of the ImportRequest cmdlet. These requests are processed in the background and can be monitored with the Get-MailboxImportRequest cmdlet to observe their status. Once these have completed, as mentioned earlier, it’s necessary to actively clear the requests with the Remove-MailboxImportRequest cmdlet. As easy as this all sounds, there are quite a few potential pitfalls here:

  • The users’ machines must be on,
  • File sharing must be on to allow for the file to be transferred,
  • Outlook must not be running on the remote users’ machines – If Outlook is running and has the PST file attached, then the file will be locked and unavailable for importing,
  • Passwords are not supported – The PowerShell cmdlet used by Exchange to import PST files simply doesn’t handle passwords.
  • There’s a limit on concurrent requests – With the SP1 asynchronous requests, no more than 10 concurrent requests can be handled per mailbox without manually specifying a unique name for each request (this makes the script a little more complicated, but is not a showstopper, particularly given that most users will only have a single errant PST file to be imported.)

That being said, there are various things you could do to augment this script; some suggestions include:

  • Having WMI shut down Outlook on remote users’ machines before attempting import.
  • Generating a further output file detailing a list of all the PSTs which failed to import, with reasons why. It would be useful to know if these files were password protected, or the machine hosting them was shut down or had disconnected since they were identified.
  • In the SP1 case, you could automate the polling of the requests’ statuses, and the removal of those which have completed.
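
That last suggestion might look something like this sketch (the 60-second interval is arbitrary):

```powershell
# Wait until no import requests are still queued or running...
while (Get-MailboxImportRequest | Where-Object { $_.Status -eq 'Queued' -or $_.Status -eq 'InProgress' }) {
    Start-Sleep -Seconds 60
}
# ...then clear the completed ones
Get-MailboxImportRequest -Status Completed | Remove-MailboxImportRequest -Confirm:$false
```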


So, although it’s possible to search for and import your users’ PST files into Exchange from across the network, it’s not an easy or particularly well-documented process. Frustratingly, there are also elements of the process which are directly hampered by errors and glitches.

Although none of these problems are show-stoppers, they’ll raise your blood pressure if you don’t know about them! Hopefully this guide will set you on the right track and steer you around all but the most well-concealed pitfalls.

The Really Easy Way

As I mentioned at the start, although I’ve broken down the whole process into easy-to-follow steps and pointed out where you’ll need to pay extra attention, this is not, in fact, the easiest way of handling the PST import process. If you’d rather negate the whole problem in one fell swoop, then there are third-party tools which will handle the whole import process for you quickly and smoothly, and which will allow you to manage every aspect of the import at your convenience.


Whilst I was investigating the background facts for this article, I found the following resources on the internet to be of interest:

At first I was Googling for ‘Importing PST files into Exchange’, and the pages I found on Experts Exchange proved to be an interesting read.

However, as stated in part 1 of this guide, one of the systems I was testing these processes on kept throwing errors when I was trying to execute the import-mailbox cmdlet. This page proved very helpful in identifying the issue I’d hit and suggesting a workaround.

I was then faced with the problem of actually locating the PST files on the network; I found a handy page on the MSExchangeTips blog, detailing how to query WMI for files across the network.

Posted in Exchange, Powershell

PowerShell profiles in Windows Server 2008 R2

Posted by Alin D on January 25, 2011


A PowerShell profile is a saved collection of settings for customizing the PowerShell environment. There are four types of profiles, loaded in a specific order each time PowerShell starts. The following sections explain these profile types, where they should be located, and the order in which they are loaded.

The All Users Profile

This profile is located in %windir%\System32\WindowsPowerShell\v1.0\profile.ps1. Settings in the All Users profile are applied to all PowerShell users on the current machine. If you plan to configure PowerShell settings across the board for users on a machine, this is the profile to use.

The All Users Host-Specific Profile

This profile is located in %windir%\System32\WindowsPowerShell\v1.0\ShellID_profile.ps1. Settings in the All Users host-specific profile are applied to all users of the current shell (by default, the PowerShell console). PowerShell supports the concept of multiple shells or hosts. For example, the PowerShell console is a host and the one most users use exclusively. However, other applications can call an instance of the PowerShell runtime to access and run PowerShell commands and scripts. An application that does this is called a hosting application and uses a host-specific profile to control the PowerShell configuration. The host-specific profile name is reflected by the host’s ShellID. In the PowerShell console, the ShellID is the following:

PS C:\> $ShellId
Microsoft.PowerShell

Putting this together, the PowerShell console’s All Users host-specific profile is named Microsoft.PowerShell_profile.ps1. For other hosts, the ShellID and All Users host-specific profile names are different. For example, the PowerShell Analyzer is a PowerShell host that acts as a rich graphical interface for the PowerShell environment. Its ShellID is PowerShellAnalyzer.PSA, and its All Users host-specific profile name is PowerShellAnalyzer.PSA_profile.ps1.
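
You can confirm the profile paths and the ShellID for your own host directly from the shell:

```powershell
# $PROFILE is a string (the current user, current host profile path), but it
# also carries the other three paths as extra properties:
$PROFILE | Format-List * -Force   # AllUsersAllHosts, AllUsersCurrentHost,
                                  # CurrentUserAllHosts, CurrentUserCurrentHost
$ShellId                          # e.g. Microsoft.PowerShell in the console
```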

The Current User’s Profile

This profile is located in %userprofile%\My Documents\WindowsPowerShell\profile.ps1. Users who want to control their own profile settings can use the current user’s profile. Settings in this profile are applied only to the user’s current PowerShell session and don’t affect any other users.

The Current User’s Host-Specific Profile

This profile is located in %userprofile%\My Documents\WindowsPowerShell\ShellID_profile.ps1. Like the All Users host-specific profile, this profile type loads settings for the current shell. However, the settings are user specific.

When PowerShell is started for the first time, you might see a message indicating that scripts are disabled and no profiles are loaded. This behavior can be modified by changing the PowerShell execution policy.
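As a quick sketch, the profile paths above are also exposed through the built-in $profile variable, and a profile is just a script of ordinary commands. The customizations below are illustrative, not prescriptive:

```powershell
# List the four profile paths PowerShell tracks for the current host
$profile | Get-Member -MemberType NoteProperty

# Example contents for a Current User profile (profile.ps1); these
# particular settings are hypothetical -- put whatever you like here
Set-Location C:\Scripts                  # start every session in a working folder
Set-Alias np notepad.exe                 # a personal shortcut alias
function Get-SessionUptime {
    # How long has this PowerShell session been running?
    (Get-Date) - (Get-Process -Id $PID).StartTime
}
```

Because profiles are themselves scripts, loading them is subject to the execution policy discussed next.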


When WSH was released with Windows 98, it was a godsend for Windows administrators who wanted the same automation capabilities as their UNIX brethren. At the same time, virus writers quickly discovered that WSH also opened up a large attack vector against Windows systems.

Almost anything on a Windows system can be automated and controlled by using WSH, which is an advantage for administrators. However, WSH doesn’t provide any security in script execution. If given a script, WSH runs it. Where the script comes from or its purpose doesn’t matter. With this behavior, WSH became known more as a security vulnerability than an automation tool.

Execution Policies

Because of past criticisms of WSH’s security, when the PowerShell team set out to build a Microsoft shell, the team decided to include an execution policy to mitigate the security threats posed by malicious code. An execution policy defines restrictions on how PowerShell allows scripts to run or what configuration files can be loaded. PowerShell has four primary execution policies, discussed in more detail in the following sections: Restricted, AllSigned, RemoteSigned, and Unrestricted.

Execution policies can be circumvented by a user who manually executes commands found in a script file. Therefore, execution policies are not meant to replace a security system that restricts a user’s actions and instead should be viewed as a restriction that attempts to prevent malicious code from being executed.

Restricted By default, PowerShell is configured to run under the Restricted execution policy. This execution policy is the most secure because it allows PowerShell to operate only in an interactive mode. This means no scripts can be run, and only configuration files digitally signed by a trusted publisher are allowed to run or load.

AllSigned The AllSigned execution policy is a notch under Restricted. When this policy is enabled, only scripts or configuration files that are digitally signed by a publisher you trust can be run or loaded. Here’s an example of what you might see if the AllSigned policy has been enabled:

PS C:\Scripts> .\evilscript.ps1
The file C:\Scripts\evilscript.ps1 cannot be loaded. The file
C:\Scripts\evilscript.ps1 is not digitally signed. The script will not
execute on the system. Please see "get-help about_signing" for more details.
At line:1 char:16
+ .\evilscript.ps1 <<<<
PS C:\Scripts>

Signing a script or configuration file requires a code-signing certificate. This certificate can come from a trusted certificate authority (CA), or you can generate one with the Certificate Creation Tool (Makecert.exe). Usually, however, you want a valid code-signing certificate from a well-known trusted CA, such as VeriSign, Thawte, or your corporation’s internal Public Key Infrastructure (PKI). Otherwise, sharing your scripts or configuration files with others might be difficult because your computer isn’t a trusted CA by default.

RemoteSigned The RemoteSigned execution policy is designed to prevent remote PowerShell scripts and configuration files that aren’t digitally signed by a trusted publisher from running or loading automatically. Scripts and configuration files that are locally created can be loaded and run without being digitally signed, however.

A remote script or configuration file can be obtained from a communication application, such as Microsoft Outlook, Internet Explorer, Outlook Express, or Windows Messenger. Running or loading a file downloaded from any of these applications results in the following error message:

PS C:\Scripts> .\interscript.ps1
The file C:\Scripts\interscript.ps1 cannot be loaded. The file
C:\Scripts\interscript.ps1 is not digitally signed. The script will not execute on
the system. Please see "get-help about_signing" for more details.
At line:1 char:17
+ .\interscript.ps1 <<<<
PS C:\Scripts>

To run or load an unsigned remote script or configuration file, you must specify whether to trust the file. To do this, right-click the file in Windows Explorer and click Properties. On the General tab, click the Unblock button (see Figure 21.2).

After you trust the file, the script or configuration file can be run or loaded. If it's digitally signed but the publisher isn't trusted, however, PowerShell displays the following prompt:

PS C:\Scripts> .\signed.ps1

Do you want to run software from this untrusted publisher?

FIGURE 21.2 Trusting a remote script or configuration file.

File C:\Scripts\signed.ps1 is published by OU=IT, L=Oakland, S=California, C=US and is not trusted on your
system. Only run scripts from trusted publishers.
[V] Never run [D] Do not run [R] Run once [A] Always run [?] Help
(default is "D"):

In this case, you must choose whether to trust the file content.

Unrestricted As the name suggests, the Unrestricted execution policy removes almost all restrictions for running scripts or loading configuration files. All local or signed trusted files can run or load, but for remote files, PowerShell prompts you to choose an option for running or loading that file, as shown here:

PS C:\Scripts> .\remotescript.ps1

Security Warning
Run only scripts that you trust. While scripts from the Internet can be useful,
this script can potentially harm your computer. Do you want to run
C:\Scripts\remotescript.ps1?
[D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"):

In addition to the primary execution policies, two new execution policies were introduced in PowerShell 2.0, as discussed in the following sections.


Bypass

When this execution policy is used, nothing is blocked and there are no warnings or prompts. This execution policy is typically used when another application that has its own security model is hosting PowerShell, or when a PowerShell script has been embedded into another application.


Undefined

When this execution policy is set, it means that no execution policy has been defined in the current scope. If Undefined is the execution policy in all scopes, the effective execution policy is Restricted.

Setting the Execution Policy

By default, when PowerShell is first installed, the execution policy is set to Restricted. To change the execution policy, you use the Set-ExecutionPolicy cmdlet, shown here:

PS C:\> Set-ExecutionPolicy AllSigned

Alternatively, you can use a Group Policy setting to set the execution policy for a number of computers. In a PowerShell session, if you want to know the current execution policy for a machine, use the Get-ExecutionPolicy cmdlet:

PS C:\> Get-ExecutionPolicy
AllSigned
PS C:\>

Execution policies can be defined not only for the local machine, but also for the current user or a particular process. Each of these boundaries is called an execution policy scope. To define the execution policy for a scope, use the Scope parameter of the Set-ExecutionPolicy cmdlet; likewise, to learn the execution policy for a particular scope, use the Scope parameter of the Get-ExecutionPolicy cmdlet. The valid arguments for the Scope parameter of both cmdlets are MachinePolicy, UserPolicy, Process, CurrentUser, and LocalMachine.
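As a sketch, the Scope parameter lets you set and query each boundary independently (output will vary by machine):

```powershell
# Loosen the policy for the current user only; LocalMachine is untouched
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

# Query one specific scope...
Get-ExecutionPolicy -Scope CurrentUser

# ...or the effective policy, resolved in precedence order:
# MachinePolicy > UserPolicy > Process > CurrentUser > LocalMachine
Get-ExecutionPolicy
```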

Posted in Powershell | Tagged: , , , , , , | Leave a Comment »

Using PowerShell ISE and alias cmdlets

Posted by Alin D on January 25, 2011

PowerShell ISE

Another new feature that was introduced in PowerShell 2.0 is called the Integrated Scripting Environment (ISE). The ISE, as shown in Figure 21.1, is a Windows Presentation Foundation (WPF)–based host application for Windows PowerShell. Using the ISE, an IT professional can both run commands and write, test, and debug scripts.

Additional features of the ISE include the following:

  • A Command pane for running interactive commands.
  • A Script pane for writing, editing, and running scripts. You can run the entire script or selected lines from the script.
  • A scrollable Output pane that displays a transcript of commands from the Command and Script panes and their results.
  • Up to eight independent PowerShell execution environments in the same window, each with its own Command, Script, and Output panes.
  • Multiline editing in the Command pane, which lets you paste multiple lines of code, run them, and then recall them as a unit.
  • A built-in debugger for debugging commands, functions, and scripts.

FIGURE 21.1 The PowerShell ISE.

  • Customizable features that let you adjust the colors, font, and layout.
  • A scriptable object model that lets you further customize and extend the PowerShell ISE.
  • Line and column numbers, keyboard shortcuts, tab completion, context-sensitive Help, and Unicode support.

The PowerShell ISE is an optional feature in Windows Server 2008 R2. To use the ISE, it first must be installed using the Add Features Wizard. Because the ISE requires the .NET Framework 3.5 with Service Pack 1, the Server Manager will also install this version of the .NET Framework if it is not already installed. Once installed, use either of the following methods to start it:

  1. Start Windows PowerShell ISE by clicking Start, All Programs, Accessories, Windows PowerShell, and then click Windows PowerShell ISE or Windows PowerShell ISE (x86).
  2. Or execute the powershell_ise.exe executable.

ISE Requirements

The following requirements must be met to use the ISE:

  • Windows XP and later versions of Windows
  • Microsoft .NET Framework 3.5 with Service Pack 1

Being a GUI-based application, the PowerShell ISE does not work on Server Core installations of Windows Server.


Variables

A variable is a storage place for data. In most shells, the only data that can be stored in a variable is text data. In advanced shells and programming languages, data stored in variables can be almost anything, from strings to sequences to objects. Similarly, PowerShell variables can be just about anything.

To define a PowerShell variable, you must name it with the $ prefix, which helps delineate variables from aliases, cmdlets, filenames, and other items a shell operator might want to use. A variable name can contain any combination of alphanumeric characters (a–z and 0–9) and the underscore (_) character. Although PowerShell variables have no set naming convention, using a name that reflects the type of data the variable contains is recommended, as shown in this example:

PS C:\> $Stopped = get-service | where {$_.status -eq "stopped"}
PS C:\> $Stopped

Status Name DisplayName
Stopped ALG Application Layer Gateway Service
Stopped Appinfo Application Information
Stopped AppMgmt Application Management
Stopped aspnet_state ASP.NET State Service
Stopped AudioEndpointBu… Windows Audio Endpoint Builder
Stopped Audiosrv Windows Audio


As you can see from the previous example, the information that is contained within the $Stopped variable is a collection of services that are currently stopped.

A variable name can consist of any characters, including spaces, provided the name is enclosed in curly braces ({ and } symbols).
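For example, the following brace-delimited name is legal, though rarely advisable:

```powershell
# Curly braces permit otherwise-illegal characters, including spaces
${stopped service count} = @(Get-Service | Where-Object {$_.Status -eq "Stopped"}).Count
${stopped service count}
```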


Aliases

Like most existing command-line shells, PowerShell allows command aliases to be defined. Aliasing is a method that is used to execute existing shell commands (cmdlets) using a different name. In many cases, the main reason aliases are used is to establish abbreviated command names in an effort to reduce typing. For example:

PS C:\> gps | ? {$_.Company -match ".*Microsoft*"} | ft Name, ID, Path -AutoSize

The preceding example shows the default aliases for the Get-Process, Where-Object, and Format-Table cmdlets.

Alias cmdlets

In PowerShell, several alias cmdlets enable an administrator to define new aliases, export aliases, import aliases, and display existing aliases. By using the following command, an administrator can get a list of all the related alias cmdlets:

PS C:\> get-command *-Alias

CommandType Name Definition
Cmdlet Export-Alias Export-Alias [-Path]
Cmdlet Get-Alias Get-Alias [[-Name]
Cmdlet Import-Alias Import-Alias [-Path]
Cmdlet New-Alias New-Alias [-Name] [...
Cmdlet Set-Alias Set-Alias [-Name] [...

Use the Get-Alias cmdlet to produce a list of aliases available in the current PowerShell session. The Export-Alias and Import-Alias cmdlets are used to export and import alias lists from one PowerShell session to another. Finally, the New-Alias and Set-Alias cmdlets allow an administrator to define new aliases for the current PowerShell session.
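A short round trip through these cmdlets might look like the following; the alias names and export path are arbitrary:

```powershell
# Define a new alias and confirm it resolves
New-Alias -Name gh -Value Get-Help
Get-Alias gh

# Export this session's aliases, then import them in another session
Export-Alias -Path C:\Scripts\aliases.csv
Import-Alias -Path C:\Scripts\aliases.csv -ErrorAction SilentlyContinue

# Set-Alias redefines an existing alias (or creates it if absent)
Set-Alias -Name gh -Value Get-History
```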

Creating Persistent Aliases

The aliases created when using the New-Alias and Set-Alias cmdlets are valid only in the current PowerShell session. Exiting a PowerShell session discards any existing aliases. To have aliases persist across PowerShell sessions, they can be defined in a profile file, as shown in this example:

set-alias new new-object
set-alias time get-date

Although command shortening is appealing, the extensive use of aliases isn't recommended. One reason is that aliases aren't very portable in relation to scripts. For example, if a lot of aliases are used in a script, each alias must be included via a Set-Alias sequence at the start of the script to make sure those aliases are present, regardless of machine or session profile, when the script runs.

However, a bigger concern than portability is that aliases can often confuse or obscure the true meaning of commands or scripts. The aliases that are defined might make sense to a scripter, but not everyone shares the logic in defining aliases. So if a scripter wants others to understand their scripts, they shouldn't use too many aliases.

If aliases will be used in a script, use names that other people can understand. For example, there's no reason, other than to encode a script, to create aliases consisting of only two letters.


Scopes

A scope is a logical boundary in PowerShell that isolates the use of functions and variables. Scopes can be defined as global, local, script, and private. They function in a hierarchy in which scope information is inherited downward. For example, the local scope can read the global scope, but the global scope can't read information from the local scope. Scopes and their use are described in the following sections.


Global Scope

As the name indicates, a global scope applies to an entire PowerShell instance. Global scope data is inherited by all child scopes, so any commands, functions, or scripts that run make use of variables defined in the global scope. However, global scopes are not shared between different instances of PowerShell.

The following example shows the $Processes variable being defined as a global variable in the ListProcesses function. Because the $Processes variable is being defined globally, checking $Processes.Count after ListProcesses completes returns a count of the number of active processes at the time ListProcesses was executed:

PS C:\> function ListProcesses {$Global:Processes = get-process}
PS C:\> ListProcesses
PS C:\> $Processes.Count

In PowerShell, an explicit scope indicator can be used to determine the scope in which a variable resides. For instance, if a variable is to reside in the global scope, it should be defined as $Global:variablename. If an explicit scope indicator isn’t used, a variable resides in the current scope for which it’s defined.


Local Scope

A local scope is created dynamically each time a function, filter, or script runs. After a local scope has finished running, information in it is discarded. A local scope can read information from the global scope but can't make changes to it.

The following example shows the locally scoped variable $Processes being defined in the ListProcesses function. After ListProcesses finishes running, the $Processes variable no longer contains any data because it was defined only in the ListProcesses function. Notice how checking $Processes.Count after the ListProcesses function is finished produces no results:

PS C:\> function ListProcesses {$Processes = get-process}
PS C:\> ListProcesses
PS C:\> $Processes.Count
PS C:\>


Script Scope

A script scope is created whenever a script file runs and is discarded when the script finishes running. To see an example of how a script scope works, create the following script and save it as ListProcesses.ps1:

$Processes = get-process
write-host "Here is the first process:" -Foregroundcolor Yellow
$Processes[0]

After creating the script file, run it from a PowerShell session. The output should look similar to this example:

PS C:\> .\ListProcesses.ps1
Here is the first process:

Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName
105 5 1992 4128 32 916 alg

PS C:\> $Processes[0]
Cannot index into a null array.
At line:1 char:12
+ $Processes[0 <<<< ]
PS C:\>

Notice that when the ListProcesses.ps1 script runs, information about the first process object in the $Processes variable is written to the console. However, when you try to access information in the $Processes variable from the console, an error is returned because the $Processes variable is valid only in the script scope. When the script finishes running, that scope and all its contents are discarded.

What if an administrator wants to use a script in a pipeline or access it as a library file for common functions? Normally, this isn't possible because PowerShell discards a script scope whenever a script finishes running. Luckily, PowerShell supports the dot-sourcing technique, a term that originally came from UNIX. Dot sourcing a script file tells PowerShell to load a script scope into the calling parent's scope.

To dot source a script file, simply prefix the script name with a period (dot) when running the script, as shown here:

PS C:\> . .\coolscript.ps1
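For instance, if a script defines a function, dot sourcing keeps that function available after the script completes. The file and function below are hypothetical:

```powershell
# --- LibraryFunctions.ps1 (a hypothetical library script) ---
function Get-StoppedServiceCount {
    @(Get-Service | Where-Object {$_.Status -eq "Stopped"}).Count
}

# --- In the console ---
# Run normally, the script scope (and the function with it) would be
# discarded; dot sourcing loads the definition into the current scope:
. .\LibraryFunctions.ps1
Get-StoppedServiceCount
```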


Private Scope

A private scope is similar to a local scope, with one key difference: Definitions in the private scope aren't inherited by any child scopes.

The following example shows the privately scoped variable $Processes defined in the ListProcesses function. Notice that during execution of the ListProcesses function, the $Processes variable isn't available to the child scope represented by the script block enclosed by { and } in lines 6-9.

PS C:\> function ListProcesses {$Private:Processes = get-process
>>    write-host "Here is the first process:" -Foregroundcolor Yellow
>>    $Processes[0]
>>    write-host
>>     &{
>>      write-host "Here it is again:" -Foregroundcolor Yellow
>>      $Processes[0]
>>     }
>>  }
>>
PS C:\> ListProcesses
Here is the first process:

Handles NPM(K) PM(K) WS(K) VM(M) CPU(s) Id ProcessName
105 5 1992 4128 32 916 alg

Here it is again:
Cannot index into a null array.
At line:7 char:20
+           $Processes[0 <<<< ]
PS C:\>

This example works because it uses the & call operator. With this call operator, you can execute fragments of script code in an isolated local scope. This technique is helpful for isolating a script block and its variables from a parent scope or, as in this example, isolating a privately scoped variable from a script block.

Providers and Drives

Most computer systems are used to store data, often in a structure such as a file system. Because of the amount of data stored in these structures, processing and finding information can be unwieldy. Most shells have interfaces, or providers, for interacting with data stores in a predictable, set manner. PowerShell also has a set of providers for presenting the contents of data stores through a core set of cmdlets. You can then use these cmdlets to browse, navigate, and manipulate data from stores through a common interface. To get a list of the core cmdlets, use the following command:

PS C:\> help about_core_commands

To view built-in PowerShell providers, use the following command:

PS C:\> get-psprovider

Name Capabilities Drives
WSMan Credentials {WSMan}
Alias ShouldProcess {Alias}
Environment ShouldProcess {Env}
FileSystem Filter, ShouldProcess {C, D, E}
Function ShouldProcess {Function}
Registry ShouldProcess, Transactions {HKLM, HKCU}
Variable ShouldProcess, Transactions {Variable}
Certificate ShouldProcess, Transactions {cert}

PS C:\>

The preceding list displays not only built-in providers, but also the drives each provider currently supports. A drive is an entity that a provider uses to represent a data store through which data is made available to the PowerShell session. For example, the Registry provider creates a PowerShell drive for the HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER Registry hives.

To see a list of all current PowerShell drives, use the following command:

PS C:\> get-psdrive

Name Used (GB) Free (GB) Provider Root
Alias Alias
C 68.50 107.00 FileSystem C:
cert Certificate
D 8.98 1.83 FileSystem D:
E FileSystem E:
Env Environment
Function Function
Variable Variable

PS C:\>
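Because every provider exposes its store as a drive, the same core cmdlets navigate all of them, and you can map additional drives yourself. The drive name and root below are arbitrary:

```powershell
# Navigate the Registry provider exactly like a file system
Set-Location HKLM:\SOFTWARE\Microsoft
Get-ChildItem

# Map a new drive onto a deep file system path to shorten navigation
New-PSDrive -Name Scripts -PSProvider FileSystem -Root C:\Users\Public\Scripts
Get-ChildItem Scripts:

# Remove the drive when finished; the underlying folder is untouched
Remove-PSDrive -Name Scripts
```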

Posted in Powershell | Tagged: , , , , , , | 1 Comment »

PowerShell remoting in Windows Server 2008 R2

Posted by Alin D on January 25, 2011


With PowerShell 1.0, one of the major disadvantages was the lack of an interface for executing commands on a remote machine. Granted, you could use Windows Management Instrumentation (WMI) to accomplish this, and some cmdlets, such as Get-Process and Get-Service, enable you to connect to remote machines. But the concept of a native remoting interface was sorely missing when PowerShell was first released, and the absence of remote command execution was a glaring gap that needed to be addressed. Naturally, the PowerShell product team took this limitation to heart and addressed it by introducing a new feature in PowerShell 2.0, called remoting.

Remoting, as its name suggests, is a feature designed to facilitate command (or script) execution on remote machines. This could mean execution of a command or commands on one remote machine or on thousands of remote machines (provided you have the infrastructure to support this). Additionally, commands can be issued synchronously or asynchronously, one at a time or through a persistent connection called a runspace, and even scheduled or throttled.

To use remoting, you must have the appropriate permissions to connect to a remote machine, execute PowerShell, and execute the desired command(s). In addition, the remote machine must have PowerShell 2.0 and Windows Remote Management (WinRM) installed, and PowerShell must be configured for remoting.

Additionally, when using remoting, the remote PowerShell session that executes your commands determines the execution environment. As such, the commands you attempt to execute are subject to the remote machine’s execution policies, profiles, and preferences.

Commands that are executed against a remote machine do not have access to information defined within your local profile. As such, commands that use a function or alias defined in your local profile will fail unless they are defined on the remote machine as well.

How Remoting Works

In its most basic form, PowerShell remoting works using the following conversation flow between “a client” (most likely the machine with your PowerShell session) and “a server” (remote host) that you want to execute command(s) against:

  1. A command is executed on the client.
  2. That command is transmitted to the server.
  3. The server executes the command and then returns the output to the client.
  4. The client displays or uses the returned output.

At a deeper level, PowerShell remoting is very dependent on WinRM for facilitating the command and output exchange between a “client” and “server.” WinRM, which is a component of Windows Hardware Management, is a web-based service that enables administrators to enumerate information on and manipulate a remote machine. To handle remote sessions, WinRM was built around a SOAP-based standards protocol called WS-Management. This protocol is firewall-friendly, and was primarily developed for the exchange of management information between systems that might be based on a variety of operating systems on various hardware platforms.

When PowerShell uses WinRM to ship commands and output between a client and server, that exchange is done using a series of XML messages. The first XML message that is exchanged is a request to the server, which contains the desired command to be executed. This message is submitted to the server using the SOAP protocol. The server, in return, executes the command using a new instance of PowerShell called a runspace. Once execution of the command is complete, the output from the command is returned to the requesting client as the second XML message. This second message, like the first, is also communicated using the SOAP protocol.

This translation into an XML message is performed because you cannot ship “live” .NET objects (how PowerShell relates to programs or system components) across the network. So, to perform the transmission, objects are serialized into a series of XML (CliXML) data elements. When the server or client receives the transmission, it converts the received XML message into a deserialized object type. The resulting object is no longer live. Instead, it is a record of properties based on a point in time and, as such, no longer possesses any methods.
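The effect of this serialization is easy to see with Get-Member; SERVER01 below is a placeholder for a remoting-enabled host:

```powershell
# A local Process object is live -- it still carries methods such as Kill()
Get-Process -Id $PID | Get-Member -MemberType Method

# The remoted result is a deserialized snapshot: its properties survive
# the CliXML round trip, but its methods do not, and Get-Member reports
# the type name with a "Deserialized." prefix
$remote = Invoke-Command -ComputerName SERVER01 {Get-Process -Id $PID}
$remote | Get-Member -MemberType Method
```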

Remoting Requirements

To use remoting, both the local and remote computers must have the following:

  • Windows PowerShell 2.0 or later
  • Microsoft .NET Framework 2.0 or later
  • Windows Remote Management 2.0

Windows Remote Management 2.0 is part of Windows 7 and Windows Server 2008 R2. For down-level versions of Windows, an integrated installation package must be installed, which includes PowerShell 2.0.

Configuring Remoting

By default, WinRM is installed on all Windows Server 2008 R2 machines as part of the default operating system installation. However, for security purposes, PowerShell remoting and WinRM are, by default, configured to not allow remote connections. You can use several methods to configure remoting, as described in the following sections.

Method One The first and easiest method to enable PowerShell remoting is to execute the Enable-PSRemoting cmdlet. For example:

PS C:\> Enable-PSRemoting

Once executed, the following tasks are performed by the Enable-PSRemoting cmdlet:

  • Runs the Set-WSManQuickConfig cmdlet, which performs the following tasks:
    • Starts the WinRM service.
    • Sets the startup type on the WinRM service to Automatic.
    • Creates a listener to accept requests on any IP address.
    • Enables a firewall exception for WS-Management communications.
  • Enables all registered Windows PowerShell session configurations to receive instructions from a remote computer.
  • Registers the “Microsoft.PowerShell” session configuration, if it is not already registered.
  • Registers the “Microsoft.PowerShell32” session configuration on 64-bit computers, if it is not already registered.
  • Removes the “Deny Everyone” setting from the security descriptor for all the registered session configurations.
  • Restarts the WinRM service to make the preceding changes effective.

To configure PowerShell remoting, the Enable-PSRemoting cmdlet must be executed using the Run As Administrator option.

Method Two The second method to configure remoting is to use Server Manager. Use the following steps to use this method:

  1. Open Server Manager.
  2. In the Server Summary area of the Server Manager home page, click Configure Server Manager Remote Management.
  3. Next, select Enable Remote Management of This Server from Other Computers.
  4. Click OK.

Method Three Finally, the third method to configure remoting is to use GPO. Use the following steps to use this method:

  1. Create a new GPO, or edit an existing one.
  2. Expand Computer Configuration, Policies, Administrative Templates, Windows Components, Windows Remote Management, and then select WinRM Service.
  3. Open the Allow Automatic Configuration of Listeners Policy, select Enabled, and then define the IPv4 filter and IPv6 filter as *.
  4. Click OK.
  5. Next, expand Computer Configuration, Policies, Windows Settings, Security Settings, Windows Firewall with Advanced Security, Windows Firewall with Advanced Security, and then Inbound Rules.
  6. Right-click Inbound Rules, and then click New Rule.
  7. In the New Inbound Rule Wizard, on the Rule Type page, select Predefined.
  8. On the Predefined pull-down menu, select Remote Event Log Management. Click Next.
  9. On the Predefined Rules page, click Next to accept the new rules.
  10. On the Action page, select Allow the Connection, and then click Finish. Allow the Connection is the default selection.
  11. Repeat steps 6 through 10 and create inbound rules for the following predefined rule types:
  • Remote Service Management
  • Windows Firewall Remote Management
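Once remoting has been configured by any of these methods, commands can be pushed to a host either one-off or over a persistent runspace; SERVER01 is a placeholder name:

```powershell
# One-off: run a command remotely and return the (deserialized) output
Invoke-Command -ComputerName SERVER01 {Get-Service WinRM}

# Persistent: create a session once and reuse it across commands --
# variables defined remotely survive between calls
$session = New-PSSession -ComputerName SERVER01
Invoke-Command -Session $session {$procs = Get-Process}
Invoke-Command -Session $session {$procs.Count}
Remove-PSSession $session

# Interactive: work at the remote host's own prompt
Enter-PSSession -ComputerName SERVER01
```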

Background Jobs

Another new feature that was introduced in PowerShell 2.0 is the ability to use background jobs. By definition, a background job is a command that is executed asynchronously without interacting with the current PowerShell session. However, once the background job has finished execution, the results from these jobs can then be retrieved and manipulated based on the task at hand. In other words, by using a background job, you can complete automation tasks that take an extended period of time to run without impacting the usability of your PowerShell session.

By default, background jobs can be executed on the local computer. But, background jobs can also be used in conjunction with remoting to execute jobs on a remote machine.

To use background jobs (local or remote), PowerShell must be configured for remoting.
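A minimal local job cycle looks like this; the event log query is just a stand-in for any long-running command:

```powershell
# Launch the command asynchronously; the prompt returns immediately
$job = Start-Job -ScriptBlock {Get-EventLog -LogName System -Newest 1000}

# Keep working, then poll and collect the output when convenient
Get-Job $job.Id            # State shows Running or Completed
Wait-Job $job | Out-Null   # or block until the job finishes
$results = Receive-Job $job
Remove-Job $job

# The same pattern runs remotely once remoting is configured:
# Invoke-Command -ComputerName SERVER01 {Get-Process} -AsJob
```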

Posted in Powershell | Tagged: , , , , , , | Leave a Comment »

PowerShell commands in Windows Server 2008 R2

Posted by Alin D on January 25, 2011

Shells are a necessity for using operating systems. They give users the ability to execute arbitrary commands and to traverse the file system. Anybody who has used a computer has dealt with a shell, whether by typing commands at a prompt or by clicking an icon to start a word processing application. Every user works through a shell in some fashion; it’s inescapable when working on a computer system.

Until now, Windows users and administrators primarily have used Windows Explorer or the cmd command prompt (both shells) to interact with most versions of the Windows operating systems. With Microsoft’s release of PowerShell, both a new shell and scripting language, the current standard for interacting with and managing Windows is rapidly changing. This change became evident with the release of Microsoft Exchange Server 2007, which used PowerShell as its management backbone, the addition of PowerShell as a feature within Windows Server 2008, and now the inclusion of PowerShell as part of the Windows 7 and Windows Server 2008 R2 operating systems.

In this chapter, we take a closer look at what shells are and how they have developed. Next, we review Microsoft’s past attempt at providing an automation interface (WSH) and then introduce PowerShell. From there, we step into understanding the PowerShell features and how to use them to manage Windows Server 2008 R2. Finally, we review some best practices for using PowerShell.

Understanding Shells

A shell is an interface that enables users to interact with the operating system. A shell isn’t considered an application because of its inescapable nature, but it’s the same as any other process running on a system. The difference between a shell and an application is that a shell’s purpose is to enable users to run other applications. In some operating systems (such as UNIX, Linux, and VMS), the shell is a command-line interface (CLI); in other operating systems (such as Windows and Mac OS X), the shell is a graphical user interface (GUI).

Both CLI and GUI shells have benefits and drawbacks. For example, most CLI shells allow powerful command chaining (using commands that feed their output into other commands for further processing; this is commonly referred to as the pipeline). GUI shells, however, require commands to be completely self-contained. Furthermore, most GUI shells are easy to navigate, whereas CLI shells require a preexisting knowledge of the system to avoid attempting several commands to discern the location and direction to head in when completing an automation task. Therefore, choosing which shell to use depends on your comfort level and what’s best suited to perform the task at hand.

Even though GUI shells exist, the term “shell” is used almost exclusively to describe a command-line environment, not a task that is performed with a GUI application, such as Windows Explorer. Likewise, shell scripting refers to collecting commands normally entered on the command line into an executable file.

A Short History of Shells

The first shell in wide use was the Bourne shell, the standard user interface for the UNIX operating system; UNIX systems still require it for booting. This robust shell provided pipelines and conditional and recursive command execution. It was developed by C programmers for C programmers.

Oddly, however, despite being written by and for C programmers, the Bourne shell didn’t have a C-like coding style. This lack of similarity to the C language drove the invention of the C shell, which introduced more C-like programming structures. While the C shell inventors were building a better mousetrap, they decided to add command-line editing and command aliasing (defining command shortcuts), which eased the bane of every UNIX user’s existence: typing. The less a UNIX user has to type to get results, the better.

Although most UNIX users liked the C shell, learning a completely new shell was a challenge for some. So, the Korn shell was invented, which added a number of the C shell features to the Bourne shell. Because the Korn shell is a commercially licensed product, the open source software movement needed a shell for Linux and FreeBSD. The collaborative result was the Bourne Again shell, or Bash, invented by the Free Software Foundation.

Throughout the evolution of UNIX and the birth of Linux and FreeBSD, other operating systems were introduced along with their own shells. Digital Equipment Corporation (DEC) introduced Virtual Memory System (VMS) to compete with UNIX on its VAX systems. VMS had a shell called Digital Command Language (DCL) with a verbose syntax, unlike that of its UNIX counterparts. Also, unlike its UNIX counterparts, it wasn’t case sensitive, nor did it provide pipelines.

Somewhere along the way, the PC was born. IBM took the PC to the business market, and Apple rebranded roughly the same hardware technology and focused on consumers. Microsoft made DOS run on the IBM PC, acting as both kernel and shell and including some features of other shells. (The pipeline syntax was inspired by UNIX shells.)

Following DOS was Windows, which went from application to operating system quickly. Windows introduced a GUI shell, which has become the basis for Microsoft shells ever since. Unfortunately, GUI shells are notoriously difficult to script, so Windows also provided a DOSShell-like environment. It was improved with a new executable, cmd.exe (which replaced command.com), and a more robust set of command-line editing features. Regrettably, this change also meant that shell scripts in Windows had to be written in the DOSShell syntax for collecting and executing command groupings.

Over time, Microsoft realized its folly and decided systems administrators should have better ways to manage Windows systems. Windows Script Host (WSH) was introduced in Windows 98, providing a native scripting solution with access to the underpinnings of Windows. It was a library that allowed scripting languages to use Windows in a powerful and efficient manner. WSH is not its own language, however, so a WSH-compliant scripting language was required to take advantage of it, such as JScript, VBScript, Perl, Python, Kixstart, or Object REXX. Some of these languages are quite powerful in performing complex processing, so WSH seemed like a blessing to Windows systems administrators.

However, the rejoicing was short-lived because there was no guarantee that the WSH-compliant scripting language you chose would be readily available or a viable option for everyone. The lack of a standard language and environment for writing scripts made it difficult for users and administrators to incorporate automation by using WSH. The only way to be sure the scripting language or WSH version would be compatible on the system being managed was to use a native scripting language, which meant using DOSShell and enduring the problems that accompanied it. In addition, WSH opened a large attack vector for malicious code to run on Windows systems. This vulnerability gave rise to a stream of viruses, worms, and other malicious programs that have wreaked havoc on computer systems, thanks to WSH’s focus on automation without user intervention.

The end result was that systems administrators viewed WSH as both a blessing and a curse. Although WSH presented a good object model and access to a number of automation interfaces, it wasn’t a shell. It required using Wscript.exe and Cscript.exe, scripts had to be written in a compatible scripting language, and its attack vulnerabilities posed a security challenge. Clearly, a different approach was needed for systems management; over time, Microsoft reached the same conclusion.

Introduction to PowerShell

The introduction of WSH as a standard in the Windows operating system offered a robust alternative to DOSShell scripting. Unfortunately, WSH presented a number of challenges, discussed in the preceding section. Furthermore, WSH didn’t offer the CLI shell experience that UNIX and Linux administrators had enjoyed for years, resulting in Windows administrators being made fun of by the other chaps for the lack of a CLI shell and its benefits.

Luckily, Jeffrey Snover (the architect of PowerShell) and others on the PowerShell team realized that Windows needed a strong, secure, and robust CLI shell for systems management. Enter PowerShell. PowerShell was designed as a shell with full access to the underpinnings of Windows via the .NET Framework, Component Object Model (COM) objects, and other methods. It also provided an execution environment that’s familiar, easy, and secure. PowerShell is aptly named, as it puts the power into the Windows shell. For users wanting to automate their Windows systems, the introduction of PowerShell was exciting because it combined “the power of WSH with the warm-fuzzy familiarity of a CLI shell.”

PowerShell provides a powerful native scripting language, so scripts can be ported to all Windows systems without worrying about whether a particular language interpreter is installed. In the past, an administrator might have gone through the rigmarole of scripting a solution with WSH in Perl, Python, VBScript, JScript, or another language, only to find that the next system that they worked on didn’t have that interpreter installed. At home, users can put whatever they want on their systems and maintain them however they see fit, but in a workplace, that option isn’t always viable. PowerShell solves that problem by removing the need for nonnative interpreters. It also solves the problem of wading through websites to find command-line equivalents for simple GUI shell operations and coding them into .cmd files. Last, PowerShell addresses the WSH security problem by providing a platform for secure Windows scripting. It focuses on security features such as script signing, lack of executable extensions, and execution policies (which are restricted by default).

For anyone who needs to automate administration tasks on a Windows system or a Microsoft platform, PowerShell provides a much-needed injection of power. As such, for Windows systems administrators or scripters, becoming a PowerShell expert is highly recommended. After all, PowerShell can now be used to efficiently automate management tasks for Windows, Active Directory, Terminal Services, SQL Server, Exchange Server, Internet Information Services (IIS), and even a number of different third-party products.

As such, PowerShell is the approach Microsoft had been seeking as the automation and management interface for their products. Thus, PowerShell is now the endorsed solution for the management of Windows-based systems and server products. Over time, PowerShell could even possibly replace the current management interfaces, such as cmd.exe, WSH, CLI tools, and so on, while becoming even further integrated into the Windows operating system. The trend toward this direction can be seen with the release of Windows Server 2008 R2 and Windows 7, in which PowerShell is part of the operating system.

PowerShell Uses

In Windows, an administrator can complete a number of tasks using PowerShell. The following list is a sampling of these tasks:

  • Manage the file system — To create, delete, modify, and set permissions for files and folders.
  • Manage services — To list, stop, start, restart, and even modify services.
  • Manage processes — To list (monitor), stop, and start processes.
  • Manage the Registry — To list, create, delete, and modify Registry keys and values.
  • Use Windows Management Instrumentation (WMI) — To manage not only Windows, but also other platforms such as IIS and Terminal Services.
  • Use existing Component Object Model (COM) objects — To complete a wide range of automation tasks.
  • Manage a number of Windows roles and features — To add or remove roles and features.
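Each of the tasks above maps onto short one-line commands. The following sketch illustrates a few of them; the service and process names (Spooler, notepad) are merely illustrative:

```powershell
# List all stopped services (service management)
Get-Service | Where-Object { $_.Status -eq "Stopped" }

# Restart a service by name
Restart-Service -Name Spooler

# Monitor processes and stop one by name (process management)
Get-Process
Stop-Process -Name notepad

# Navigate the Registry like a file system (Registry management)
Set-Location HKLM:\SOFTWARE
Get-ChildItem

# Query WMI for operating system information
Get-WmiObject -Class Win32_OperatingSystem
```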

PowerShell Features

PowerShell is a departure from the previous management interfaces in Windows. As such, it has been built from the ground up to include a number of features that make CLI and script-based administration easier. Some of PowerShell’s key features are as follows:

  • It has 240 built-in command-line tools (referred to as cmdlets).
  • The scripting language is designed to be readable and easy to use.
  • PowerShell supports existing scripts, command-line tools, and automation interfaces, such as WMI, ADSI, .NET Framework, ActiveX Data Objects (ADO), and so on.
  • It follows a strict naming convention for commands based on a verb-noun format.
  • It supports a number of different Windows operating systems: Windows XP SP2 or later, Windows Server 2003 SP1 or later, Windows Vista, Windows Server 2008, and now Windows Server 2008 R2 and Windows 7.
  • It provides direct “access to and navigation of” the Windows Registry, certificate store, and file system using a common set of commands.
  • PowerShell is object based, which allows data (objects) to be piped between commands.
  • It is extensible, which allows third parties (as noted earlier) to build upon and extend PowerShell’s already rich interfaces for managing Windows and other Microsoft platforms.

PowerShell 2.0 Enhancements

Windows Server 2008 R2 has the Windows PowerShell 2.0 version built in to the operating system. In this version of PowerShell, a number of enhancements have been made both to PowerShell itself and to the ability to manage Windows Server 2008 R2’s roles and features. The following is a summary of some of the improvements in PowerShell 2.0 (these features are discussed in greater detail later in this chapter and throughout this book):

  • The number of built-in cmdlets has nearly doubled from 130 to 240.
  • PowerShell 2.0 now includes the ability to manage a number of roles and features such as the Active Directory Domain Services, Active Directory Rights Management Services, AppLocker, Background Intelligent Transfer Service [BITS], Best Practices Analyzer, Failover Clustering [WSFC], Group Policy, Internet Information Services [IIS], Network Load Balancing [NLB], Remote Desktop Services [RDS], Server Manager, Server Migration, and Windows Diagnostics roles and features.
  • PowerShell 2.0 also includes the introduction of the Windows PowerShell debugger. Using this feature, an administrator can identify errors or inefficiencies in scripts, functions, commands, and expressions while they are being executed through a set of debugging cmdlets or the Integrated Scripting Environment (ISE).
  • The PowerShell Integrated Scripting Environment (ISE) is a multi-tabbed GUI-based PowerShell development interface. Using the ISE, an administrator can write, test, and debug scripts. The ISE includes such features as multiline editing, tab completion, syntax coloring, selective execution, context-sensitive help, and support for right-to-left languages.
  • Background jobs enable administrators to execute commands and scripts asynchronously.
  • Also through the inclusion of script functions, administrators can now create their own cmdlets without having to write and compile the cmdlet using a managed-code language like C#.
  • PowerShell 2.0 also includes a new powerful feature, called modules, which allows packages of cmdlets, providers, functions, variables, and aliases to be bundled and then easily shared with others.
  • The lack of remote command support has also been addressed in PowerShell 2.0 with the introduction of remoting. This feature enables an administrator to automate the management of many remote systems through a single PowerShell console.
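Two of the enhancements above, background jobs and remoting, can be sketched in a few lines. The computer name below is a hypothetical placeholder, and remoting must already be enabled on the target system:

```powershell
# Run a command asynchronously as a background job
$job = Start-Job -ScriptBlock { Get-EventLog -LogName System -Newest 100 }
Wait-Job $job
Receive-Job $job

# Run a command on a remote system; "server01" is a hypothetical name
Invoke-Command -ComputerName server01 -ScriptBlock { Get-Service }
```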

However, with all of these features, the most important advancement that is found in PowerShell 2.0 is the focus on what is called the Universal Code Execution model. The core concept in this model is flexibility over how expressions, commands, and scriptblocks are executed across one or more machines.

Understanding the PowerShell Basics

To begin working with PowerShell, this section covers the basics: accessing PowerShell, working from the command-line interface, and understanding the basic commands.

Accessing PowerShell

After logging in to your Windows interactive session, there are several methods to access and use PowerShell. The first method is from the Start menu, as shown in the following steps:

  1. Click Start, All Programs, Accessories, Windows PowerShell.
  2. Choose either Windows PowerShell (x86) or Windows PowerShell.

To use the second method, follow these steps:

  1. Click Start.
  2. Type PowerShell in the Search Programs and Files text box and press Enter.

Both these methods open the PowerShell console, whereas the third method launches PowerShell from a cmd command prompt:

  1. Click Start, Run.
  2. Type cmd and click OK to open a cmd command prompt.
  3. At the command prompt, type powershell and press Enter.

Command-Line Interface (CLI)

The syntax for using PowerShell from the CLI is similar to the syntax for other CLI shells. The fundamental component of a PowerShell command is, of course, the name of the command to be executed. In addition, the command can be made more specific by using parameters and arguments for parameters. Therefore, a PowerShell command can have the following formats:

  • [command name]
  • [command name] -[parameter]
  • [command name] -[parameter] -[parameter] [argument1]
  • [command name] -[parameter] -[parameter] [argument1],[argument2]

When using PowerShell, a parameter is a variable that can be accepted by a command, script, or function. An argument is a value assigned to a parameter. Although these terms are often used interchangeably, remembering these definitions is helpful when discussing their use in PowerShell.
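To make the distinction concrete, the following commands illustrate each of the formats listed above; the process names are merely illustrative:

```powershell
# [command name]
Get-Process

# [command name] -[parameter]  (a switch parameter takes no argument)
Get-ChildItem -Recurse

# [command name] -[parameter] [argument1]
Get-Process -Name lsass

# [command name] -[parameter] [argument1],[argument2]
Get-Process -Name lsass,svchost
```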

Navigating the CLI

As with all CLI-based shells, an understanding is needed in how to effectively navigate and use the PowerShell CLI. Table 21.1 lists the editing operations associated with various keys when using the PowerShell console.

TABLE 21.1 PowerShell Console Editing Features

Keys Editing Operation
Left and right arrows Move the cursor left and right through the current command line.
Up and down arrows Moves up and down through the list of recently typed commands.
PgUp Displays the first command in the command history.
PgDn Displays the last command in the command history.
Home Moves the cursor to the beginning of the command line.
End Moves the cursor to the end of the command line.
Insert Switches between insert and overstrike text-entry modes.
Delete Deletes the character at the current cursor position.
Backspace Deletes the character immediately preceding the current cursor position.
F3 Displays the previous command.
F4 Deletes characters from the current cursor position up to a specified character.
F5 Moves backward through the command history.
F7 Displays a list of recently typed commands in a pop-up window in the command shell. Use the up and down arrows to select a previously typed command, and then press Enter to execute the selected command.
F8 Moves backward through the command history with commands that match the text that has been entered at the command prompt.
F9 Prompts for a command number and executes the specified command from the command history (command numbers refer to the F7 command list).
Tab Auto-completes command-line sequences. Use the Shift+Tab sequence to move backward through a list of potential matches.

Luckily, most of the features in Table 21.1 are native to the cmd command prompt, which makes PowerShell adoption easier for administrators already familiar with the Windows command line. The only major difference is that the Tab key auto-completion is enhanced in PowerShell beyond what’s available with the cmd command prompt.

As with the cmd command prompt, PowerShell performs auto-completion for file and directory names. So, if you enter a partial file or directory name and press Tab, PowerShell returns the first matching file or directory name in the current directory. Pressing Tab again returns a second possible match and enables you to cycle through the list of results. Like the cmd command prompt, PowerShell’s Tab key auto-completion can also autocomplete with wildcards. The difference between Tab key auto-completion in cmd and PowerShell is that PowerShell can auto-complete commands. For example, you can enter a partial command name and press the Tab key, and PowerShell steps through a list of possible command matches.

PowerShell can also auto-complete parameter names associated with a particular command. Simply enter a command and partial parameter name and press the Tab key, and PowerShell cycles through the parameters for the command that has been specified. This method also works for variables associated with a command. In addition, PowerShell performs auto-completion for methods and properties of variables and objects.

Command Types

When a command is executed in PowerShell, the command interpreter looks at the command name to figure out what task to perform. This process includes determining the type of command and how to process that command. There are four types of PowerShell commands: cmdlets, shell function commands, script commands, and native commands.

Cmdlets

The first command type is a cmdlet (pronounced “command-let”), which is similar to the built-in commands in other CLI-based shells. The difference is that cmdlets are implemented by using .NET classes compiled into a dynamic link library (DLL) and loaded into PowerShell at runtime. This difference means there’s no fixed class of built-in cmdlets; anyone can use the PowerShell Software Developers Kit (SDK) to write a custom cmdlet, thus extending PowerShell’s functionality.

A cmdlet is always named as a verb and noun pair separated by a “-” (hyphen). The verb specifies the action the cmdlet performs, and the noun specifies the object being operated on. An example of a cmdlet being executed is shown as follows:

PS C:\> Get-Process

Handles  NPM(K)  PM(K)  WS(K)  VM(M)  CPU(s)    Id  ProcessName
-------  ------  -----  -----  -----  ------    --  -----------
    425       5   1608   1736     90    3.09   428  csrss
     79       4   1292    540     86    1.00   468  csrss
    193       4   2540   6528     94    2.16  2316  csrss
     66       3   1128   3736     34    0.06  3192  dwm
    412      11  13636  20832    125    3.52  1408  explorer


While executing cmdlets in PowerShell, you should take a couple of considerations into account. Overall, PowerShell was created such that it is both forgiving and easy when it comes to syntax. In addition, PowerShell also always attempts to fill in the blanks for a user. Examples of this are illustrated in the following items:

  • Cmdlets are always structured in a nonplural verb-noun format.
  • Parameters and arguments are positional: Get-Process winword.
  • Many arguments can use wildcards: Get-Process w*.
  • Partial parameter names are also allowed: Get-Process -P w*.

When executed, a cmdlet only processes a single record at a time.
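The one-record-at-a-time behavior can be observed by piping into ForEach-Object, which receives each object individually as it streams down the pipeline rather than as one buffered collection:

```powershell
# Each process object is handed to ForEach-Object as it is emitted
Get-Process | ForEach-Object { Write-Host $_.Name }
```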

Functions

The next type of command is a function. These commands provide a way to assign a name to a list of commands. Functions are similar to subroutines and procedures in other programming languages. The main difference between a script and a function is that a new instance of the shell is started for each shell script, and functions run in the current instance of the same shell.

Functions defined at the command line remain in effect only during the current PowerShell session. They are also local in scope and don't apply to new PowerShell sessions.

Although a function defined at the command line is a useful way to create a series of commands dynamically in the PowerShell environment, these functions reside only in memory and are erased when PowerShell is closed and restarted. Therefore, although creating complex functions dynamically is possible, writing these functions as script commands might be more practical. An example of a shell function command is as follows:

PS C:\> function showFiles {Get-ChildItem}
PS C:\> showFiles

Directory: Microsoft.PowerShell.Core\FileSystem::C:\

Mode          LastWriteTime        Length  Name
----          -------------        ------  ----
d----         9/4/2007 10:36 PM            inetpub
d----         4/17/2007 11:02 PM           PerfLogs
d-r--         9/5/2007 12:19 AM            Program Files
d-r--         9/5/2007 11:01 PM            Users
d----         9/14/2007 11:42 PM           Windows
-a---         3/26/2007 8:43 PM        24  autoexec.bat
-ar-s         8/13/2007 11:57 PM     8192  BOOTSECT.BAK
-a---         3/26/2007 8:43 PM        10  config.sys

Advanced Functions

Advanced functions are a new feature that was introduced in PowerShell v2.0. The basic premise behind advanced functions is to enable administrators and developers access to the same type of functionality as a compiled cmdlet, but directly through the PowerShell scripting language. An example of an advanced function is as follows:

function SuperFunction {
    <#
    .SYNOPSIS
        Superduper Advanced Function.
    .DESCRIPTION
        This is my Superduper Advanced Function.
    .PARAMETER Message
        Message to write.
    #>
    [CmdletBinding()]
    param(
        [Parameter(Position=0, Mandatory=$True, ValueFromPipeline=$True)]
        [String] $Message
    )
    Write-Host $Message
}
In the previous example, you will see that one of the major identifying aspects of an advanced function is the use of the CmdletBinding attribute. Usage of this attribute in an advanced function allows PowerShell to bind the parameters in the same manner that it binds parameters in a compiled cmdlet. For the SuperFunction example, CmdletBinding is used to define the $Message parameter with position 0, as mandatory, and is able to accept values from the pipeline. For example, the following shows the SuperFunction being executed, which then prompts for a message string. That message string is then written to the console:

PS C:\Users\tyson> SuperFunction

cmdlet SuperFunction at command pipeline position 1
Supply values for the following parameters:
Message: yo!

Finally, advanced functions can also use all of the methods and properties of the PSCmdlet class, for example:

  • Usage of all the input processing methods (Begin, Process, and End)
  • Usage of the ShouldProcess and ShouldContinue methods, which can be used to get user feedback before performing an action
  • Usage of the ThrowTerminatingError method, which can be used to generate error records
  • Usage of the various Write methods
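A minimal sketch of the input processing methods in an advanced function follows; the function name is hypothetical:

```powershell
function Measure-Items {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline=$True)]
        $InputObject
    )
    Begin   { $count = 0 }           # runs once, before any pipeline input
    Process { $count++ }             # runs once per pipeline object
    End     { Write-Output $count }  # runs once, after all input is consumed
}

# Counts the objects piped into the function
Get-Process | Measure-Items
```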

Scripts

Scripts, the third command type, are PowerShell commands stored in a .ps1 file. The main difference from functions is that scripts are stored on disk and can be accessed any time, unlike functions that don't persist across PowerShell sessions.

Scripts can be run in a PowerShell session or at the cmd command prompt. To run a script in a PowerShell session, type the script name without the extension. The script name can be followed by any parameters. The shell then executes the first .ps1 file matching the typed name in any of the paths located in the PowerShell $ENV:PATH variable.

To run a PowerShell script from a cmd command prompt, first use the CD command to change to the directory where the script is located. Then run the PowerShell executable with the -Command parameter, specifying the script to run, as shown here:

C:\Scripts>powershell -command .\myscript.ps1

If you don't want to change to the script's directory with the cd command, you can also run it by using an absolute path, as shown in this example:

C:\>powershell -command C:\Scripts\myscript.ps1

An important detail about scripts in PowerShell concerns their default security restrictions. By default, scripts are not enabled to run as a method of protection against malicious scripts. You can control this policy with the Set-ExecutionPolicy cmdlet, which is explained later in this chapter.
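Checking and relaxing the policy is a two-command operation; changing the machine-wide setting requires an elevated session:

```powershell
# Show the current policy (Restricted by default)
Get-ExecutionPolicy

# Allow locally written scripts; downloaded scripts must be signed
Set-ExecutionPolicy RemoteSigned
```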

Native Commands

The last type of command, a native command, consists of external programs that the operating system can run. Because a new process must be created to run native commands, they are less efficient than other types of PowerShell commands. Native commands also have their own parameters for processing commands, which are usually different from PowerShell parameters.

.NET Framework Integration

Most shells operate in a text-based environment, which means you typically have to manipulate the output for automation purposes. For example, if you need to pipe data from one command to the next, the output from the first command usually must be reformatted to meet the second command's requirements. Although this method has worked for years, dealing with text-based data can be difficult and frustrating.

Often, a lot of work is necessary to transform text data into a usable format. Microsoft has set out to change the standard with PowerShell, however. Instead of transporting data as plain text, PowerShell retrieves data in the form of .NET Framework objects, which makes it possible for commands (or cmdlets) to access object properties and methods directly. This change has simplified shell use. Instead of modifying text data, you can just refer to the required data by name. Similarly, instead of writing code to transform data into a usable format, you can simply refer to objects and manipulate them as needed.
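For example, object properties flow between commands with no text parsing at all; the working-set threshold below is an arbitrary illustration:

```powershell
# Filter and sort on object properties rather than parsed text columns
Get-Process |
    Where-Object { $_.WS -gt 20MB } |
    Sort-Object CPU -Descending |
    Select-Object Name, Id, CPU, WS
```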


Reflection is a feature in the .NET Framework that enables developers to examine objects and retrieve their supported methods, properties, fields, and so on. Because PowerShell is built on the .NET Framework, it provides this feature, too, with the Get-Member cmdlet. This cmdlet analyzes an object or collection of objects you pass to it via the pipeline. For example, the following command analyzes the objects returned from the Get-Process cmdlet and displays their associated properties and methods:

PS C:\> Get-Process | Get-Member

Developers often refer to this process as "interrogating" an object. This method of accessing and retrieving information about an object can be very useful for understanding its methods and properties without referring to MSDN documentation or searching the Internet.

Extended Type System (ETS)

You might think that scripting in PowerShell is typeless because you rarely need to specify the type for a variable. PowerShell is actually type driven, however, because it interfaces with different types of objects from the less-than-perfect .NET to Windows Management Instrumentation (WMI), Component Object Model (COM), ActiveX Data Objects (ADO), Active Directory Service Interfaces (ADSI), Extensible Markup Language (XML), and even custom objects. However, you don't need to be concerned about object types because PowerShell adapts to different object types and displays its interpretation of an object for you.

In a sense, PowerShell tries to provide a common abstraction layer that makes all object interaction consistent, despite the type. This abstraction layer is called the PSObject, a common object used for all object access in PowerShell. It can encapsulate any base object (.NET, custom, and so on), any instance members, and implicit or explicit access to adapted and type-based extended members, depending on the type of base object.

Furthermore, it can state its type and add members dynamically. To do this, PowerShell uses the Extended Type System (ETS), which provides an interface that allows PowerShell cmdlet and script developers to manipulate and change objects as needed.

When you use the Get-Member cmdlet, the information returned is from PSObject. Sometimes PSObject blocks members, methods, and properties from the original object. If you want to view the blocked information, use the BaseObject property with the PSBase standard name. For example, you could use the $Procs.PSBase | Get-Member command to view blocked information for the $Procs object collection.

Needless to say, this topic is fairly advanced, as PSBase is hidden from view. The only time you should need to use it is when the PSObject doesn't interpret an object correctly or you're digging around for hidden jewels in PowerShell.

Static Classes and Methods

Certain .NET Framework classes cannot be used to create new objects. For example, if you try to create a System.Math typed object using the New-Object cmdlet, the following error occurs:

PS C:\> New-Object System.Math
New-Object : Constructor not found. Cannot find an appropriate constructor for type System.Math.
At line:1 char:11
+ New-Object <<<< System.Math

    + CategoryInfo : ObjectNotFound: (:) [New-Object], PSArgumentException
    + FullyQualifiedErrorId : CannotFindAppropriateCtor,Microsoft.PowerShell.Commands.NewObjectCommand

PS C:\>

This error occurs because the System.Math class contains only static members, which are shared across all instances of a class and don't require a typed object to be created before they are used. Instead, static members are accessed simply by referring to the class name as if it were the name of the object, followed by the static operator (::), as follows:

PS C:\> [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()

In the previous example, the DirectoryServices.ActiveDirectory.Forest class is used to retrieve information about the current forest. To complete this task, the class name is enclosed within the two square brackets ([…]). Then, the GetCurrentForest method is invoked by using the static operator (::).

To retrieve a list of static members for a class, use the Get-Member cmdlet: Get-Member -InputObject ([System.String]) -Static.
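Static methods and properties work the same way for any .NET class, for example:

```powershell
# Call static members with the :: operator; no object is created
[System.Math]::Sqrt(144)              # returns 12
[System.Math]::Pi                     # returns 3.14159265358979
[System.String]::IsNullOrEmpty("")    # returns True
```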

Type Accelerators

A type accelerator is simply an alias for specifying a .NET type. Without a type accelerator, defining a variable type requires entering a fully qualified class name, as shown here:

PS C:\> $User = [System.DirectoryServices.DirectoryEntry]"LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com"
PS C:\> $User

distinguishedname : {CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com}
path              : LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com

PS C:\>

Instead of typing the entire class name, you just use the [ADSI] type accelerator to define the variable type, as in the following example:

PS C:\> $User = [ADSI]"LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com"
PS C:\> $User

distinguishedname : {CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com}
path              : LDAP://CN=Fujio Saitoh,OU=Accounts,OU=Managed Objects,DC=companyabc,DC=com

PS C:\>

Type accelerators have been included in PowerShell mainly to cut down on the amount of typing to define an object type. However, for some reason, type accelerators aren’t covered in the PowerShell documentation, even though the [WMI], [ADSI], and other common type accelerators are referenced on many web blogs.

Regardless of the lack of documentation, type accelerators are a fairly useful feature of PowerShell. Table 21.2 lists some of the more commonly used type accelerators.
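As a quick illustration using two accelerators from the table, the sketch below parses a small XML fragment with [xml] and matches text with [regex]; each line is equivalent to spelling out the full .NET type name (the sample data is made up):

```powershell
# [xml] is shorthand for System.Xml.XmlDocument
$doc = [xml]'<servers><server name="core" /></servers>'
$doc.servers.server.name                   # returns "core"

# [regex] is shorthand for System.Text.RegularExpressions.Regex
([regex]'\d+').Match('Server2008').Value   # returns "2008"
```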

TABLE 21.2 Important Type Accelerators in PowerShell

Name            Type
Int             System.Int32
Long            System.Int64
String          System.String
Char            System.Char
Byte            System.Byte
Double          System.Double
Decimal         System.Decimal
Float           System.Single
Single          System.Single
Regex           System.Text.RegularExpressions.Regex
Array           System.Array
Xml             System.Xml.XmlDocument
Scriptblock     System.Management.Automation.ScriptBlock
Switch          System.Management.Automation.SwitchParameter
Hashtable       System.Collections.Hashtable
Type            System.Type
Ref             System.Management.Automation.PSReference
Psobject        System.Management.Automation.PSObject
Pscustomobject  System.Management.Automation.PSCustomObject
Psmoduleinfo    System.Management.Automation.PSModuleInfo
Powershell      System.Management.Automation.PowerShell
Runspacefactory System.Management.Automation.Runspaces.RunspaceFactory
Runspace        System.Management.Automation.Runspaces.Runspace
Ipaddress       System.Net.IPAddress
Wmi             System.Management.ManagementObject
Wmisearcher     System.Management.ManagementObjectSearcher
Wmiclass        System.Management.ManagementClass
Adsi            System.DirectoryServices.DirectoryEntry
Adsisearcher    System.DirectoryServices.DirectorySearcher

The Pipeline

In most shells, data has traditionally been transferred from one command to the next by using the pipeline, which makes it possible to string a series of commands together to gather information from a system. However, as mentioned previously, most shells have a major disadvantage: the information gathered from commands is text based. Raw text needs to be parsed (transformed) into a format the next command can understand before being piped.

The point is that although most UNIX and Linux shell commands are powerful, using them can be complicated and frustrating. Because these shells are text based, commands often lack functionality or require additional commands or tools to perform tasks. To address the differences in text output from shell commands, many utilities and scripting languages have been developed to parse text.

The result of all this parsing is a tree of commands and tools that makes working with shells unwieldy and time consuming, which is one reason for the proliferation of management interfaces that rely on GUIs. This trend can be seen among the tools Windows administrators use, too, as Microsoft has focused on enhancing the management GUI at the expense of the CLI.

With PowerShell, Windows administrators finally have access to the same automation capabilities as their UNIX and Linux counterparts. PowerShell and its use of objects fill the automation need Windows administrators have had since the days of batch scripting and WSH, in a more usable and less parsing-intensive manner. To see how the PowerShell pipeline works, take a look at the following PowerShell example:

PS C:\> get-process powershell | format-table id -autosize

All pipelines end with the Out-Default cmdlet. This cmdlet selects a set of properties and their values and then displays those values in a list or table.
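Because objects, not text, travel down the pipeline, each downstream cmdlet can filter, sort, and format on typed properties directly, with no parsing step in between. A short sketch:

```powershell
# Each Process object flows to the next cmdlet intact; Handles and WS are
# typed properties on the object, not columns of text to be re-parsed
Get-Process |
    Where-Object { $_.Handles -gt 500 } |
    Sort-Object WS -Descending |
    Format-Table Name, Id, Handles -AutoSize
```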

Modules and Snap-Ins

One of the main design goals behind PowerShell was to make extending the default functionality in PowerShell and sharing those extensions easy enough that anyone could do it. In PowerShell 1.0, part of this design goal was realized through the use of snap-ins.

PowerShell snap-ins (PSSnapins) are dynamic-link library (DLL) files that can be used to provide access to additional cmdlets or providers. By default, a number of PSSnapins are loaded into every PowerShell session. These default sets of PSSnapins contain the built-in cmdlets and providers that are used by PowerShell. You can display a list of these snap-ins by entering the command Get-PSSnapin at the PowerShell command prompt, as follows:

PS C:\> get-pssnapin

Name : Microsoft.PowerShell.Core
PSVersion : 2.0
Description : This Windows PowerShell snap-in contains Windows PowerShell management cmdlets used to manage components of Windows PowerShell.

Name : Microsoft.PowerShell.Host
PSVersion : 2.0
Description : This Windows PowerShell snap-in contains cmdlets used by the Windows
PowerShell host.

PS C:\>

In theory, PowerShell snap-ins were a great way to share and reuse a set of cmdlets and providers. However, snap-ins by definition must be written and then compiled, which often placed snap-in creation out of reach for many IT professionals. Additionally, snap-ins can conflict, which meant that attempting to run a set of snap-ins within the same PowerShell session might not always be feasible.

That is why in PowerShell 2.0, the product team decided to introduce a new feature, called modules, which are designed to make extending PowerShell and sharing those extensions significantly easier. In its simplest form, a module is just a collection of items that can be used in a PowerShell session. These items can be cmdlets, providers, functions, aliases, utilities, and so on. The intent with modules, however, was to allow “anyone” (developers and administrators) to take and bundle together a collection of items. These items can then be executed in a self-contained context, which will not affect the state outside of the module, thus increasing portability when being shared across disparate environments.
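To illustrate how lightweight modules are compared with compiled snap-ins, the sketch below builds a throwaway dynamic module entirely in-session; the module and function names are made up for illustration:

```powershell
# Define a dynamic module -- nothing is compiled and no files are written
New-Module -Name DemoTools -ScriptBlock {
    function Get-Greeting {
        param([string]$Name = 'World')
        "Hello, $Name"
    }
    Export-ModuleMember -Function Get-Greeting
} | Import-Module

Get-Greeting -Name 'PowerShell'   # returns "Hello, PowerShell"
Get-Module DemoTools              # confirms the module is loaded
```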

Posted in Powershell | Tagged: , , , , , , | Leave a Comment »

How to use WMI with Windows PowerShell scripts

Posted by Alin D on January 24, 2011

Windows Management Instrumentation (WMI) is one of those tools that can change your proverbial life. But while it’s been around since the early 90s, the adoption of WMI has been slowed due to its complicated nature. Windows PowerShell has torn down this wall by making WMI much easier to use in a way that’s changing the face of IT.

Before we dive into exactly how PowerShell simplifies WMI, let’s take a look at what exactly WMI is. In the simplest terms, you can think of WMI as a library filled with resources that provide all sorts of data in a consistent and reliable format.

Wikipedia explains the purpose of WMI as a way to “define a non-proprietary set of environment-independent specifications [that] allow management information to be shared between management applications.” That’s a pretty abstract explanation, and while WMI may have started as an attempt at “environment independence,” things have changed, starting with what’s considered these days to be WMI. When you hear about WMI today, it’s generally used in the context of Microsoft’s implementation of WMI via built-in providers. That’s what we’ll focus on here.

WMI can be broken down into three basic components:

Provider — grants access to managed objects and provides availability to the WMI API
Classes — a WMI representation of objects with properties and methods
Namespace — a logical grouping of classes

So how can PowerShell help make WMI easier to access?

First, let’s look at the tools PowerShell offers for WMI. There are basically five PowerShell cmdlets that make working with WMI a breeze. I will list all of them here, but I’m really only going to focus on one of them (Get-WMIObject):

Get-WmiObject — returns object(s) based on the namespace and class provided

Invoke-WmiMethod — calls WMI methods (commonly used to execute static methods)

Register-WmiEvent — used to subscribe to WMI events

Remove-WmiObject — deletes an instance of an existing WMI class (to be clear, it doesn’t delete the actual class itself, but the instance of the class you have in memory)

Set-WmiInstance — creates or updates an instance of an existing WMI class (use this one with caution as it actually writes to the WMI repository)
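For the cmdlets not covered in detail below, here are minimal hedged sketches; Win32_Process and Win32_ProcessStartTrace are real classes, but notepad.exe is only an illustrative target, and the destructive lines should be run with care:

```powershell
# Invoke-WmiMethod: call the static Create method of Win32_Process
Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList 'notepad.exe'

# Remove-WmiObject: delete the running instances (not the class itself)
Get-WmiObject Win32_Process -Filter "Name='notepad.exe'" | Remove-WmiObject

# Register-WmiEvent: subscribe to process-start events (requires elevation)
Register-WmiEvent -Class Win32_ProcessStartTrace -SourceIdentifier ProcStart
```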

Now let’s tackle the biggest problem with WMI, which is figuring out exactly what’s there and what data it provides. There are several ways to get this information, but let’s start with the built-in option.

You can list providers by doing the following:

$Root = "\\.\ROOT:__namespace"
$WMIProv = New-Object System.Management.ManagementClass -ArgumentList $Root
$WMIProv.GetInstances() | Select Name

(Granted, this is a little more complicated than the rest of the tasks. Fortunately, you shouldn’t have to do this often.)

Here is how you list the classes provided by a specific namespace (default is Root\CIMV2, which has all the Microsoft Win32 classes):

# On local machine
Get-WmiObject -Namespace Root\SecurityCenter -List
# On remote machine
Get-WmiObject -Namespace Root\SecurityCenter -List -Computer core
# To filter you can use wildcards
Get-WmiObject -Namespace Root\SecurityCenter -List
# To list the classes for HyperV on remote server
Get-WmiObject -Namespace Root\Virtualization -List -Computer core

(This is the hard way, but you can cheat by using free tools like WMI Explorer or Microsoft’s PowerShell Scriptomatic.)

Now it’s time to get down and dirty with Get-WMIObject, which is by far the most useful of the five cmdlets. With this in your toolbox you are only one line away from almost any piece of data (Microsoft OS-related) you can think of. There are over 600 Win32 classes that expose things like CPU, memory, disk, processes, network, BIOS, USB and more. Excited? Just wait until you see how simple it is.

To get operating system info:
Get-WmiObject -Class win32_OperatingSystem

To get computer system info:
Get-WmiObject -Class win32_ComputerSystem

To get disk info:
Get-WmiObject -Class Win32_LogicalDisk

To get network info:
Get-WmiObject -Class Win32_NetworkAdapterConfiguration

Just give it a try – it really is that easy.
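Because each call returns rich objects, you can shape the output immediately. As a hedged example, this one-liner turns Win32_LogicalDisk into a free-space report; DeviceID, Size, and FreeSpace are standard properties of that class:

```powershell
# DriveType=3 limits the query to local fixed disks; calculated
# properties convert the byte counts to gigabytes on the fly
Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" |
    Select-Object DeviceID,
        @{Name='SizeGB'; Expression={[math]::Round($_.Size / 1GB, 1)}},
        @{Name='FreeGB'; Expression={[math]::Round($_.FreeSpace / 1GB, 1)}}
```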

Let’s finish up by looking at an example script using WMI to get IP information. The useful script below replaces ipconfig and its usually awful output.

function Get-IP {
    param (
        [Parameter(ValueFromPipeline=$true)]
        [string]$ComputerName = $Env:COMPUTERNAME
    )
    $NICs = Get-WmiObject Win32_NetworkAdapterConfiguration `
        -Filter "IPEnabled='$True'" -ComputerName $ComputerName
    foreach ($Nic in $NICs) {
        $myobj = @{
            Name          = $Nic.Description
            MacAddress    = $Nic.MACAddress
            IP4           = $Nic.IPAddress | where {$_ -match "\."}
            IP6           = $Nic.IPAddress | where {$_ -match "::"}
            IP4Subnet     = $Nic.IPSubnet | where {$_ -match "\."}
            DefaultGWY    = $Nic.DefaultIPGateway | Select -First 1
            DNSServer     = $Nic.DNSServerSearchOrder
            WINSPrimary   = $Nic.WINSPrimaryServer
            WINSSecondary = $Nic.WINSSecondaryServer
        }
        $obj = New-Object PSObject -Property $myobj
        $obj
    }
}
You can find more general information about WMI at the Microsoft website, along with a WMI glossary of terms and Win32 class list.

Posted in Powershell | Tagged: , , , , , , | Leave a Comment »

PowerShell Cmdlets for SharePoint

Posted by Alin D on January 19, 2011

The set of cmdlets which come with PowerShell is restricted to generic cmdlets and those intended for managing aspects of the Windows Server OS. If you are unfamiliar with using PowerShell cmdlets please check out PowerShell Cmdlets Tutorial first.

For technologies such as SharePoint, PowerShell uses snap-ins, which are .NET Framework assemblies that contain custom PowerShell cmdlets. The SharePoint 2010 snap-in for PowerShell contains more than 500 cmdlets which can be used to perform a wide variety of SharePoint admin tasks. This PowerShell SharePoint snap-in is loaded automatically when the SharePoint 2010 Management Shell is run. If you start the standard PowerShell console, you will need to manually load the snap-in to access the SharePoint cmdlets. Two native PowerShell cmdlets can assist with this: the Get-PSSnapin cmdlet retrieves info about all the snap-ins registered in the system, and the Add-PSSnapin cmdlet actually loads the snap-ins into the current PowerShell session.

The below example uses the Get-PSSnapin cmdlet with the switch parameter Registered to return the name of the SharePoint 2010 snap-in:

PS > Get-PSSnapin -Registered
Name : Microsoft.SharePoint.PowerShell
PSVersion : 1.0
Description : Register all administration Cmdlets for Microsoft Share- Point Server

The below example shows how to add the snap-in using the Add-PSSnapin cmdlet:

PS > Add-PSSnapin Microsoft.SharePoint.PowerShell

Once the SharePoint snap-in has been added, you can access all the SharePoint cmdlets. The PowerShell console and the SharePoint 2010 Management Shell differ in how threads are created and subsequently used. The standard PowerShell console runs each pipeline (as demarcated by a press of the Enter key), function, or script on its own thread; in contrast, the SharePoint 2010 Management Shell runs each line, function, or script on a single thread. When using the SharePoint object model with PowerShell, running code on numerous different threads can result in memory leaks, whereas commands that run on the same thread have a lower chance of causing leaks. This is because several SharePoint objects still use unmanaged code.
The threading model that is used is determined by the ThreadOptions property value of each PowerShell runspace (every PowerShell console window is a runspace). The SharePoint 2010 Management Shell uses the ReuseThread option, which is set in the SharePoint.ps1 file executed every time the shell is started from the SharePoint 2010 menu group. The standard PowerShell console, however, does not have this option configured by default and therefore uses UseNewThread.
It is normally considered best practice to set the ThreadOptions property to ReuseThread when working with SharePoint from the PowerShell console. The below sample shows how to set the ThreadOptions property:

PS > $Host.Runspace.ThreadOptions = "ReuseThread"

To find SharePoint cmdlets, PowerShell includes a useful cmdlet named Get-Command, which returns basic info about cmdlets and other elements of PowerShell commands, such as functions, aliases, filters, scripts, and applications. All nouns of SharePoint cmdlets start with SP, so you can retrieve all SharePoint cmdlets just by using Get-Command's -Noun parameter followed by SP*:

PS > Get-Command -Noun SP*

The asterisk (*) performs a wildcard match, retrieving all cmdlets, aliases, functions, and so on where the noun starts with SP. You can narrow the results to a particular element type, for example returning only cmdlets, by using the CommandType parameter:

PS > (Get-Command -Name *-SP* -CommandType cmdlet).Count

PS > Get-Command -Noun SPSite

CommandType Name           Definition
----------- ----           ----------
Cmdlet      Get-SPSite     Get-SPSite [-Limit <String>] [-WebApplication...
Cmdlet      New-SPSite     New-SPSite [-Url] <String> [-Language <UInt32>] [-...
Cmdlet      Remove-SPSite  Remove-SPSite [-Identity] <SPSitePipeBind> [-Delet...
Cmdlet      Restore-SPSite Restore-SPSite [-Identity] <String> -Path <String>...
Cmdlet      Set-SPSite     Set-SPSite [-Identity] <SPSitePipeBind> [-OwnerAli...
The output from the command shows the cmdlets available for working with site collections. Looking at the verbs in these cmdlets, you will notice that they are self-describing. Get is used for getting info, Set is for modifying site collections, etc. You can go even deeper and get info on a specific cmdlet using Get-Command to retrieve information on the Get-SPSite cmdlet for example:

PS > Get-Command Get-SPSite
CommandType Name Definition
----------- ---- ----------
Cmdlet      Get-SPSite     Get-SPSite [-Limit <String>] [-WebApplication...

That should get you started with SharePoint Cmdlets, in future articles we will dig deeper into using PowerShell for SharePoint.

Posted in Powershell | Tagged: , , , | Leave a Comment »

