Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

How to manage Windows 2012 Networking with PowerShell

Posted by Alin D on December 3, 2012

In previous versions of Windows Server, such tasks usually had to be performed using a combination of GUI tools and various command-line utilities. But with the significantly increased Windows PowerShell capabilities built into Windows Server 2012, you can now perform most network administration tasks from the Windows PowerShell command line or by running Windows PowerShell scripts. This lesson demonstrates how to identify network components that have Windows PowerShell support and how to perform some common network-administration tasks using Windows PowerShell.

 Identifying networking cmdlets

In Windows Server 2012, there are now hundreds of Windows PowerShell cmdlets that can be used to view, configure, and monitor different networking components and services in the platform. The tasks you can perform using these cmdlets range from the common (such as configuring static IP addresses or DHCP reservations for servers) to more specialized (such as configuring quality-of-service parameters) to settings related to virtual environments (such as configuring the Hyper-V extensible switch). There is obviously too much to learn here in a single lesson or even a single book, and some tasks might be performed only occasionally or even not at all by many administrators. So let’s begin with a more practical approach to the problem of administering a Windows Server 2012 networking environment using Windows PowerShell by asking a simple question: How can you find the right cmdlet (if there is a cmdlet) to perform a particular networking task?

Using Get-Command

You could start by using the Get-Command cmdlet to search for all Windows PowerShell cmdlets and functions that have the string “net” in their names. This generates a lot of output, however, as shown here:

PS C:\> Get-Command *net*
CommandType Name ModuleName
----------- ---- ----------
Function Add-NetIPHttpsCertBinding NetworkTransition
Function Add-NetLbfoTeamMember NetLbfo
Function Add-NetLbfoTeamNic NetLbfo
Function Add-NetSwitchTeamMember NetSwitchTeam
Function Copy-NetFirewallRule NetSecurity
Function Copy-NetIPsecMainModeCryptoSet NetSecurity
Function Copy-NetIPsecMainModeRule NetSecurity
Function Copy-NetIPsecPhase1AuthSet NetSecurity
Function Copy-NetIPsecPhase2AuthSet NetSecurity
Function Copy-NetIPsecQuickModeCryptoSet NetSecurity
Function Copy-NetIPsecRule NetSecurity
Function Disable-NetAdapter NetAdapter
Function Disable-NetAdapterBinding NetAdapter
Function Disable-NetAdapterChecksumOffload NetAdapter
Function Disable-NetAdapterEncapsulatedPacketTaskOffload NetAdapter
Function Disable-NetAdapterIPsecOffload NetAdapter

From the preceding output, you can see there are several Windows PowerShell modules that perform network-related actions. To see this more clearly, the following pipeline takes the preceding output, sorts it by module name, and removes duplicates:

PS C:\> Get-Command *net* | Sort-Object ModuleName -Unique | Format-Table ModuleName -HideTableHeaders

To investigate the NetTCPIP module further, you can use the –Module parameter of Get-Command to list all cmdlets and functions contained in this module:
PS C:\> Get-Command -Module NetTCPIP | Sort-Object Name | Format-Table Name

Using Show-Command

At this point, you can begin using Get-Help to learn about the syntax of NetTCPIP cmdlets you’re interested in and to see some examples of their usage. Unfortunately, for administrators who are not that familiar with Windows PowerShell, the syntax displayed when you use Get-Help with a cmdlet can appear daunting. For example, consider a scenario where you have a web server running Windows Server 2012 and you want to add a second IP address to a network adapter on the server.

You might guess from the output of Get-Command -Module NetTCPIP shown previously that New-NetIPAddress is the cmdlet you use to perform this task, and you would be correct. But to the Windows PowerShell beginner, the syntax from Get-Help New-NetIPAddress might look quite confusing:

Parameter Set: ByInterfaceAlias
New-NetIPAddress -InterfaceAlias <String> [-AddressFamily <AddressFamily>] [-AsJob]
[-CimSession <CimSession[]> ] [-DefaultGateway <String>] [-IPv4Address <String>]
[-IPv6Address <String>] [-PassThru] [-PreferredLifetime <TimeSpan>]
[-PrefixLength <Byte>] [-PrefixOrigin <PrefixOrigin>] [-SkipAsSource <Boolean>]
[-Store <Store>] [-SuffixOrigin <SuffixOrigin>] [-ThrottleLimit <Int32>]
[-Type <Type>] [-ValidLifetime <TimeSpan>] [-Confirm] [-WhatIf] [<CommonParameters>]
Parameter Set: ByIfIndexOrIfAlias
New-NetIPAddress [-AddressFamily <AddressFamily>] [-AsJob]
[-CimSession <CimSession[]> ] [-DefaultGateway <String>] [-InterfaceAlias <String>]
[-InterfaceIndex <UInt32>] [-IPv4Address <String>] [-IPv6Address <String>]
[-PassThru] [-PreferredLifetime <TimeSpan>] [-PrefixLength <Byte>]
[-PrefixOrigin <PrefixOrigin>] [-SkipAsSource <Boolean>] [-Store <Store>]
[-SuffixOrigin <SuffixOrigin>] [-ThrottleLimit <Int32>] [-Type <Type>]
[-ValidLifetime <TimeSpan>] [-Confirm] [-WhatIf] [<CommonParameters>]

Fortunately, the new Show-Command cmdlet in Windows Server 2012 can help make the syntax of Windows PowerShell cmdlets easier to understand and use. Start typing the following command:

Show-Command New-NetIPAddress

When you run the preceding command, the properties page shown in Figure 6-6 opens to show you the different parameters you can use with the New-NetIPAddress cmdlet. Parameters such as InterfaceAlias and IPAddress that are marked with an asterisk are mandatory; those not marked this way are optional.

Clearly, to add a new IP address you first need to know the alias or index of the network interface to which you want to add the address. To find the interfaces on the system, you could use Get-Command *interface* to find all cmdlets that include “interface” in their name. Of the eight cmdlets displayed when you run this command, the cmdlet Get-NetIPInterface is the one you are looking for, and running this cmdlet displays a list of all interfaces on the server:

ifIndex InterfaceAlias AddressFamily NlMtu(Bytes) InterfaceMetric Dhcp
------- -------------- ------------- ------------ --------------- ----
12 Ethernet IPv6 1500 5 Disabled
14 Teredo Tunneling Pseudo... IPv6 1280 50 Disabled
13 isatap.{4B8DC8AE-DE20-4... IPv6 1280 50 Disabled
1 Loopback Pseudo-Interfa. IPv6 4294967295 50 Disabled
12 Ethernet IPv4 1500 5 Disabled
1 Loopback Pseudo-Interfa. IPv4 4294967295 50 Disabled

From the preceding command output, you can see that the interface you are looking for is identified by the alias “Ethernet.” To view the existing TCP/IP configuration of this interface, you can use the -InterfaceAlias parameter with the Get-NetIPAddress cmdlet as follows:

PS C:\> Get-NetIPAddress -InterfaceAlias Ethernet
IPAddress : fe80::cf8:11a1:2e3:d9bc%12
InterfaceIndex : 12
InterfaceAlias : Ethernet
AddressFamily : IPv6
Type : Unicast
PrefixLength : 64
PrefixOrigin : WellKnown
SuffixOrigin : Link
AddressState : Preferred
ValidLifetime : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource : False
PolicyStore : ActiveStore
IPAddress :
InterfaceIndex : 12
InterfaceAlias : Ethernet
AddressFamily : IPv4
Type : Unicast
PrefixLength : 24
PrefixOrigin : Manual
SuffixOrigin : Manual
AddressState : Preferred
ValidLifetime : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource : False
PolicyStore : ActiveStore

The preceding command output shows that the Ethernet interface currently has a manually configured IPv4 address with a /24 Classless Inter-Domain Routing (CIDR) prefix.

Returning to the open properties page displayed by Show-Command New-NetIPAddress, you can add a second IP address to the interface by specifying the parameter values.

If you click Copy in the properties page shown in Figure 6-7, the command is copied to the clipboard. The resulting command looks like this:

New-NetIPAddress -InterfaceAlias Ethernet -IPAddress `
-AddressFamily IPv4 -PrefixLength 24
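As a concrete sketch, the full command with a hypothetical address of 192.0.2.20 (a placeholder from the documentation address range, not a value from the original screenshots) would look like this:

```powershell
# Hypothetical example: replace 192.0.2.20 with the address you want to add
New-NetIPAddress -InterfaceAlias Ethernet -IPAddress 192.0.2.20 `
    -AddressFamily IPv4 -PrefixLength 24
```

The -PrefixLength 24 value matches the /24 prefix shown in the earlier Get-NetIPAddress output.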

If you click Run, the command executes. By using -InterfaceAlias with the Get-NetIPAddress cmdlet again, you can verify that the command accomplished the desired result:

PS C:\> Get-NetIPAddress -InterfaceAlias Ethernet
IPAddress : fe80::cf8:11a1:2e3:d9bc%12
InterfaceIndex : 12
InterfaceAlias : Ethernet
AddressFamily : IPv6
Type : Unicast
PrefixLength : 64
PrefixOrigin : WellKnown
SuffixOrigin : Link
AddressState : Preferred
ValidLifetime : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource : False
PolicyStore : ActiveStore
IPAddress :
InterfaceIndex : 12
InterfaceAlias : Ethernet
AddressFamily : IPv4
Type : Unicast
PrefixLength : 24
PrefixOrigin : Manual
SuffixOrigin : Manual
AddressState : Preferred
ValidLifetime : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource : False
PolicyStore : ActiveStore
IPAddress :
InterfaceIndex : 12
InterfaceAlias : Ethernet
AddressFamily : IPv4
Type : Unicast
PrefixLength : 24
PrefixOrigin : Manual
SuffixOrigin : Manual
AddressState : Preferred
ValidLifetime : Infinite ([TimeSpan]::MaxValue)
PreferredLifetime : Infinite ([TimeSpan]::MaxValue)
SkipAsSource : False
PolicyStore : ActiveStore


The best way to learn how to use Windows PowerShell to administer network settings and services on Windows Server 2012 is to experiment with performing different tasks in a test environment.

Posted in Windows 2012

How to create and configure SQL Server 2012 AlwaysOn

Posted by Alin D on November 27, 2012

One of the better-known features in the release of SQL Server 2012 Enterprise Edition is AlwaysOn. This has been designed to meet the ever-increasing need for ‘High Availability’ (HA). AlwaysOn does not use entirely new technologies but makes more effective use of existing technologies that are tried and tested. It aims to provide more granular control to achieve High Availability. Currently, depending on your environment, you could already be using one or more of the following HA components that existed in previous versions of SQL Server:

  • Single Site Windows Server Failover Clustering
  • Multi-Site Windows Server Failover Clustering
  • SAN-Level Block Replication
  • Transaction Log Shipping
  • Database Mirroring
  • Transactional Replication
  • Peer-to-Peer Replication

Some of these can take time and resources to implement, and may therefore not be meeting your current requirements. This is where SQL Server 2012 AlwaysOn can help, because it provides the benefits of:

  • Using the WSFC APIs to perform failovers. Shared storage is not required
  • Utilizing database mirroring for the data transfer over TCP/IP
  • Providing a combination of Synchronous and Asynchronous mirroring
  • Providing a logical grouping of similar databases via Availability Groups
  • Creating up to four readable secondary replicas
  • Allowing backups to be undertaken on a secondary replica
  • Performing DBCC statements against a secondary replica
  • Employing Built-in Compression & Encryption

I’ll explain some of these components of AlwaysOn below.

Windows Server Failover Clustering (WSFC)

Clustering technology has been around for quite some time, starting with Microsoft Cluster Server (MSCS) back in the NT 4.0 days. The technology for WSFC is part of the backbone of AlwaysOn. A WSFC cluster is a group of independent servers that work together to increase the availability of applications and services. It does this by monitoring the health of the active node and failing over to a backup node, with automatic transfer of resource ownership, when problems are detected.

Although the WSFC is able to span multiple subnets, a SQL Server which is cluster-aware has not, until now, been able to support a clustered instance of SQL Server across multiple subnets: It has therefore been quite expensive to set up clustering across multiple data centres due to the WSFC requiring shared storage in both data centres as well as the block level SAN replication. This has required a lot of work with your storage vendors to get your setup correct.

AlwaysOn Nodes

The nodes that you will use in your SQL Server 2012 AlwaysOn solution have to be part of a WSFC. The first step we need to undertake in preparing our AlwaysOn nodes is to add the Failover Cluster Feature to each node. I’ll go into detail later on in this article.

AlwaysOn Storage

In SQL Server versions prior to SQL Server 2012, a clustered instance on a WSFC required the storage to be presented as shared storage. This requirement made the storage more expensive and a little more complicated to configure and administer. With SQL Server 2012 AlwaysOn, your solution does not have to utilise shared storage, but can use SAN, DAS, NAS or local disk depending on your budget and requirements. I suggest working with your storage providers to come up with the solution you need.

Availability Groups

SQL Server 2012 AlwaysOn allows for more granular control of your environment with the introduction of AlwaysOn Availability Groups (AAGs). AAGs allow you to configure groups of databases that you would like to fail over together when there is a problem with the host server. When configuring your AAGs you:

  • Configure your AAG on the Primary Replica (Your AAG contains the group of DBs that you wish to group together to failover to your secondary replicas)
  • You will need to configure between one and four secondary replicas, with any combination of Synchronous (Maximum of two) and Asynchronous Mirroring (Your primary replica is available for read and write connectivity, while your secondary replicas can be configured for read-only, read intent or no access)

Maintenance Tasks/ Reporting

AlwaysOn allows you to use the secondary replicas that you created when you set up your AAGs to undertake some regular database maintenance tasks, removing some of the performance overhead from your primary production server. Some of the tasks that you could undertake on a secondary replica are:

  • Database Backups
    • Full Backup With Copy_Only
    • Transaction Log Backups
  • DBCC CheckDB
  • Reporting
  • Database Snapshots
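As a sketch of the first task, a copy-only full backup can be taken on a secondary replica with the Backup-SqlDatabase cmdlet from the SQLPS module that ships with SQL Server 2012; the instance name, database name and share path below are hypothetical:

```powershell
Import-Module SQLPS -DisableNameChecking

# Copy-only full backup on a readable secondary (hypothetical names and paths)
Backup-SqlDatabase -ServerInstance "SQLNODE2" -Database "DB1" `
    -BackupFile "\\backupshare\sql\DB1_CopyOnly.bak" -CopyOnly
```

Only copy-only full backups and regular transaction log backups are supported on secondaries, which is why -CopyOnly is used here.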

Security & Performance

To give you the full benefits of high availability, there will be a lot of movement of data. This brings with it security risks and higher bandwidth demands. To minimise these risks, Transparent Data Encryption (TDE) and backup compression are both shipped with the Enterprise Edition.

Implementing AlwaysOn

Now that we have covered the basics of what an AlwaysOn solution can look like, we are ready to plan for implementing this solution to meet your ever-increasing high-availability and DR requirements.

Building your AlwaysOn Cluster

In this scenario we are going to build a two-node SQL Server 2012 AlwaysOn Cluster. To achieve this, all of the nodes that are going to participate in the SQL Server AlwaysOn Cluster need to have .NET Framework 3.5.1 and the Failover Clustering feature enabled.
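Both prerequisites can also be enabled from an elevated PowerShell prompt on each node; this is a sketch using the ServerManager cmdlets:

```powershell
Import-Module ServerManager

# .NET Framework 3.5.1 and the Failover Clustering feature on this node
Add-WindowsFeature NET-Framework-Core, Failover-Clustering
```

Run the same command on every node that will participate in the cluster.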

Required features for Failover Cluster


Now that we have enabled both of these features we can build our WSFC. From the Control Panel | Administrative Tools | Failover Cluster Manager | Validate a Configuration, we can validate whether our servers are okay to participate in a WSFC.

Validate Failover Cluster


Building your Windows Server Failover Cluster

There is no difference between the task of building your WSFC for use with SQL Server 2012 AlwaysOn and your previously built WSFC for SQL Server 2008 R2. If you have never built a WSFC before, you can read more on this here Failover Cluster Step-By-Step Guide. In this article, I am not going to go through the WSFC build, but I need to mention that your WSFC build needs to pass all of the validation rules in order to give you a supported WSFC.
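If you prefer the command line, validation and cluster creation can also be sketched with the FailoverClusters module; the node names, cluster name and IP address below are hypothetical:

```powershell
Import-Module FailoverClusters

# Run the same validation tests the Validate a Configuration wizard performs
Test-Cluster -Node SQLNODE1, SQLNODE2

# Build the WSFC once validation passes
New-Cluster -Name SQLCLUSTER1 -Node SQLNODE1, SQLNODE2 -StaticAddress 192.0.2.50
```

As noted above, the validation report must come back clean for the cluster to be supported.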

SQL Server 2012 Setup

Now that we have our two nodes in our WSFC, we are ready to start the build process for our SQL Server 2012 AlwaysOn Cluster. We need to make sure that we have our installation media which is available for download from Microsoft SQL Server 2012 Downloads.

On Node1, we start the setup.exe to begin the installation process. We are greeted with the initial screen. You should navigate to the Installation Tab to start the installation, selecting ‘New SQL Server stand-alone installation or add features to an existing installation’.

Stand-Alone Installation

Enter your product key, click ‘Next’.

Enter Product Key


Accept the Terms and Conditions, click ‘Next’.

Ensure you select ‘SQL Server Feature Installation’, click ‘Next’.

SQL Server Feature Installation


Choose the features you need to install, click ‘Next’.

Recommended SQL Server Features

Your installation rules will be checked and, as long as there are no issues, you can continue with the installation by clicking ‘Next’.

Enter your SQL Server 2012 Instance Name for the Instance that you are building, click ‘Next’.

Type your Instance Name

Normally I would recommend having different service accounts for each of the SQL Services that you are installing. However, in this installation I am just using the default local accounts. You will need to have your Domain service accounts created and set the passwords on this Server Configuration screen in the installation. Once you have set the passwords, make sure you click on the Collation Tab so as to configure your Collation for the Instance, click ‘Next’.

Service Account Detail


On the Database Engine Configuration screen there are three tabs that we need to pay attention to. The Server Configuration Tab is where you set your security mode: either Windows Authentication (recommended) or Mixed Mode. Remember to add the account you are running the installation under, as well as any other individuals or groups that need to be members of the sysadmin role.

The Data Directories Tab allows you to specify where you want to have your User Databases, TempDB and backup locations to be stored. Traditionally you would have four separate drive locations depending on your storage for Data files, Log Files, TempDB and Backups.

The FileStream Tab allows you to Enable Filestream if this is a required option that you need in your environment.

Click ‘Next’ until you get to the ‘Ready to Install’ screen. At this point you should review what is going to be installed and, if you are happy, click the Install button.


Remember that these same steps need to be completed on the second node that you are including into your SQL Server 2012 AlwaysOn Cluster.

Configuring SQL Server 2012

Now that we have installed two stand-alone instances of SQL Server 2012 on our two servers in the WSFC we need to undertake some post-installation configuration. This is achieved by using the SQL Server Configuration Manager which is available from Start | All Programs | Microsoft SQL Server 2012 | Configuration Tools.

Because SQL Server 2012 AlwaysOn transfers data via TCP/IP, we need to enable the TCP/IP protocol in the instance’s Network Configuration. By default this will be disabled. Change the value to Enabled and click ‘OK’.
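The same protocol change can be scripted through the SQL Server SMO WMI provider; this sketch assumes a hypothetical default instance (MSSQLSERVER) and that the SQL Server 2012 management assemblies are installed locally:

```powershell
# Load the SMO WMI management assembly
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement")

$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer
$tcp = $wmi.ServerInstances["MSSQLSERVER"].ServerProtocols["Tcp"]
$tcp.IsEnabled = $true
$tcp.Alter()   # persist the change; restart the SQL Server service afterwards
```

As with the GUI route, the protocol change only takes effect after the service restarts.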

We are now at the main point of configuring our SQL Server 2012 AlwaysOn Cluster. Previously, when creating a clustered SQL Server instance, we had to use the clustered build option. You will have noticed that we have instead installed stand-alone instances of SQL Server on each of the nodes participating in the WSFC. We now need to enable AlwaysOn Availability Groups. In the ‘SQL Server Configuration Manager’ select the instance of SQL Server, right-click, and select Properties. On the ‘AlwaysOn High Availability’ Tab, tick the ‘Enable AlwaysOn Availability Groups’ check box.

Click ‘OK’. The changes will not take effect until the Instance is restarted. You will need to repeat this step on the second instance we installed. (This will need to be done on every instance in your SQL Server 2012 AlwaysOn Cluster)
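Alternatively, the SQLPS module that ships with SQL Server 2012 includes an Enable-SqlAlwaysOn cmdlet; this sketch (instance names hypothetical) enables the feature on both instances and restarts each service via -Force:

```powershell
Import-Module SQLPS -DisableNameChecking

# -Force restarts the service so the change takes effect immediately
Enable-SqlAlwaysOn -ServerInstance "SQLNODE1" -Force
Enable-SqlAlwaysOn -ServerInstance "SQLNODE2" -Force
```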

Enable AlwaysOn Availability Groups


We are now ready to start configuring our Availability Groups.

Configuring SQL Server 2012 AlwaysOn Availability Groups

Before SQL Server 2012, one of the options available for you to use to build your High Availability (HA) solution was to utilise Database Mirroring. The Database Mirroring technology is very good at what it was created for. However, it has some limitations when it comes to your HA solution. The limitations include:

  • A Single Secondary database
  • Mirrored database is accessible via db snapshot only until failover occurs
  • Lack of support for MSDTC (distributed transactions)
  • Related databases are not able to be grouped together

SQL Server 2012 AAG’s resolve most of these issues giving you more flexibility over your environment and more granular control over your environment to meet your ever growing complex HA requirements.

SQL Server 2012 AAGs still use the Database Mirroring technology to transfer your data via TCP/IP, either synchronously or asynchronously, to one or more replicas, but they give you the added advantage of being able to access those replicas. They still do not support transactional consistency across the databases participating in an availability group.

Availability Groups

As its name suggests, an Availability Group is a grouping of related databases. When you were setting up Database Mirroring before SQL Server 2012, you could set up multiple mirroring sessions, but each session mirrored only a single database. If you had multiple databases that were reliant on each other for the application to work, there was no simple way of ensuring that all of the databases failed over together. Availability Groups now allow you to group appropriate databases together. You can set up to 10 AAGs per instance, and across these 10 Availability Groups you can have up to 100 replica databases participating.

The benefits an Availability Group gives you come from its availability replicas and availability modes, described below.

Availability Replicas

Availability replicas provide you the ability to setup:

  • A primary replica which allows you to undertake read and write capabilities against those databases that have been configured in the AAG
  • Up to four secondary replicas, which allow you to have read-only capabilities against those databases that have been configured in the AAG, and also allow you to perform backups on these secondaries.

Availability Modes

As mentioned above, when configuring your SQL Server 2012 AlwaysOn Availability Groups, there are some considerations that need to be taken into account when determining what type of availability mode you can use.

If you want to use AAGs for a reporting process, you could have your secondary replica located in the same physical data centre and implement synchronous-commit mode, giving you a near-real-time, read-only group of databases to report against without impacting the performance of the primary databases with reporting overheads. You probably would not consider this type of availability mode where there are large distances between data centres.

If you have a reporting process that does not require near-real-time data, you could consider implementing your secondary replica in a separate data centre that may be more than 30-40 kilometres away. If this is the case, you would look at implementing asynchronous commits for your AAG. By implementing an asynchronous-commit method, you reduce the latency of the transactions on the primary site, but you open yourself up to the possibility of data loss.

As you can set up several secondary replicas, you are able to setup different availability modes in your environment. Each AAG is configured separately; for example: you may have two synchronous implementations and two asynchronous implementations.

In this example you would have your primary databases in AAG1 residing in DC1. You then set up a secondary replica, also located in DC1, in synchronous-commit mode, thereby allowing you to run your reporting requirements without the reporting overhead impacting your primary database. This also provides for your HA requirements, by having a secondary environment that is transactionally consistent and that you can fail over to in the event of an issue with your primary databases. You could then set up secondary replicas in DC2, DC3 & DC4 in asynchronous-commit mode. These asynchronous secondary replicas allow you to meet your DR requirements by having multiple copies in multiple geographically dispersed locations that you can fail over to in the event of an issue on the primary site.

Failing Over

As with Database Mirroring and Windows Server Failover Clustering, AlwaysOn Availability Groups provide the ability to fail over between the primary and secondary replicas that you have set up. There are three forms of failover which can be undertaken with AAGs:

  • Automatic – Supported by Synchronous-Commit Mode – No Data Loss
  • Manual – Supported by Synchronous-Commit Mode – No Data Loss
  • Forced – Supported by Asynchronous-Commit – Possible Data Loss

The Availability Mode that is in use will depend on whether you are implementing High Availability or Disaster Recovery. This affects the failover setup that you are going to implement in your SQL Server 2012 AlwaysOn environment.
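For reference, a manual or forced failover can also be driven from PowerShell with the SQLPS cmdlet Switch-SqlAvailabilityGroup, run against the secondary replica you are failing over to; the provider paths, instance and group names below are hypothetical:

```powershell
Import-Module SQLPS -DisableNameChecking

# Planned manual failover (synchronous-commit secondary, no data loss)
Switch-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQLNODE2\DEFAULT\AvailabilityGroups\AAG1"

# Forced failover (asynchronous-commit secondary, possible data loss):
# Switch-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQLNODE2\DEFAULT\AvailabilityGroups\AAG1" -AllowDataLoss
```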

Availability Group Listener

In order to take advantage of the various solutions that we have stepped through in this article, we need to set up and allow for the applications to maintain connectivity to the SQL Server Databases after a failover. This is where the AlwaysOn Availability Group Listeners (AAGL’s) come into use.

An Availability Group Listener is a Virtual Server Name that applications connect to. From the application’s point of view, it does not matter where the Availability Database is active and available for use. The AAGL consists of:

  • Virtual Network Name (VNN)
  • Listener Port
  • One or more Virtual IP Addresses (VIPs)

For your application to connect, you can either set up a connection string for your AAGL or connect directly to your SQL Server Instance. However, a direct connection does not give the failover support which this technology has been built for.

When a failover occurs for an AAG, the connection from the client is terminated. To gain access again, the client needs to reconnect to the AAGL. To achieve this, the application must be designed and built to poll for the AAGL. Depending on the connection that you are utilising:

  • Primary database
  • Secondary read replica

You will need to configure your ‘ApplicationIntent‘ in your AAGL connection string appropriately.
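As a sketch, the two connection strings might differ only in the ApplicationIntent keyword; the listener name, port and database here are hypothetical:

```powershell
# Read-write connection to the primary via the listener
$primaryConn  = "Server=tcp:AAGListener1,1433;Database=DB1;Integrated Security=SSPI"

# Read-only intent; the listener can route this to a readable secondary
$readOnlyConn = "Server=tcp:AAGListener1,1433;Database=DB1;Integrated Security=SSPI;ApplicationIntent=ReadOnly"

$conn = New-Object System.Data.SqlClient.SqlConnection($readOnlyConn)
```

Note that ApplicationIntent requires a client stack that understands it, such as SQL Server Native Client 11 or .NET Framework 4.5.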

With these points in mind, we are now able to create our first AAG in several ways:

  • Create Availability Group Wizard
  • T-SQL
  • PowerShell
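For the PowerShell route, the SQLPS module provides New-SqlAvailabilityReplica, New-SqlAvailabilityGroup and Join-SqlAvailabilityGroup; this is a minimal sketch with hypothetical server, endpoint, database and group names:

```powershell
Import-Module SQLPS -DisableNameChecking

# Define the replicas in memory first (-AsTemplate); -Version 11 = SQL Server 2012
$primary = New-SqlAvailabilityReplica -Name "SQLNODE1" `
    -EndpointUrl "TCP://SQLNODE1.contoso.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

$secondary = New-SqlAvailabilityReplica -Name "SQLNODE2" `
    -EndpointUrl "TCP://SQLNODE2.contoso.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

# Create the group on the primary instance, then join the secondary to it
New-SqlAvailabilityGroup -Name "AAG1" -Path "SQLSERVER:\SQL\SQLNODE1\DEFAULT" `
    -AvailabilityReplica @($primary, $secondary) -Database @("DB1", "DB2")

Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQLNODE2\DEFAULT" -Name "AAG1"
```

The wizard route below performs the same steps, including the initial data synchronisation, through the GUI.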

Expand the AlwaysOn High Availability tree | right-click Availability Groups | select New Availability Group Wizard.

New AlwaysOn Availability Group Wizard


Name your AAG, click ‘Next’.


Name your AlwaysOn Availability Group


Select the databases that you need to include in the AAG, click ‘Next’.

Availability Databases

Your primary replica will automatically be available for you to configure. Choose the Availability Mode, Failover strategy and Readable secondary requirements. Click ‘Add Replica’, connecting to your appropriate secondary servers. Ensure that you set your secondary the same as your primary.

Availability Databases



On the Listener Tab, give your AAGL a name, port number and the appropriate IP addresses, then click ‘Next’.

Every replica needs to have access to a shared location to access the database backups created and used for synchronising the secondary databases. Enter your share path, click ‘Next’.


Initial Data Synchronization



Ensure that you receive all green ticks for your validation, click ‘Next’.

Review the summary, click ‘Finish’.

My design was done in a lab. If you want to know how to build a lab, please visit my friend’s blog post.

Enjoy configuring your new SQL Server 2012 AlwaysOn environment.

Posted in SQL 2012

SCVMM PowerShell cmdlets can improve Hyper-V Live Migrations

Posted by Alin D on November 14, 2012

PowerShell cmdlets improve on Hyper-V Live Migration functionality even for small IT shops. With the addition of System Center Virtual Machine Manager and its PowerShell capabilities, those shops can precisely coordinate live migrations between nodes and Cluster Shared Volumes.

Microsoft System Center Virtual Machine Manager (SCVMM) includes a robust graphical user interface (GUI) from which administrators can perform Hyper-V live migrations. Even so, you may feel that SCVMM lacks the granular control needed to administer a complex, Hyper-V virtual infrastructure.

SCVMM PowerShell cmdlets, however, provide increased flexibility when managing Live Migration, far beyond what is possible with the GUI. With the following SCVMM PowerShell cmdlets and scripts, for example, you can live migrate an entire host’s worth of virtual machines (VMs) to a specific node, or transfer VMs based on their Cluster Shared Volumes assignments.

(Note: You must install the SCVMM console on the server or workstation from which you are running the script.)

Migrating all VMs from one node to another

Maintenance Mode is an SCVMM feature found within the graphical console that uses an Intelligent Placement algorithm to distribute all the VMs from the node of your choice to the remaining nodes in the cluster. But what if you want to maintain the same combination of VMs, just on a different node?

For example, in my virtual infrastructure, I have a mix of load balanced applications within a cluster. Placing load-balanced VM workloads on the same cluster node limits the effectiveness of the load balancing, especially in the event that a host fails.

Also, by maintaining the same mix of VMs on the destination node, you already know what the resource load will be. From my experience, even with a full free node in the cluster, SCVMM’s Maintenance Mode does not always move every source node VM to the empty destination node. That’s because the SCVMM Intelligent Placement feature constantly gathers load characteristics from each host to determine the best placement of VMs. As the free node receives additional VMs, its placement score goes down, causing SCVMM to distribute the remaining VMs to other cluster nodes.

From the GUI, there’s only one way to force the live migration of every VM on a node to another, named node: manually going through the migration wizard for each VM. But the following script overrides Intelligent Placement and synchronously moves the VMs to a specific cluster node, simply by answering a few prompts.

# ------------------------------------------------------------------------------
# Migrate All VMs to Alternate Node in the Same Cluster
# ------------------------------------------------------------------------------

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

$VMM = Read-Host "Enter Name of VMM Server"
$SH = Read-Host "Enter Source Host Name"
$DH = Read-Host "Enter Destination Host Name"

Get-VMMServer -ComputerName $VMM
Get-VM | ? {$_.HostName -like "$SH*"} | Move-VM -VMHost "$DH*"

To run the script, follow these steps:

  1. Save the SCVMM PowerShell script above (e.g., MigrateAllVMsOnNode_SCVMM.ps1).
  2. Open Windows PowerShell.
  3. Run the script.
  4. Answer the prompts for the SCVMM server, source node name and a destination node within the same cluster.
  5. Follow progress of the migration from the command status, Failover Cluster Manager or the SCVMM Jobs page.


Migrating VMs on a Cluster Shared Volume between nodes

This script identifies the VMs on a particular Cluster Shared Volume, and live migrates them to a specified destination node. Use this script to keep VMs on a Cluster Shared Volume together on the same host after a live migration.


Why would you want to do this? During normal cluster operations, a VM resource can utilize direct I/O from a volume shared between all nodes, and any node in the cluster can own that VM resource. But problems arise if you use Microsoft Data Protection Manager or other backup software based on the Hyper-V Volume Shadow Copy Service (VSS) writer. During a backup, only the node that owns the VM’s CSV has direct access to disk I/O. VMs that live on other nodes but share the same CSV will have their I/O redirected over the network, causing disk latency and degraded performance.

To avoid this issue, enact the following placement architecture to maintain full disk I/O, with no degraded performance, for all VMs on that CSV volume. To reorganize VMs after using Maintenance Mode, use this script to easily live migrate just the VMs on a particular CSV to the desired node.


# ——————————————————————————
# Live Migrate Virtual Machines On a Particular Volume to a New Host in Same Cluster
# ——————————————————————————

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

$VMM = Read-Host "Enter Name of VMM Server"
$SH = Read-Host "Enter Source Host Name"
$DH = Read-Host "Enter Destination Host Name"
$Vol = Read-Host "Enter Volume/CSV Volume to Move VMs to Destination Host"

Get-VMMServer -computername $VMM
Get-VM | ? {$_.hostname -like "$SH*"} | ? {$_.Location -like "*$Vol*"} | Move-VM -VMHost "$DH*"

  1. Save the PowerShell script above (e.g. MigrateAllVMsByVolume_SCVMM.ps1).
  2. Open Windows PowerShell.
  3. Run the script that you saved above.
  4. Answer the prompts for SCVMM server, source node name, destination node and the Cluster Shared Volume of VMs that you want to target.
  5. Follow progress of the migration from the command status, Failover Cluster Manager or the SCVMM Jobs page.

Further experimentations with SCVMM PowerShell cmdlets
The above scripts are just two examples of what you can do with the SCVMM PowerShell cmdlets. Because of the depth of information that SCVMM gathers about each VM, there are more ways to include or exclude VMs for Live Migration:

  • Name: You can live migrate VMs with a certain name attribute (e.g., using the -like comparison operator).
  • Memory: You can target VMs that are above or below a certain memory threshold.
  • Operating system: You can select virtual machines by operating system, such as Windows, Linux (quick migration only), etc.
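These filters can be sketched as small variations on the same pipeline. The host name "HOST02" and the thresholds below are hypothetical placeholders, and the property names (Name, Memory, OperatingSystem) assume the SCVMM 2008 R2 VM object; verify them in your environment before running anything.

```powershell
# Assumes the SCVMM snap-in is loaded and Get-VMMServer has been run,
# as in the scripts above. All target names are placeholders.

# By name: live migrate only VMs whose names begin with "Web"
Get-VM | ? {$_.Name -like "Web*"} | Move-VM -VMHost "HOST02"

# By memory: target VMs allocated more than 4 GB (Memory is reported in MB)
Get-VM | ? {$_.Memory -gt 4096} | Move-VM -VMHost "HOST02"

# By operating system: select only Windows guests
Get-VM | ? {$_.OperatingSystem -like "*Windows*"} | Move-VM -VMHost "HOST02"
```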


With SCVMM PowerShell cmdlets, you can customize your Live Migration experience far beyond what’s possible in the graphical console. System Center Virtual Machine Manager 2012 will add options within its updated cmdlets, but the GUI will still lack granular management functionality. As such, it is in your best interest to learn how to streamline administrative tasks with SCVMM PowerShell cmdlets.

Posted in System Center | Tagged: , , | Leave a Comment »

Active Directory Recycle Bin

Posted by Alin D on November 11, 2012

When your Active Directory forest is operating in the Windows Server 2008 R2 or higher mode, you can use the Active Directory Recycle Bin. The Active Directory Recycle Bin adds an easy-to-use recovery feature for Active Directory objects. When you enable this feature, all link-valued and non-link-valued attributes of a deleted object are preserved, allowing you to restore the object to the same state it was in before it was deleted. You can also recover objects from the recycle bin without having to initiate an authoritative restore. This differs substantially from the previously available technique, which used an authoritative restore to recover deleted objects from the Deleted Objects container. Previously, when you deleted an object, most of its non-link-valued attributes were cleared and all of its link-valued attributes were removed, which meant that although you could recover a deleted object, it was not restored to its previous state.

Preparing Schema for the Recycle Bin

Before you can make the recycle bin available, you must update the Active Directory schema with the required recycle bin attributes. You do this by preparing the forest and domain for the Windows Server 2008 R2 functional level or higher. When you do this, the schema is updated, and then every object in the forest is updated with the recycle bin attributes as well. This process is irreversible once it is started.

After you prepare Active Directory, you need to upgrade all domain controllers in your Active Directory forest to Windows Server 2008 R2 or higher and then raise the domain and forest functional levels to the Windows Server 2008 R2 level or higher. Optionally, you can update Active Directory schema in your forests and domains for Windows Server 2012 to enable the enhanced recycle bin.

After these operations, you can enable and access the recycle bin. Once Recycle Bin has been enabled, it cannot be disabled. Now when an Active Directory object is deleted, the object is put in a state referred to as logically deleted and moved to the Deleted Objects container. Also, its distinguished name is altered. A deleted object remains in the Deleted Objects container for the period of time set in the deleted object lifetime value, which is 180 days by default.
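Once the requirements are met, the feature can be switched on with a single command. This is a minimal sketch assuming the Active Directory module is loaded; the forest root name follows the examples later in this article, and enabling the feature is irreversible.

```powershell
# Enable the Active Directory Recycle Bin for the whole forest (irreversible)
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' `
    -Scope ForestOrConfigurationSet -Target 'windows-scripting.org'
```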

NOTE: The msDS-deletedObjectLifetime attribute replaces the tombstoneLifetime attribute. However, when msDS-deletedObjectLifetime is set to $null, the lifetime value comes from the tombstoneLifetime. If the tombstoneLifetime is also set to $null, the default value is 180 days.
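If you need a longer retention window, the attribute can be changed directly. The example below is a hedged sketch that raises the lifetime to 365 days; the distinguished name assumes the DC=windows-scripting,DC=org forest root used elsewhere in this article.

```powershell
# Raise the deleted object lifetime from the 180-day default to 365 days
Set-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=windows-scripting,DC=org" `
    -Replace @{"msDS-deletedObjectLifetime" = 365}
```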


Recovering Deleted Objects

If you elect not to use the recycle bin, you can still recover deleted objects from the Deleted Objects container by using an authoritative restore and other techniques I’ll discuss in this section. The procedure has not changed from previous releases
of Windows Server. What has changed, however, is that the objects are restored to their previous state with all link-valued and non-link-valued attributes preserved. To perform an authoritative restore, the domain controller must be in Directory Services Restore Mode.

Rather than using an authoritative restore and taking a domain controller offline, you can recover deleted objects by using the Ldp.exe administration tool or the Active Directory cmdlets for Windows PowerShell. If you updated the Active Directory schema in your forests and domains for Windows Server 2012, you also can enable the enhanced recycle bin, which allows you to recover deleted objects using Active Directory Administrative Center.

Keep in mind that Active Directory blocks access to an object for a short while after it is deleted. During this time, Active Directory processes the object’s link-value table to maintain referential integrity on the linked attribute’s values. Active Directory then permits access to the deleted object.

Using Ldp.exe for Basic Recovery

You can use Ldp.exe to display the Deleted Objects container and recover a deleted object by following these steps:

1. Type Ldp.exe in the Apps Search box, and then press Enter.

2. On the Options menu, tap or click Controls. In the Controls dialog box, select 
Return Deleted Objects in the Load Predefined list, and then tap or click OK.

3. Bind to the server that hosts the forest root domain by choosing Bind from the Connection menu. Select the Bind type, and then tap or click OK.

4. On the View menu, tap or click Tree. In the Tree View dialog box, use the BaseDN list to select the appropriate forest root domain name, such as DC=windows-scripting,DC=org, and then tap or click OK.

5. In the console tree, double-tap or double-click the root distinguished name and locate the CN=Deleted Objects container.

6. Locate and press and hold or right-click the Active Directory object you want to restore, and then tap or click Modify. This displays the Modify dialog box.

7. In the Edit Entry Attribute text box, type isDeleted. Do not enter anything in the Values text box.

8. Under Operation, tap or click Delete, and then tap or click Enter.

9. In the Edit Entry Attribute text box, type distinguishedName. In Values, 
type the original distinguished name of this Active Directory object.

10. Under Operation, tap or click Replace. Select the Extended check box, tap or click Enter, and then tap or click Run.

Using Windows PowerShell for Basic and Advanced Recovery

The Active Directory cmdlets for Windows PowerShell allow you to recover deleted objects using scripts or by typing commands at a PowerShell prompt. You use Get-ADObject to retrieve the object or objects you want to restore, pass that object or objects to Restore-ADObject, and then Restore-ADObject restores the object or objects to the directory database.

NOTE: The Active Directory module is not imported into Windows PowerShell by default. Import the Active Directory module by typing import-module activedirectory at the PowerShell prompt. For more information, see “Active Directory Administrative Center and Windows PowerShell” in Chapter 7.

To use the Active Directory cmdlets for recovery, you need to open an elevated, administrator PowerShell prompt by pressing and holding or right-clicking the Windows PowerShell entry on the menu and tapping or clicking Run As Administrator. The basic syntax for recovering an object is as follows:

Get-ADObject -Filter {ObjectId} -IncludeDeletedObjects | Restore-ADObject

ObjectId is a filter value that identifies the object you want to restore. For example, you could restore a deleted user account by display name or SAM account name as shown in these examples:

Get-ADObject -Filter {DisplayName -eq "Rich Tuppy"} -IncludeDeletedObjects | Restore-ADObject

Get-ADObject -Filter {SamAccountName -eq "richt"} -IncludeDeletedObjects | Restore-ADObject

Note that nested objects must be recovered from the highest level of the deleted hierarchy to a live parent container. For example, if you accidentally deleted an OU and all its related accounts, you need to restore the OU before you can restore the related accounts.

The basic syntax for restoring container objects such as an OU is as follows:

Get-ADObject -ldapFilter:"(msDS-LastKnownRDN=ContainerID)" -IncludeDeletedObjects | Restore-ADObject

ContainerID is a filter value that identifies the container object you want to restore. For example, you could restore the Corporate Services OU as shown in this example:

Get-ADObject -ldapFilter:"(msDS-LastKnownRDN=Corporate_Services)" -IncludeDeletedObjects | Restore-ADObject

If the OU contains accounts you also want to restore, you can now restore the accounts by using the technique discussed previously, or you can restore all accounts at the same time. The basic syntax requires that you establish a search base and associate the accounts with their last known parent, as shown here:

Get-ADObject -SearchBase "CN=Deleted Objects,ForestRootDN" -Filter {lastKnownParent -eq "ContainerCN,ForestRootDN"} -IncludeDeletedObjects | Restore-ADObject

ForestRootDN is the distinguished name of the forest root domain, such as DC=windows-scripting,DC=org, and ContainerCN is the common name of the container, such as OU=Corporate_Services or CN=Users. The following example restores all the accounts that were in the Corporate Services OU when it was deleted:

Get-ADObject -SearchBase "CN=Deleted Objects,DC=windows-scripting,DC=org" -Filter {lastKnownParent -eq "OU=Corporate_Services,DC=windows-scripting,DC=org"} -IncludeDeletedObjects | Restore-ADObject

Using the Enhanced Recycle Bin for Recovery

The enhanced recycle bin makes recovering deleted objects as easy as pointing and clicking or tapping and holding. Once you have updated the Active Directory schema in your forests and domains for Windows Server 2012, you enable the enhanced recycle bin for use by following these steps:

1. In Active Directory Administrative Center, the local domain is opened for management by default. If you want to work with a different domain, tap or click Manage and then tap or click Add Navigation Nodes. In the Add Navigation Nodes dialog box, select the domain you want to work with and then tap or click OK.

  2. Select the domain you want to work with by tapping or clicking it in the left pane. In the Tasks pane, tap or click Enable Recycle Bin and then tap or click OK in the confirmation dialog box.
  3. Active Directory will begin replicating the change to all domain controllers in the forest. Once the change is replicated, the enhanced recycle bin will be available for use. If you then tap or click Refresh in Active Directory Administrative Center, you’ll see that a Deleted Objects container is now available for domains using the enhanced recycle bin.

Keep in mind that the enhanced recycle bin is a forestwide option. When you enable this option in one domain of a forest, Active Directory replicates the change to all domain controllers in all domains of the forest.

With the enhanced recycle bin enabled, you can recover deleted objects with ease. In Active Directory Administrative Center, domains using the enhanced recycle bin will have a Deleted Objects container. In this container, you’ll see a list of deleted objects. As discussed previously, deleted objects remain in this container for the deleted object lifetime value, which is 180 days by default.

Each deleted object is listed by name, when it was deleted, the last known parent, and the type. When you select a deleted object by tapping or clicking it, you can use the options in the Tasks pane to work with it. The Restore option restores the object to its original container. For example, if the object was deleted from the Users container, it is restored to this container.

The Restore To option restores the object to an alternate container within its original domain or a different domain within the current forest. Specify the alternate container in the Restore To dialog box. For example, if the object was deleted from the Users container in the domain, you could restore it to the Devs OU in the domain.

Posted in TUTORIALS | Tagged: , | Leave a Comment »

BranchCache in Windows 2012

Posted by Alin D on November 9, 2012

The Windows Server 2012 BranchCache feature lets IT administrators optimize bandwidth between remote offices. When end users access files on remote sites over the WAN, BranchCache optimizes the link by caching that content locally. At remote sites, data is stored on a dedicated BranchCache server as an offline copy. You may also store this cache data across client computers in smaller deployments. When clients try to access the data in the central office, BranchCache transparently redirects them to the local copy. This provides much faster service than going over the WAN link.

BranchCache allows IT administrators to:

  • Implement a distributed infrastructure with reliable access to data in the central office
  • Support cloud solutions where the WAN connection is not running at optimal speed
  • Save costs by reducing the bandwidth used by clients
  • Implement WAN optimization with native Microsoft solutions


BranchCache Operation Modes

Distributed Cache Mode: This mode spreads the cache data across clients in the branch office. It is ideal for small environments with no infrastructure in the branch office.

Pros:

  • Allows cost savings
  • Utilizes existing storage for cache

Cons:

  • Optimization doesn’t work across subnets
  • Cache is not always available

Hosted Cache Mode: As the name indicates, cache data is stored on a dedicated server in the branch office. This is the recommended mode for larger branch offices.

Pros:

  • Cache data is always available
  • Optimization works across subnets

Cons:

  • Requires a dedicated server, so higher costs
  • Needs IT personnel for maintenance
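Both modes can be configured from PowerShell with the BranchCache cmdlets that ship in Windows Server 2012 and Windows 8; the hosted cache server name below is a placeholder.

```powershell
# On branch office clients, for distributed cache mode:
Enable-BCDistributed

# On the dedicated branch office server, for hosted cache mode:
Enable-BCHostedServer

# On branch office clients that should use that hosted cache server:
Enable-BCHostedClient -ServerNames "BranchSrv01"
```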




BranchCache Supported Applications

Microsoft BranchCache is compatible with the following application servers:

  • Microsoft IIS servers with the BranchCache feature installed
  • Microsoft file servers with the BranchCache feature installed
  • Application servers that utilize BITS (Background Intelligent Transfer Service)

BranchCache Improvements in Windows Server 2012

  • BranchCache now supports an unlimited number of hosted BranchCache sites
  • Better optimization with Windows Server 2012 file servers and web servers
  • Optimization achieved by segmenting files into smaller chunks
  • Automated client configuration for distributed mode with GPO
  • Cached information on remote sites is encrypted by default
  • Dedup within BranchCache data: only a single copy of duplicate content is cached

Posted in Windows 2012 | Tagged: , | Leave a Comment »

Active Directory features in Windows 2012

Posted by Alin D on November 5, 2012

Active Directory Domain Services in Windows Server 2012 has many additional features that give administrators more options for implementing and managing Active Directory, as summarized in the table below. At the least, these features require that you update the Active Directory schema in your forests and domains for Windows Server 2012. You also might need to update the domain, forest, or both functional levels to the new Windows Server 2012 operating level.

Active Directory–based activation: Allows you to use Active Directory to automatically activate clients running Windows 8 and Windows Server 2012. Any client connected to the service is activated. Requirements: Volume Licensing; Active Directory schema updated for Windows Server 2012; key set using the Volume Activation server role or the command line.

Claims-based policy controls: Allows access and audit policies to be defined flexibly. Requirements: claims policy enabled for the Default Domain Controllers Policy; file servers running Windows Server 2012; at least one Windows Server 2012 domain controller in the domain.

Deferred index creation: Allows index creation within the directory to be deferred until an UpdateSchemaNow request is received or the domain controller is rebooted. Requirements: domain controller running Windows Server 2012.

Enhanced Fine-Grained Password Policy: Allows administrators to use Active Directory Administrative Center for Windows Server 2012 to create and manage password-settings objects (PSOs). Requirements: Windows Server 2008 or higher domain functional level.

Enhanced Recycle Bin: Allows administrators to recover deleted objects using Active Directory Administrative Center for Windows Server 2012. Requirements: Recycle Bin enabled in the domain; Windows Server 2008 R2 or higher forest functional level.

Group Managed Service Accounts: Allows multiple services to share a single managed service account. Requirements: Active Directory schema updated for Windows Server 2012; at least one Windows Server 2012 domain controller; services running on Windows Server 2012.

Kerberos constrained delegation across domains: Allows managed service accounts to act on behalf of users across domains and forests. Requirements: at least one Windows Server 2012 domain controller in each affected domain; front-end server running Windows Server 2012; back-end server running Windows Server 2003 or later; and other requirements as well.

Kerberos with Armoring: Improves domain security; allows a domain-joined client and domain controller to communicate over a protected channel. Requirements: Windows Server 2012 domain controllers; Windows Server 2012 domain functional level; “Require FAST” policy enabled on clients; “Support CBAC and Kerberos Armoring” policy enabled on domain controllers.

Off-premises domain join: Allows a computer to be domain-joined over the Internet. Requirements: DirectAccess-enabled domain; domain controllers running Windows Server 2012.

Relative ID (RID) soft ceiling and warnings: Adds warnings as the global RID space is used up, plus a soft ceiling of 900 million RIDs used that prevents further RIDs from being issued until an administrator overrides it. Requirements: the domain controller holding the RID master role running Windows Server 2012; domain controllers running Windows Server 2012.

Server Manager integration: Allows you to perform all the steps required to deploy local and remote domain controllers. Requirements: Windows Server 2012; forest functional level of Windows Server 2003 or higher.

Virtual domain controller cloning: Allows you to safely deploy virtualized replicas of domain controllers and helps maintain domain controller state. Requirements: the domain controller holding the PDC Emulator role running Windows Server 2012; virtual domain controllers running Windows Server 2012.

Working with Domain Structures

Active Directory provides both logical and physical structures for network components. Logical structures help you organize directory objects and manage network accounts and shared resources. Logical structures include the following:

  • Organizational units A subgroup of domains that often mirrors the organization’s business or functional structure.
  • Domains A group of computers that share a common directory database.
  • Domain trees One or more domains that share a contiguous namespace.
  • Domain forests One or more domain trees that share common directory information.

Physical structures serve to facilitate network communication and to set physical boundaries around network resources. Physical structures that help you map the physical network structure include the following:
  • Subnets A network group with a specific IP address range and network mask.
  • Sites One or more subnets. Sites are used to configure directory access and replication.

Using Windows Server 2012 Functional Level

Like Windows Server 2008 R2, Windows Server 2012 runs only on 64-bit hardware, and you’ll likely need to install Windows Server 2012 on new hardware rather than on hardware designed for earlier releases of Windows Server. Unlike earlier releases of Windows Server, the domain and forest preparations required for updating Active Directory schema don’t need to be performed manually. Instead, when you use Server Manager for Windows Server 2012 and the forest functional level is Windows Server 2003 or higher, any necessary preparations are done automatically when you deploy a domain controller running Windows Server 2012. This means the Configuration Wizard automatically updates forest and domain schema.
You also have the option of manually preparing for Windows Server 2012. To do this, you can use Adprep.exe to update the forest and the domain schema so that they are compatible with Windows Server 2012 domains. The steps are similar to those discussed in the previous section.
After upgrading all domain controllers to Windows Server 2012, you can raise the domain and forest functional levels to take advantage of the latest Active Directory features. If you do this, only Windows Server 2012 domain controllers can be used in the domain.
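Raising the levels can be done from the GUI or from PowerShell. The following is a sketch assuming the Active Directory module is loaded and using a placeholder domain name; run it only after every domain controller has been upgraded.

```powershell
# Raise the domain, then the forest, to the Windows Server 2012 level
Set-ADDomainMode -Identity "windows-scripting.org" -DomainMode Windows2012Domain
Set-ADForestMode -Identity "windows-scripting.org" -ForestMode Windows2012Forest
```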


Posted in Windows 2012 | Leave a Comment »

Windows To Go explained

Posted by Alin D on October 31, 2012

With its Windows 8 feature called Windows To Go, Microsoft has turned the quaint USB stick into the key to transforming employees’ personal desktops into Windows 8 corporate desktops.

Though Microsoft claims that Windows To Go enables bring-your-own-device (BYOD) programs, it may be quite limited in the types of devices for which it will be useful.

Windows To Go lets IT boot a full, managed corporate Windows 8 image, along with users’ business apps, data and settings, to a Universal Serial Bus (USB) device. End users can then plug that USB stick into their own PCs or laptops to run a corporate Windows 8 desktop.

Using Windows To Go, “IT organizations can support the ‘Bring Your Own PC trend’ and businesses can give contingent staff access to the corporate environment without compromising security,” Microsoft said in a blog post on Tuesday.

But there are still plenty of unanswered questions about the technology. It is still unclear whether Windows To Go will actually work on popular non-Windows devices such as tablets. During a session on Windows To Go at its Build show last year, Microsoft provided a developer preview that could only be booted to x64 systems with a Windows Vista or Windows 7 logo.

This week, Microsoft offered no new information on whether the final version of Windows To Go will work with iPads, Android tablets, netbooks or other non-Windows devices.

Microsoft did say on its TechNet site that Windows To Go will not work on its Windows 8 ARM tablets. In addition, Windows 8 on ARM won’t connect to corporate domains, so enterprises that planned to integrate Windows 8 ARM tablets into their IT environments won’t be able to do so easily. (The Apple iPad doesn’t connect to corporate domains, either.)

“I can understand them not supporting Windows To Go on ARM, as it is a more locked-down version of Windows 8 anyway — so not really set up for enterprise users that are more likely to use Windows To Go,” said Ben Lowe, a consultant for the Tribal Labs blog who tested Windows To Go.

Although non-ARM Windows 8 tablets will also exist, it’s unclear whether Windows To Go will run on them.

Microsoft didn’t mention running Windows To Go on Apple’s Mac, but at least one developer has booted it on a MacBook Air.

While Windows To Go will certainly give companies a way to turn employees’ personal Windows machines into corporate Windows 8 desktops, it may not be much use in BYOD shops.

Lowe said he initially hoped that Windows To Go would provide “work/travel/home device nirvana,” where he could have an ARM-based tablet set up for the family that he could plug his work Windows To Go USB into and use for business trips and meetings.

Lowe said he also imagined plugging the USB into a dumb PC to turn it into a Windows 8 desktop. “Unfortunately, I can’t imagine that flicking between ARM and 64-bit hardware would ever work,” he said.

Still, the Windows To Go concept does hold some appeal.

“I use the law library at a local law school from time to time,” said Michael Cherry, an analyst at Directions on Microsoft, an independent analysis firm in Kirkland, Wash. “Instead of carrying my PC there, I could just take a USB drive with Windows To Go and use one of the computers they have in the library.”

In addition, questions about Windows licensing remain unanswered. Since Windows To Go is a corporate desktop feature, it may require Microsoft Enterprise Licenses and Software Assurance. Licensing will also determine use on non-Microsoft devices. Some analysts also wonder whether companies will have to buy a second copy of Windows.

“The answer to the licensing questions will answer how useful the [Windows To Go] approach is,” Cherry said.

Microsoft would not address questions about Windows To Go licensing this week.

As for management, once the Windows 8 USB drives are in use, IT managers can service the Windows image the same way they would handle a laptop or PC, using Group Policy or software distribution mechanisms. When a user logs on, the policies are pushed down to them, according to Microsoft.

IT can also encrypt Windows To Go by using passwords and BitLocker protection — which would minimize concerns about lost or stolen USB sticks.

Using the Windows To Go preview

Loading a Windows 8 Consumer Preview ISO image onto a USB 2 stick takes over four hours — plus a couple more hours to configure, Lowe said. He added that it would take far less time to load Windows 8 onto a USB 3.0 — which Microsoft recommends using.

Microsoft has not published hardware requirements, but it said during the session at Build that Windows To Go should be used on systems that minimize hub depth to external ports, firmware that supports reliable USB boot and Unified Extensible Firmware Interface (UEFI) firmware that supports USB-class boot entries. It requires at least 32-GB drives.

Each boot on a new computer can take 20 minutes to install drivers. After booting from the USB, Lowe said, the user interface is sluggish for roughly five minutes.

“Once it’s all there, then it seems quite snappy but really struggles again when trying to open apps — so much so that some Metro apps just won’t load,” he said. “Windows Update doesn’t seem to work either.”

Posted in Windows 8 | Leave a Comment »

CHKDSK utility and its improvements in Windows Server 2012

Posted by Alin D on October 25, 2012

As many of you may know from firsthand experience, the chkdsk command is a necessary evil Windows uses to ensure file system integrity. Because NTFS is not immune to file system corruption, Windows uses the chkdsk tool to fix transient and permanent problems such as bad sectors, lost files, missing headers and corrupt links. The downside is that chkdsk can take a long time to execute, depending on the number of files on the volume, and it requires exclusive access to the disk, which means users could be waiting from a few hours to a few days to access their data.

Chkdsk has evolved over the years just as disk drives continue to explode in size. Back in the mid-1990s with NT 3.51, a 1 GB disk was considered a large drive. Now, we have terabyte disks, combined with storage controller RAID functionality, which allows us to configure extremely large LUNs. As disks get larger, administrators leverage the capacity for more users per disk, which translates to more user files. Unfortunately, chkdsk does not scale well when analyzing hundreds of millions of files, so administrators are reluctant to use large volumes due to increased potential downtime.

Over the years, improvements have been made to hasten chkdsk’s execution time. Switches have been added to chkdsk to skip extensive index and folder structure checking. Failover clusters can also be configured to skip running chkdsk when a dirty volume is brought online. But these improvements only mask the underlying problem: Scanning a large disk with millions of files takes a very long time. The table below shows approximate chkdsk execution times for major versions of Windows.

Operating System Version 2 Million Files 3 Million Files
NT4 SP6 48 hours 100+ hours
Windows 2000 4 hours 6 hours
Windows 2003 0.4 hour 0.7 hour
  200 Million Files 300 Million Files
Windows 2008 R2 5 hours 6.25 hours

Chkdsk revamped

In Windows Server 2012 and in Windows 8, enterprise-class customers can finally have confidence when deploying multiterabyte volumes. Chkdsk has been redesigned to run in two separate phases: an online phase for scanning the disk for errors and an offline phase for repairing the volume. This was done because the vast majority of time spent executing chkdsk is spent scanning the volume, while the repair phase only takes a few seconds.

Better yet, most of the new chkdsk functionality has been implemented transparently, so you won’t even know it’s running. The analysis phase of chkdsk now runs as a background task. If NTFS suspects a problem in the file system, it attempts to self-heal it online. Errors of a transient nature are fixed on the fly with zero downtime. Any real corruption is flagged and logged for corrective action when it is convenient. In the meantime, the volume remains online to provide immediate access to your data.

Once every minute, the health of all physical disks is checked, and any problems are reported to event logs and management consoles, including the Action Center and the Server Manager. The corrective action usually involves remounting the drive, which takes just a few seconds. The amount of downtime for repairing corrupt volumes is now based on the number of errors to be fixed, not the size of the volume or the number of files.
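The same health state also surfaces through the new storage cmdlets, so you can check it interactively. A small sketch, assuming the Windows Server 2012 Storage module is available:

```powershell
# List each volume's file system health as seen by the online scanner;
# anything other than Healthy indicates logged corruption awaiting repair
Get-Volume | Select-Object DriveLetter, FileSystem, HealthStatus
```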

Windows Failover Clusters using cluster shared volumes (CSVs) also benefit from the integrated chkdsk design by transparently fixing errors on the fly. Whenever any corruption errors are detected, I/O is transparently paused while fixes are made to repair the volume and then automatically resumed. This added resiliency makes CSVs continuously available to users with zero offline time.

The command-line interface (CLI) chkdsk command is still available for fixing severely corrupt volumes. In fact, several new options have been added to support the new design, including /scan, /forceofflinefix, /spotfix and /offlinescanandfix. There is also a new cmdlet called Repair-Volume that offers the same chkdsk functionality through PowerShell. A brief description of the new PowerShell options is provided below.

Option              Description
Repair-Volume       PowerShell cmdlet that performs repairs on a volume
-OfflineScanAndFix  Takes the volume offline to scan and fix any errors. Equivalent to chkdsk /f.
-Scan               Scans the volume without attempting to repair it. All detected corruption is added to the $corrupt system file. Equivalent to chkdsk /scan.
-SpotFix            Takes the volume offline briefly and then fixes only the issues that are logged in the $corrupt file. Equivalent to chkdsk /spotfix.

Source: Microsoft TechNet
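
The new chkdsk switches take a drive letter just like the classic syntax. As a quick sketch, you can scan a volume while it stays online and then apply spot fixes during a brief offline window (drive T: is used here only as an example):

C:\> chkdsk T: /scan

C:\> chkdsk T: /spotfix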

For example, if you suspect severe corruption on a particular volume, you can manually repair the drive by first scanning it to record any errors in the $corrupt system file. Then, when it is convenient to take the drive offline briefly, use the -SpotFix option to fix the errors:

PS C:\> Repair-Volume -DriveLetter T -Scan

PS C:\> Repair-Volume -DriveLetter T -SpotFix

For more information on the Repair-Volume cmdlet, use the command Get-Help Repair-Volume -Full.

Windows Server 2012 has many improvements to increase the availability of your data. Now you can have very large disks with hundreds of millions of files and not have to worry about chkdsk slowing your boot time. While most of the new chkdsk functionality is implemented transparently, the CLI chkdsk tool and the new repair-volume PowerShell cmdlet provide administrators with the ability to fix volumes manually.


Posted in Windows 2012 | Leave a Comment »

Windows Server 2012 – Basic Analysis – Part 4

Posted by Alin D on October 20, 2012

Dedup and Windows Server 2012


Dedup, also known as data deduplication, is a feature that reduces the space used on a data volume by removing duplicate copies of data. This is also referred to as single-instance storage. If multiple users store the same copy of a file within the same data volume, data deduplication simply replaces the redundant copies with a pointer to a single instance. Many of the big players in the storage market, such as NetApp and EMC, have data deduplication built into their filers.

Windows Server 2012 now supports dedup natively. The feature is designed to provide these high-end storage capabilities without the need for expensive storage appliances. By default, Windows Server 2012 does not deduplicate files until a defined age has passed; the default is 30 days. Microsoft also designed the feature so that it does not consume too many resources while deduplication runs: if a server is low on memory or CPU, dedup simply backs off and waits until resources become available. Notably, Windows Server 2012 can deduplicate VHD files in addition to typical binary files. However, Microsoft does not recommend dedup on database files such as .edb, .mdf and .ldf. Applied to the right data, such as file shares and home folders, this feature helps IT admins reduce storage costs. Microsoft claims dedup can reduce data on file shares by 30% to 50%, and by a staggering 80% to 95% for VHD files. Way to go, Microsoft!

Below are the recommendations for the dedup feature based on data type:

Recommended      File servers, VHD files, software repositories, backups and other static data
Maybe            Application servers, FTP servers, web servers
Not recommended  Virtualization hosts, WSUS, database servers, or any data that changes very frequently



The deduplication feature has the following requirements:

  • Windows Server 2012 operating system
  • At least 4GB of RAM
  • 1 CPU core and 350MB of RAM for every 1.5TB worth of data
  • Must be a non-system volume; boot and system volumes are not supported
  • Mapped drives via net use are not supported; the volume must be local
  • Must be using NTFS with an MBR or GPT partition
  • Not supported on the ReFS file system
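
Before enabling dedup, it is worth confirming that a candidate volume meets the file system requirement. One way to check this (F: is just a sample drive letter) is:

PS C:\> Get-Volume -DriveLetter F | Format-List FileSystem, DriveType

A local NTFS data volume should report FileSystem as NTFS and DriveType as Fixed.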

To install the deduplication feature, use Server Manager: Server Roles > File and Storage Services > File Services > Data Deduplication.


File and storage services – File services Data Deduplication

Follow the steps below to deploy the “Data Deduplication” feature using PowerShell:

  1. Run the PowerShell commands below to install the feature:
Import-Module ServerManager
Add-WindowsFeature -Name FS-Data-Deduplication
Import-Module Deduplication
  2. To turn on the deduplication feature, use the command below (where F: is the volume):
Enable-DedupVolume F:
  3. To set the minimum file age (in days) before deduplication:
Set-DedupVolume F: -MinimumFileAgeDays 30
  4. To get a list of deduped volumes, run:
Get-DedupVolume
  5. To get dedup status, run:
Get-DedupStatus
  6. To start a dedup job manually, run:
Start-DedupJob F: -Type Optimization
  7. To get the current dedup schedule, run:
Get-DedupSchedule


How to calculate dedup rate

Installing the “Data Deduplication” feature automatically installs DDPEVAL.exe, a tool provided by Microsoft to estimate the savings dedup would achieve on existing data. This tool allows you to determine whether deduplication is effective for your data type before enabling it.
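
DDPEVAL.exe takes the path of a local folder or volume to evaluate. For example, to estimate the savings for an existing file share (the path below is just an example):

C:\> ddpeval.exe E:\FileShares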

Data Deduplication EVAL


Garbage Collection and Data Scrubbing

Garbage collection is a process that clears data chunks that are no longer referenced, deleting unreferenced content to free up additional space. Data scrubbing checks integrity and validates the checksum data. These jobs can be run manually with Start-DedupJob -Type Scrubbing and -Type GarbageCollection.
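
For example, these maintenance jobs can be started manually against a deduped volume (F: as in the earlier examples):

PS C:\> Start-DedupJob F: -Type GarbageCollection

PS C:\> Start-DedupJob F: -Type Scrubbing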


Backup and Dedup volumes

Since dedup reduces the size of the data on the volume, dedup-aware backups run a lot faster thanks to the reduced data size. Block-based backup applications continue to work without any change. However, file-based copies such as xcopy or Copy-Item copy the data in its original form (without dedup). As of now, the Windows Server 2012 backup feature supports dedup volumes. I assume third-party backup solutions such as BackupExec and NetBackup will catch on and support dedup volumes in optimized form.

New RMS Features in Windows Server 2012


The Windows Server 2012 RMS server allows end users to securely share data while taking advantage of digital certificates. This article describes the minor changes to RMS in Windows Server 2012 compared with previous versions.

SQL Server requirements

To provide better support for remote RMS deployments, Microsoft made some changes to the SQL Server requirements. In older versions of RMS, the setup process required the installation account to have local admin rights on the SQL Server. This is no longer the case with the Windows Server 2012 RMS installation: the Windows Server 2012 AD RMS installer only requires that the installation account hold the “sysadmin” role on the SQL Server. Windows Server 2012 RMS also supports SQL Server 2005 SP3, SQL Server 2008 SP3 and SQL Server 2008 R2 SP2.

PowerShell Deployment

Windows Server 2012 now allows installation of the RMS role via PowerShell. Here are a few example commands:

  • Add-WindowsFeature ADRMS -IncludeAllSubFeature -IncludeManagementTools
    This command installs all RMS role services, sub-features and management tools.
  • Add-WindowsFeature ADRMS-Server
    This installs only the RMS role services.
  • Add-WindowsFeature ADRMS-Identity
    This installs the RMS identity federation support.
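
After installation, you can verify which of these features are present by querying their state:

PS C:\> Get-WindowsFeature ADRMS, ADRMS-Server, ADRMS-Identity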

Installation of the RMS role via PowerShell


Posted in Windows 2012 | Leave a Comment »

How to use Powershell to prepare Active Directory for Exchange 2010 migration

Posted by Alin D on October 17, 2012

Before performing an Exchange Server 2010 migration, you have to make sure Active Directory meets certain prerequisites. Thankfully, there are a number of PowerShell cmdlets to help you prepare your Active Directory forest for the move.

Validating your Active Directory forest
Before deploying Exchange Server 2010, your Active Directory forest must meet several different conditions:

  • There must be at least one global catalog server running Windows Server 2003 SP1 or higher, and it must exist in the same site where the Exchange server will be installed.
  • The Active Directory forest must run at the Windows Server 2003 forest-functional level or higher.
  • The domain you’re going to install Exchange in must be at the Windows Server 2003 domain-functional level or higher.
  • The server holding the schema master role must run Windows Server 2003 SP1 or higher.

The easiest way to check if Active Directory meets these prerequisites is to open a PowerShell 2.0 session — not an Exchange Management Shell (EMS) session — and enter the following command:

Get-ADForest | Format-List Name, GlobalCatalogs, ForestMode, Domains, SchemaMaster, Sites

After executing this command, you will receive the forest name, the names of the global catalog servers within the forest, the forest-functional level, the names of the domains within the forest, the schema master name and the names of the Active Directory sites. All of this information is important when preparing your Active Directory forest for an Exchange 2010 migration.
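
Note that in a plain PowerShell 2.0 session the Active Directory cmdlets are not loaded automatically, so if Get-ADForest is not recognized, import the module first (this requires the Active Directory module, available through RSAT or on a domain controller):

PS C:\> Import-Module ActiveDirectory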

Checking the forest-functional level

In my environment, the Active Directory forest is already running at the Windows 2003 forest-functional level. If necessary, you can upgrade the forest-functional level using the following command:

Get-ADForest | Set-ADForestMode -ForestMode Windows2003Forest -Confirm:$True

Checking server versions
Both the Active Directory schema master and at least one global catalog server in each site must run Windows Server 2003 SP1 or higher. The Get-ADForest command revealed the identities of your global catalog servers and the schema master. However, you still need to determine which version of Windows those servers are running.

Enter the command below, but make sure to substitute your server’s NetBIOS name for <servername> and add a dollar sign ($) to the end of your server’s name. Otherwise, the command won’t work.

Get-ADComputer -Filter {SamAccountName -eq "<servername>$"} -Properties OperatingSystem, OperatingSystemServicePack | Format-List Name, OperatingSystem, OperatingSystemServicePack

Posted in Windows 2008 | Leave a Comment »

