Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Why companies should upgrade to Hyper-V 2012 R2

Posted by Alin D on April 9, 2014

Whether you’re running Windows 8.1 on your desktop and Hyper-V for a few test machines, or you’re deploying Windows 8.1 in a virtual desktop farm running Windows Server 2012 or Windows Server 2012 R2 on the back end, Hyper-V and Microsoft’s latest OS are good together — and here are some reasons why.

Generation 2 virtual machines

Windows 8.1 is designed to work within what Microsoft calls a “generation 2 virtual machine,” a new type of VM available exclusively under Hyper-V. This VM essentially strips away all pretense of virtualizing legacy PC deployments. Generation 2 VMs are UEFI-based, rather than relying on a BIOS, and there are no emulated devices. These VMs boot directly off virtual SCSI and network adapters, and they also support Secure Boot — the preferred way to ensure only signed kernels are permitted to boot within the VM.
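As a quick sketch, a generation 2 VM can be created with the Hyper-V PowerShell module; the VM name, paths and switch name below are placeholders for your own environment:

```powershell
# Create a generation 2 VM (UEFI-based, no emulated devices).
# Name, VHDX path and switch name are placeholders.
New-VM -Name "Win81-Gen2" `
       -Generation 2 `
       -MemoryStartupBytes 2GB `
       -NewVHDPath "D:\VMs\Win81-Gen2.vhdx" `
       -NewVHDSizeBytes 60GB `
       -SwitchName "External"

# Secure Boot is enabled by default on generation 2 VMs; verify it:
Get-VMFirmware -VMName "Win81-Gen2" | Select-Object SecureBoot
```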

The only supported guest operating systems are 64-bit versions of Windows 8 and Windows Server 2012 and later. But Windows 8.1 is easier to set up within one of these VMs because it includes the necessary gen 2 keyboard driver needed to type in a product key for setup. Otherwise, you’re flying blind, or pecking at the on-screen keyboard or tearing your hair out.

Why choose a generation 2 VM over the existing VM standard? First, it gets rid of all of the goo that makes the VM look like a physical computer. Unlike in the late ’90s, people now virtualize first in most environments. Windows 8.1 has a deep knowledge of what it means to be on a VM, so there is no need for the trappings and the pretense of trying to look like a physical computer and the performance overhead that comes with that.

Deduplication for VDI deployments

If you want to run Windows 8.1 in a virtual desktop infrastructure (VDI) environment, with scores of hosts running Windows 8.1 guests in VMs that users log into as their daily drivers, then the Windows Server 2012 R2 deduplication feature can save an enormous amount of space while increasing performance.

Instead of storing multiple copies of Windows 8.1 in your guest VMs’ disk drives, which is simply wasted space, Windows Server 2012 R2 supports deduplication. It reads and then optimizes the files on a volume by storing only one copy of a file and replacing the duplicate copies with “pointers” to that single stored instance.

This deduplication feature supports open VHD or VHDX files. It can deduplicate files on a volume with running VMs, so you don’t have to take your VDI farm down to begin using it. While the feature was first introduced in Windows Server 2012 Gold, the performance of the optimization algorithm has improved, and it completes faster than it did in the previous release.
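To sketch how this looks in practice, deduplication can be enabled on the volume that holds the VDI virtual disks with a few PowerShell commands; the drive letter here is a placeholder:

```powershell
# Requires the Data Deduplication feature:
#   Install-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType HyperV   # VDI-aware mode in 2012 R2
Start-DedupJob -Volume "E:" -Type Optimization      # kick off an optimization pass
Get-DedupStatus -Volume "E:"                        # inspect the space savings
```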

What’s the result? When the deduplication is finished, Windows can read those optimized files faster than if they were not deduplicated. And with a VDI deployment as opposed to just a typical file server, you can gain space savings up to 90% with minimal effect on performance.

Storage Quality of Service

Network Quality of Service (QoS) allows administrators to define caps for certain types of network traffic to ensure enough bandwidth exists for other activities and no one type of traffic sucks up the entire network pipe. Similarly, Hyper-V in Windows 8.1 and Windows Server 2012 R2 supports storage QoS. This feature lets you restrict disk throughput for overactive and disruptive VMs, which is great if you have a long-running process in one VM and you don’t want the overall I/O performance of your host machine dragged down. Since this feature is dynamically configurable, you can even adjust the QoS settings while the VM is running so you don’t interrupt the workload of a given VM.
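As an illustrative sketch, storage QoS is configured per virtual disk; the VM name, controller location and IOPS values below are assumptions you would tune for your own workload:

```powershell
# Cap a noisy VM's virtual disk at 500 IOPS and reserve 100 IOPS for it.
# IOPS are counted in normalized 8 KB units.
Set-VMHardDiskDrive -VMName "BatchJobVM" `
                    -ControllerType SCSI `
                    -ControllerNumber 0 `
                    -ControllerLocation 0 `
                    -MaximumIOPS 500 `
                    -MinimumIOPS 100
```

Because the setting is dynamic, re-running the command with new values adjusts the limits without restarting the VM.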

Online VHDX resize

Have you ever had a VM that runs out of disk space? If you set up the VM with a fixed-size virtual disk rather than a dynamically expanding disk, this could pose a problem. However, the new Hyper-V in Windows 8.1 and Windows Server 2012 R2 allows you to increase and decrease the size of virtual hard disks of a VM while the VM is running — a hot resize, if you will. The VM in question can be running any guest OS, so there are no limitations to running Windows XP or even Linux with this feature. But the virtual hard disk file must be in the newer format (VHDX) as opposed to the older (more popular) VHD format.
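A hot resize boils down to a single cmdlet; the path and sizes here are placeholders, and for the online case the disk must be attached to a virtual SCSI controller:

```powershell
# Grow a running VM's VHDX while the guest stays online.
Resize-VHD -Path "D:\VMs\AppServer.vhdx" -SizeBytes 80GB

# Shrinking is also possible, down to the smallest size the data allows:
# Resize-VHD -Path "D:\VMs\AppServer.vhdx" -ToMinimumSize
```

After growing the virtual disk, remember to extend the partition inside the guest (Disk Management or Resize-Partition) to actually use the new space.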

Enhanced VM connect

Hyper-V in Windows 8.1 and Windows Server 2012 R2 offers what Microsoft calls “enhanced virtual machine connect,” which is the ability to use the remote desktop protocol to connect to a VM — even if the network within the VM is down. Hyper-V uses VMBus, the internal communications channel for connecting VMs to the hypervisor. It then transmits RDP over the VMBus independent of the network connection. As part of this enhanced mode, you can drag and drop files between the host and the VM, which makes use of the clipboard sharing capabilities, and you can redirect local resources like smart cards, printers and USB devices right over that VMBus connection. This makes it easier to perform troubleshooting tasks and simple administration. This enhanced mode is enabled by default if you’re running Hyper-V on Windows 8.1, but disabled by default if you are running it on Windows Server 2012 R2.
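On Windows Server 2012 R2, turning enhanced session mode on is a one-liner at the host level:

```powershell
# The enhanced session mode policy is off by default on Server 2012 R2:
Set-VMHost -EnableEnhancedSessionMode $true
Get-VMHost | Select-Object EnableEnhancedSessionMode   # confirm the setting
```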


Posted in Windows 2012

Tips to optimize Transact-SQL queries

Posted by Alin D on November 14, 2013

SQL Server databases are the backbone of many enterprise applications, and good Transact-SQL (T-SQL) code is the best way to maximize SQL Server performance. Therefore, it is important for SQL developers to follow T-SQL best practices and guidelines when writing code. This article highlights some common T-SQL best practices to help ensure reliable, robust and efficient SQL code.

Choose appropriate data type

When you create a table, you must decide on the data type to use for column definitions. A data type defines the kind of data you can store in a column. You also use data types to define variables and stored procedure input and output parameters. You must select a data type for each column or variable appropriate to the data stored in that column or variable. In addition, you must consider storage requirements and choose data types that allow for efficient storage. For example, always use tinyint instead of smallint, int or bigint when you want to store whole positive integers between 0 and 255. This is because tinyint is a fixed 1-byte field, whereas smallint, int and bigint are fixed fields of 2, 4 and 8 bytes respectively.
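As a small illustrative example (the table and column names are invented), a 0–100 progress value fits comfortably in a tinyint:

```sql
-- tinyint (1 byte) is enough for a 0-255 value; using int or bigint
-- here would waste 3 or 7 extra bytes on every row.
CREATE TABLE dbo.TaskProgress
(
    TaskID          int IDENTITY(1,1) PRIMARY KEY,
    PercentComplete tinyint  NOT NULL,            -- 0 to 255 covers 0-100
    LastUpdated     datetime NOT NULL DEFAULT GETDATE()
);
```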

Choosing the right data types also improves data integrity. For example, if you use a datetime data type for a column of dates, then only dates can be stored in this column. However, if you use a character or numeric data type for the column, then eventually someone will be able to store any character or numeric value in that column, whether or not it represents a date.

Lastly, choosing the correct data type improves performance by helping the query optimizer arrive at the correct execution plan and by avoiding implicit data type conversions.

Avoid using DISTINCT or UNION clauses

Placing a DISTINCT clause in your T-SQL queries takes the results and removes duplicate rows. If you are sure that your query result set will only contain unique rows, one of the T-SQL best practices is to avoid using the DISTINCT clause, as it causes an unnecessary sorting operation.

The UNION clause also adds an additional sorting operation by eliminating duplicate records from two or more SELECT statements. If you use UNION to combine the results of two or more SELECT statements that contain only a single set of data, it is better to use the UNION ALL clause. The UNION ALL does not remove duplicates, and as a result requires the least SQL Server backend processing to perform the union operation.
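For example (the table names are hypothetical), when two result sets cannot overlap, UNION ALL skips the duplicate-removal sort that UNION performs:

```sql
-- Customers from two disjoint regions: no duplicates are possible,
-- so UNION ALL avoids an unnecessary sort/distinct operation.
SELECT CustomerID, CustomerName FROM dbo.Customers_Europe
UNION ALL
SELECT CustomerID, CustomerName FROM dbo.Customers_Asia;
```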

Avoid using NOLOCK query hint

The NOLOCK query hint is one of the most commonly used query hints, but it can also be one of the worst. Most developers think the only risk of using a NOLOCK hint is the possibility of reading inconsistent data, since the query reads rows without taking shared locks and doesn’t wait for other statements, such as UPDATE or DELETE, to commit. That is true, but there is more to it than just reading uncommitted rows. Transactions do more than just select, update and delete rows. For example, a transaction often updates an index, or runs out of space on a data page. This may require the allocation of new pages and the relocation of existing rows from the existing page to the new page, which is called a page split. Because of this, you may miss several rows or read rows twice, which usually is not possible if you run your queries without the NOLOCK query hint.

Provide full column lists for SELECT and INSERT statements

Another T-SQL best practice is to always provide the full column list required for a SELECT or INSERT statement. For example, if you use SELECT * FROM in your code or in a stored procedure, the column list is resolved each time you run the SELECT statement. Moreover, SELECT or INSERT statements can generate an error or return a different set of columns if the underlying table’s schema changes.

Therefore, when performing selects, avoid using SELECT * FROM [TableName]; instead, provide the full column list, as follows:

SELECT [col1],…[coln] FROM [TableName].

Similarly, when performing inserts, use a column list in the INSERT clause, as follows:

INSERT INTO [TableName] ([col1],[col2],…,[coln])

VALUES ('Value1','Value2',…,'ValueN')

Some more T-SQL best practices

Use SET NOCOUNT ON. Issue SET NOCOUNT ON within batches, stored procedures and triggers to increase performance. When it is specified, SQL Server does not send the message reporting the number of rows affected after each statement, which reduces network traffic.

Prefer EXISTS keyword over IN keyword. When checking for the existence of records, favor EXISTS keyword over the IN keyword. This is because the IN keyword operates on lists and returns the complete result set from subqueries before further processing. Subqueries using the EXISTS keyword return either TRUE or FALSE, which is faster because once the match is found, it will quit looking as the condition has proven true.
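A typical existence check (with hypothetical Customers and Orders tables) looks like this:

```sql
-- EXISTS can stop at the first matching order per customer,
-- instead of materializing the whole subquery result as IN would.
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1
              FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID);
```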

Avoid cursors. Avoid cursors as much as possible. Instead, use a set-based approach to updating or inserting data from one table to another.

Steer clear of dynamic SQL. Avoid using dynamic SQL; try to find alternatives that do not require dynamic SQL. If you use dynamic SQL, use sp_executesql instead of EXECUTE (EXEC) because sp_executesql is more efficient and versatile than EXECUTE. It supports parameter substitution and generates execution plans that are more likely to be reused by SQL Server.
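A minimal sketch of parameterized dynamic SQL (the table and parameter names are illustrative):

```sql
-- sp_executesql keeps @MinAmount as a real parameter, so the cached
-- plan can be reused for different values instead of compiling a new
-- plan for every concatenated EXEC string.
DECLARE @sql nvarchar(max) =
    N'SELECT OrderID, Amount FROM dbo.Orders WHERE Amount >= @MinAmount';

EXEC sp_executesql @sql,
                   N'@MinAmount money',
                   @MinAmount = 100;
```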

Use schema-qualified object names. Refer to table names with schema name prefixes. For example, use SELECT * FROM [SchemaName].[TableName] instead of SELECT * FROM [TableName].

Posted in SQL

SQL Server 2012 contained database from A to Z

Posted by Alin D on July 23, 2013

Of the many features introduced in SQL Server 2012, the SQL Server 2012 contained database could prove one of the most valuable. Unlike the typical SQL Server database, a SQL Server 2012 contained database is one that’s isolated from the SQL Server instance on which it resides, as well as from other databases on that instance. Such isolation makes managing databases easier, particularly when they’re being moved to a different instance or implemented in a failover cluster.

Prior to SQL Server 2012, all SQL Server databases were considered non-contained. Metadata specific to the database resided outside that database. The server’s default collation could impact queries against the database. And user authentication within the database was tied to the server-level logins defined on the instance.

In SQL Server 2012, all databases are still, by default, non-contained. However, you can now configure any non-system database as contained. That way, metadata will reside within the database it describes. In addition, because the collation model has been greatly simplified, all user data and temporary data will use the default database collation, and all other objects (metadata, temporary metadata, variables, among others) will use the catalog collation, which is Latin1_General_100_CI_AS_WS_KS_SC for all contained databases.

The most significant change that the SQL Server 2012 contained database brings is the “contained user,” a user account created specifically for the contained database. The account is not tied to a server-level login and provides access to the contained database only, without granting permission to other databases or to the instance as a whole.

SQL Server supports two types of contained users:

  • Database user with password: A local database account created with a username and password that are authenticated by the database.
  • Windows principal: A local database account based on a Windows user or group account but authenticated by the database.

You can add either one or both account types to a contained database. In fact, you can still add login-based accounts as well. That’s because SQL Server 2012 supports what are referred to as “partially contained databases,” rather than fully contained ones.

In a fully SQL Server 2012 contained database, no dependencies, such as a Service Broker route or login-based user account, can exist outside the database. But a partially contained database can support both contained and non-contained elements. That means, for example, that you can provide access to the database either through login-based accounts, contained user accounts or both. However, you can still achieve the equivalent of a fully contained database by eliminating any non-contained elements. (Whether SQL Server will eventually support fully contained databases is yet to be seen.)

The SQL Server 2012 contained database

After you’ve isolated your SQL Server 2012 contained database, you can easily move it from one SQL Server instance to another, without having to move a set of SQL Server logins. The contained database stores all the information it needs within that database. This process also makes it easier to set up your high-availability clusters. Because users connect directly to the database, they can easily connect to a second database if failover occurs.

Even if you’re not moving or clustering your databases, the SQL Server 2012 contained database can make user account management easier because you’re not trying to administer both SQL Server logins and database user accounts. You grant access to specific users to specific databases, without those users being able to access anything outside them.

Yet all this good news doesn’t come without a few downsides. For example, a contained database cannot use replication, change tracking or change data capture. And a contained database can raise security concerns. For instance, users granted the ALTER ANY USER permission can create user accounts and grant access to the database. In addition, the password hashes associated with contained user accounts are stored within the database, making them more susceptible to dictionary attacks. Plus, the contained user accounts cannot use Kerberos authentication, which is available only to the SQL Server login accounts that use Windows Authentication.

Despite their limitations, if contained databases are carefully implemented, you can sidestep some of the security issues and reap the benefits that database isolation provides. The SQL Server contained database offers a level of portability and manageability not seen before in SQL Server. Moving databases is easier. Failover is easier. Managing access is easier. Indeed, the contained database feature in SQL Server 2012 could prove beneficial for any organization looking to streamline its operations.

Configuring and implementing a SQL Server contained database

Before you can configure a SQL Server contained database, you must enable containment on your instance of SQL Server 2012. To do so, run the sp_configure system stored procedure and set the contained database authentication option to 1, as shown in the following T-SQL script:

EXEC sp_configure 'contained database authentication', 1;

RECONFIGURE;


As you can see, you must also run the RECONFIGURE statement for your setting to be implemented. Once you’ve done so, you’re ready to set up a SQL Server 2012 contained database. In your database definition, include the CONTAINMENT clause and set the containment type to PARTIAL, as shown in the following example:

USE master;
GO

CREATE DATABASE ContainedDB
CONTAINMENT = PARTIAL;
GO
You can just as easily include the CONTAINMENT clause in an ALTER DATABASE statement. In either case, once you’ve set up the database to be contained, you’re ready to add a contained user. In the following T-SQL script, the CREATE USER statement defines the user1 account and assigns a password and default schema:

USE ContainedDB;
GO

CREATE USER user1
WITH PASSWORD = N'<StrongPassword>', DEFAULT_SCHEMA = dbo;
GO

EXEC sp_addrolemember 'db_owner', 'user1';

Notice that the CREATE USER statement is followed by the sp_addrolemember stored procedure, which assigns the user to the db_owner role.

As the examples have demonstrated, you need to take only three steps to set up a SQL Server 2012 contained database: Enable containment, configure the database for containment, and create the contained user. And if you don’t want to script these settings, you can instead use the SQL Server Management Studio (SSMS) interface to configure them.

After you set up your contained environment, you should try to connect to the contained database by using the contained user to test the connection. In Object Explorer in SSMS, click the Connect button and then click Database Engine. When the Connect to Server dialog box appears, enter the instance name and credentials for the contained user.

If you click the Connect button at this point, you’ll receive an error. That’s because you must first specify the target database when connecting with a contained user account. (The database must be part of the connection string.) To add the database, click the Options button and type the name of the target database into the Connect to database text box.

Now when you click Connect, you’ll be connected to the specified instance, with access only to the contained database. Notice that the user and database names are included with the instance name. Also notice that only the ContainedDB database is included in the list of databases and that user1 is listed as one of the user accounts.

When working with contained databases, you’ll often want to ensure they’re as fully contained as possible. SQL Server 2012 provides two handy tools for working with containment:

  • sys.dm_db_uncontained_entities: A system view that lists any noncontained objects in the database. You can use this view to determine what items to address to ensure your database is as contained as possible.
  • sp_migrate_user_to_contained: A system stored procedure that converts a login-based user to a contained user. The stored procedure removes any dependencies between the database user and the login accounts.

By using these tools, you can achieve a status of full containment, making it easier to manage the database going forward.
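Putting the two tools together against the example ContainedDB might look like this (the login name is hypothetical):

```sql
USE ContainedDB;
GO

-- List anything that keeps the database from being fully contained:
SELECT class_desc, major_id, feature_name, feature_type_name
FROM sys.dm_db_uncontained_entities;

-- Convert a login-based user to a contained user with password,
-- and disable the now-unneeded server login:
EXEC sp_migrate_user_to_contained
    @username     = N'legacyuser',
    @rename       = N'keep_name',
    @disablelogin = N'disable_login';
```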

Posted in SQL, TUTORIALS

What you need to consider for SQL Security

Posted by Alin D on July 3, 2013

Most of the time SQL Server is a repository of sensitive information for organizations, which is why it is important to ensure that only authorized users have access to it. However, securing SQL Server in a way that won’t create errors is not an easy task, and as database administrators (DBAs) we have to perform a series of additional steps to harden the security configuration of our SQL Server implementation. Below I will describe what I usually take into consideration when securing a SQL Server.


Choose Windows Authentication mode

SQL Server supports two modes of authentication: Windows Authentication and Mixed Mode Authentication. In accordance with SQL Server security best practices, always choose Windows Authentication for your SQL Server installation unless legacy applications require Mixed Mode Authentication for backward compatibility and access.

Windows Authentication is more secure than Mixed Mode Authentication and, when enabled, Windows credentials (that is Kerberos or Windows NT LAN Manager [NTLM] authentication credentials) are trusted to log on to SQL Server. Windows logins use a number of encrypted messages to authenticate SQL Server and the passwords are not passed across the network during authentication. Moreover, Active Directory provides an additional level of security with the Kerberos protocol. As a result, authentication is more reliable and managing it can be reduced by leveraging Active Directory groups for role-based access to SQL Server. In comparison to Windows Authentication mode, Mixed Mode Authentication supports both Windows accounts and SQL-Server-specific accounts to log into SQL Server. The logon passwords of SQL logins are passed over the network for authentication, which makes SQL logins less secure than Windows logins.

Secure sysadmin account

The sysadmin (sa) account is vulnerable when it is left unchanged. Potential SQL Server attackers are aware of this, and it makes hacking one step easier if they take control of this powerful account. To prevent attacks on the sa account by name, rename the sa account to a different account name. To do that, in Object Explorer expand Logins, then right-click the sa account and choose Rename from the menu. Alternatively, execute the following T-SQL script to rename the sa account:

USE [master]

ALTER LOGIN sa WITH NAME = [<New-name>]

In addition to this, disable the sa account on your SQL Server instance.
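Disabling the account is a one-line change; substitute the new name if you have already renamed it:

```sql
USE [master];

-- Prevent the account from being used to connect at all:
ALTER LOGIN [sa] DISABLE;
```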

Use complex passwords for sa and SQL-Server-specific logins

When Mixed Mode Authentication is used, ensure that complex passwords are used for sa and all other SQL-Server-specific logins on SQL Server. First, check the “Enforce password expiration” and “Enforce password policy” options for sa and all other SQL logins. These two options ensure that all other SQL-Server-specific logins abide by the login policies of the underlying operating system. In addition to this, set the MUST_CHANGE option for any new SQL login. This ensures that logins must change their passwords on first logon.
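As a sketch, a new SQL login created with these options (the name and password are placeholders) would look like this:

```sql
-- MUST_CHANGE forces a password change at first logon and requires
-- CHECK_EXPIRATION and CHECK_POLICY to be ON.
CREATE LOGIN AppUser
WITH PASSWORD = N'<StrongP@ssw0rd>' MUST_CHANGE,
     CHECK_EXPIRATION = ON,
     CHECK_POLICY = ON;
```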

Membership of sysadmin fixed-server role and CONTROL SERVER permission

Carefully choose the membership of the sysadmin fixed server role, because members of this role can do whatever they want on SQL Server. Moreover, do not explicitly grant CONTROL SERVER permission to Windows logins, Windows Group logins or SQL logins, because logins with this permission get full administrative privileges over a SQL Server installation. By default, the sysadmin fixed server role has this permission granted explicitly.

SQL Server Administration

Avoid managing SQL Server instances using sa or any other SQL login account that has been granted CONTROL SERVER permission or is a member of sysadmin fixed-server role. Instead, institute dedicated Windows logins for DBAs, and assign these logins sysadmin rights on SQL Server for administration purposes. To grant permissions to users, use built-in fixed server roles and database roles, or create your own custom server roles and database roles that meet your needs of finer control over permissions.

Revoke guest user access

By default, the guest user exists in every user and system database, which is a potential security risk in a locked-down environment because it allows database access to logins that don’t have associated users in the database. Because of this potential security risk, disable guest user access in all user databases (guest cannot be disabled in master and tempdb, and should remain enabled in msdb). This ensures that members of the public server role are not able to access user databases on the SQL Server instance unless they have been assigned explicit access to these databases.
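Revoking guest access is a one-line change per database; the database name below is a placeholder:

```sql
-- Run in each user database; guest cannot be disabled in
-- master and tempdb.
USE SomeUserDB;
GO
REVOKE CONNECT FROM GUEST;
```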

Limit permissions assigned to a public role

Due to potential security risks, revoke public role access on extended stored procedures.

Furthermore, do not explicitly assign permissions to a public role on user and system stored procedures. To list the stored procedures that are available to a public role, execute the following query:

SELECT o.[name] AS [SPName], u.[name] AS [Role]

FROM [master]..[sysobjects] o

INNER JOIN [master]..[sysprotects] p

ON o.[id] = p.[id]

INNER JOIN [master]..[sysusers] u

ON p.[uid] = u.[uid]

AND p.[uid] = 0

AND o.[xtype] IN ('X','P')

Reduce SQL Server Surface Area

Configure your SQL Server installation with only the required features, and disable unwanted features after installation to reduce the system’s surface area. You can also use the Policy-Based Management feature to create system policies for implementing granular configuration settings for one or more SQL Server systems.

Hardening SQL Server Ports

Another SQL Server security best practice is to change the default ports associated with your SQL Server installation by using SQL Server Configuration Manager. Furthermore, use specific TCP ports instead of dynamic ports. In addition, make sure that the common TCP ports 1433 and 1434 are not used for client requests and communication, because these ports are well known, which makes them a common target for hackers.

Disable SQL Server Browser Service

Make sure that the SQL Server Browser service is only running on servers where multiple instances of SQL Server are installed. The SQL Server Browser service enumerates SQL Server information on the network, which is a potential security threat in a locked-down environment.
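On a single-instance server, the service can be stopped and disabled with PowerShell:

```powershell
# SQLBrowser is the service name for SQL Server Browser.
Stop-Service -Name "SQLBrowser"
Set-Service  -Name "SQLBrowser" -StartupType Disabled
```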

SQL Server service accounts

Create dedicated low-privilege domain accounts to run SQL Server services. In addition to this, review the membership of SQL Server service accounts on a regular basis, and ensure that they are not members of any domain users group or local groups that would grant them unnecessary permissions. For more information on the permission each SQL Server service account requires, see Configure Windows Service Accounts and Permissions.

Secure SQL Server ErrorLogs and registry keys

Secure SQL Server ErrorLogs and registry keys using NTFS permissions because they can reveal a great deal of information about the SQL Server instance and installation.


As time passes, securing data has become vital, and we must recognize and honor that, because this is the information that contains our financial, social, business and historical data. As DBAs, it is our prime responsibility to make sure it is taken care of and secure enough. These are the key points that I’ve collected so far.

Posted in Security, SQL

What you don’t know about Hyper-V virtual switches

Posted by Alin D on April 6, 2013

One of the most significant improvements in Windows Server 2012 is the presence of an extensible virtual switch at no additional cost. Below you can find some things you might not know about the extensible switch.

Replacing the virtual switch within Windows Server 2012 Hyper-V with a Cisco switch

Perhaps replace isn’t the right word, but you can certainly augment the virtual switch to the point of complete transformation. Cisco is offering the Nexus 1000V virtual switch to install alongside the virtual switch in Windows Server 2012, turning it into a fully managed, standards-compliant switch with a console — one that even supports software-defined networking (SDN) and the Cisco Open Network Environment. You can do this with competitor VMware, but at an additional cost; you get this capability built into the underlying operating system license with Hyper-V.

There are three supported types of extensibility with the switch

Third parties and in-house development teams can create switch extensions to extend the functionality of the switch, like Cisco did. You can create capturing extensions that read and inspect traffic but are unable to modify or drop packets. You can also create filtering extensions that inspect and read traffic and can drop, insert and modify packets directly in the transmission stream; firewall extensions for the virtual switch typically use this type of filter. And finally, you can create forwarding extensions that define the destination of packets, as well as capture and filter traffic. The capabilities of each type of extension build on one another.

The extensible switch supports access control lists via ports

This is really useful in multi-tenant deployments, where there are hosted virtual machines (VMs) for a variety of clients on the same set of machines, or for organizations with Chinese wall-type regulations that require data and access segregation. These companies can now use the same type of security right in the Hyper-V virtual network that has been possible in physical switches and network security devices. The Hyper-V virtual switch can filter port traffic based on IP addresses or ranges or via MAC addresses to identify the specific virtual network interface cards involved and ensure that networks are isolated. This also works with the isolated or private VLAN feature that lets the administrator set up isolated communities of tenants by securing traffic over individual VLANs within the virtual network.
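As a sketch of port ACLs (the VM name and subnet are illustrative), a tenant VM can be limited to its own subnet like this:

```powershell
# Allow only the tenant subnet to reach this VM's network adapter,
# then drop all other traffic in both directions.
Add-VMNetworkAdapterAcl -VMName "TenantA-Web" -RemoteIPAddress 10.0.10.0/24 `
                        -Direction Both -Action Allow
Add-VMNetworkAdapterAcl -VMName "TenantA-Web" -RemoteIPAddress 0.0.0.0/0 `
                        -Direction Both -Action Deny
```

The more specific allow entry takes precedence over the broader deny, so intra-subnet traffic still flows.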

There are trunking tools new to Windows that exist within the Hyper-V virtual switch

There is a set of traffic-routing capabilities that can run within a VM — making it like an appliance — as a switch extension (as previously described) or as a service on the hypervisor host. The designated monitoring port copies traffic to the specified VM. When you set the “trunk mode” on a given virtual switch port, all traffic on the virtual network is routed to that VM, making it sit “in front” of the traffic. Traffic is then distributed to other VMs. You can also create a capture extension instance that copies the traffic to a given service for other types of inspection or analysis, and you can set up another extension to tunnel traffic to another network destination as well.

You can manage the Hyper-V extensible virtual switch as an independent device from within System Center 2012

If you have deployed System Center 2012 Service Pack 1, you can add a virtual switch extension manager right to the Virtual Machine Manager console to monitor and manage the settings, features and capabilities of your VMs and the switch from within a single console. You can also do this with other virtual switch extension vendors like Cisco, but you need to first obtain provider software from the vendor, install it on the Virtual Machine Manager server and restart the service.

Posted in Windows 2012

Windows 2012 Cluster what you must know

Posted by Alin D on March 24, 2013

Like it or not, Active Directory is a vital component of Windows Failover Clusters and can adversely affect its stability. Have you ever experienced the dreaded NETLOGON event, indicating that no domain controllers could be contacted, so your cluster fails to start? How about being unable to create a cluster or virtual servers due to restrictive organizational unit permissions? Microsoft has recognized these and other common AD problems and made significant efforts to fix these shortcomings in Windows Server 2012.

Cluster startup without Active Directory

Perhaps one of the most catastrophic events a cluster can face is when it can’t contact a domain controller (DC) during formation. A different scenario leading to this same problem occurs when you attempt to virtualize your DCs as virtual machines in a Windows failover cluster. The cluster must contact a DC to start, but the virtual DC can’t start until the cluster does. This reliance on AD for a cluster to form has been eliminated in Windows Server 2012.

You’ll need to create a Windows Server 2012 cluster by contacting a DC and storing its authentication data in AD, along with any cluster members, for this function to work. Then existing clusters can start up without having to first contact a DC for authentication. Prior to Windows Server 2012, it was supported, although not recommended, to run the AD Domain Services role on cluster members to make them more resilient to AD connectivity issues. This is no longer necessary, nor is it supported to run domain controllers as cluster nodes, as Microsoft documents in KB 281662.

Flexible OU administration

Another AD shortcoming that has been addressed in Windows Server 2012 is the ability to specify in which organizational unit (OU) the computer objects for the cluster will be created. In the past, when a cluster was created, the Cluster Name Object (CNO) was created in the default Computers container rather than in the OU where the cluster members reside. This prevented admins from delegating control over specific OUs for the purpose of managing failover clusters without going through painful prestaging efforts.

In Windows Server 2012, both the Create Cluster Wizard and the PowerShell cmdlet New-Cluster allow you to specify in which OU the CNO will be created. In addition, any Virtual Computer Objects (VCO) for the network names associated with highly available cluster roles will be created in the same OU. The user account that creates the cluster must have the Create Computer Objects permission in the specified OU. In turn, the newly created CNO must have Create Computer Objects permission to create VCOs. You can move all these computer objects to a different OU at any point — without disrupting the cluster. Keep in mind the custom OU must be created before you create your cluster.

Unfortunately, the syntax for specifying the OU where the CNO should be created isn't intuitive in either the Create Cluster Wizard or the corresponding PowerShell New-Cluster cmdlet: both take the distinguished name of the cluster, which encodes the OU where it will reside. In our example, the name of the cluster is Win2012Clus1, and its CNO will be created in the ClusterServers OU in the fictitious domain.

Next, look at the syntax for creating a cluster using the PowerShell New-Cluster cmdlet. In this example, the command creates a cluster named Win2012Cluster1, placing the CNO in the ClusterServers OU in the fictitious domain and using a static IP address.
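As a sketch of that command (the contoso.com domain, node names and IP address here are placeholders, not values from the original example), the distinguished name is passed directly to -Name:

```powershell
# Hypothetical values: adjust the OU path, node names and IP for your environment.
# The DN places the CNO in the ClusterServers OU instead of the Computers container.
New-Cluster -Name "CN=Win2012Cluster1,OU=ClusterServers,DC=contoso,DC=com" `
    -Node Node1, Node2 -StaticAddress 192.168.1.50
```

Remember that the account running this must hold Create Computer Objects permission on the ClusterServers OU, and the OU must exist before the cluster is created.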

After you create the Windows failover cluster, use Active Directory Users and Computers to view and manage the new CNO placed in the custom OU called ClusterServers. Any new cluster roles that are configured will create their VCOs in the same OU.

Additional cluster and Active Directory enhancements

With Windows Server 2012, you can have a failover cluster located in a remote branch office or behind a DMZ with a Read-Only Domain Controller (RODC). While the CNO and VCOs must be created beforehand on a read-write domain controller (RWDC), as described by Microsoft, the server supports the configuration.

Finally, AD Cluster computer objects are now created with the Protect Object from Accidental Deletion flag to ensure automated stale object scripts don’t delete them. If the account that creates the cluster doesn’t have this right for the OU, it will still create the object, but won’t protect it from accidental deletion. A system event ID 1222 will be logged to alert you, and you can follow Microsoft KB 2770582 to fix the issue.

Microsoft has taken several steps in Windows Server 2012 to address the AD pitfalls that Windows failover clusters have endured over the years. Some of the top integration enhancements include booting clusters without AD, more flexible OU administration, support for clusters in Branch Offices and DMZs with RODCs and protecting cluster AD objects from deletion.


Posted in Windows 2012 | Leave a Comment »

What you need to know about Deduplication in Windows Server 2012

Posted by Alin D on March 8, 2013

Talk to most administrators about deduplication and the usual response is: Why? Disk space is getting cheaper all the time, with I/O speeds ramping up along with it. The discussion often ends there with a shrug.

But the problem isn’t how much you’re storing or how fast you can get to it. The problem is whether the improvements in storage per gigabyte or I/O throughputs are being outpaced by the amount of data being stored in your organization. The more we can store, the more we do store. And while deduplication is not a magic bullet, it is one of many strategies that can be used to cut into data storage demands.

Microsoft added a deduplication subsystem feature in Windows Server 2012, which provides a way to perform deduplication on all volumes managed by a given instance of Windows Server. Instead of relegating deduplication duty to a piece of hardware or a software layer, it’s done in the OS on both a block and file level — meaning that many kinds of data (such as multiple instances of a virtual machine) can be successfully deduplicated with minimal overhead.

If you plan to implement Windows Server 2012 deduplication technology, be sure you understand these seven points:

1. Deduplication is not enabled by default

Don’t upgrade to Windows Server 2012 and expect to see space savings automatically appear. Deduplication is treated as a file-and-storage service feature, rather than a core OS component. To that end, you must enable it and manually configure it in Server Roles | File And Storage Services | File and iSCSI Services. Once enabled, it also needs to be configured on a volume-by-volume basis.
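The same enable-and-configure sequence can be done from PowerShell. A minimal sketch, assuming the volume letter E: as a placeholder:

```powershell
# Install the deduplication role service (under File and Storage Services)
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on a specific volume (E: is a placeholder)
Enable-DedupVolume -Volume E:

# Confirm the volume is now configured and see its savings so far
Get-DedupVolume
```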

2. Deduplication won’t burden the system

Microsoft put a fair amount of thought into setting up deduplication so it has a small system footprint and can run even on servers that have a heavy load. Here are three reasons why:

a. Content is only deduplicated after n number of days, with n being 5 by default, but this is user-configurable. This time delay keeps the deduplicator from trying to process content that is currently and aggressively being used or from processing files as they’re being written to disk (which would constitute a major performance hit).

b. Deduplication can be constrained by directory or file type. If you want to exclude certain kinds of files or folders from deduplication, you can specify those as well.

c. The deduplication process is self-throttling and can be run at varying priority levels. You can set the actual deduplication process to run at low priority and it will pause itself if the system is under heavy load. You can also set a window of time for the deduplicator to run at full speed, during off-hours, for example.

This way, with a little admin oversight, deduplication can be put into place on even a busy server and not impact its performance.
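The three knobs above correspond to volume settings and schedules. A hedged sketch, where the drive letter, excluded folder, day count and schedule window are all illustrative values:

```powershell
# (a) Raise the file-age threshold from the 5-day default to 10 days and
# (b) exclude a hot working folder (E:\Scratch is a placeholder)
Set-DedupVolume -Volume E: -MinimumFileAgeDays 10 -ExcludeFolder "E:\Scratch"

# (c) Give the optimization job a full-speed window during off-hours
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization `
    -Start 22:00 -DurationHours 6 `
    -Days Monday, Tuesday, Wednesday, Thursday, Friday
```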

3. Deduplicated volumes are ‘atomic units’

Being an ‘atomic unit’ means that all of the deduplication information about a given volume is kept on that volume, so it can be moved intact to another system that supports deduplication. If you move it to a system that doesn’t have deduplication, you’ll only be able to see the nondeduplicated files. The best rule is not to move a deduplicated volume unless it’s to another Windows Server 2012 machine.

4. Deduplication works with BranchCache

If you have a branch server also running deduplication, it shares data about deduped files with the central server and thus cuts down on the amount of data needed to be sent between the two.

5. Backing up deduplicated volumes can be tricky

A block-based backup solution — e.g., a disk-image backup method — should work as-is and will preserve all deduplication data.

File-based backups will also work, but they won’t preserve deduplication data unless they’re dedupe-aware. They’ll back up everything in its original, discrete, undeduplicated form. What’s more, this means backup media should be large enough to hold the undeduplicated data as well.

The native Windows Server Backup solution is dedupe-aware, although any third-party backup products for Windows Server 2012 should be checked to see if deduplication awareness is either present or being added in a future revision.

6. More is better when it comes to cores and memory

Microsoft recommends devoting at least one CPU core and 350 MB of free memory to process one volume at a time, with around 100 GB of storage processed in an hour (without interruptions) or around 2 TB a day. The more parallelism you have to spare, the more volumes you can simultaneously process.

7. Deduplication mileage may vary

Microsoft has crunched its own numbers and found that the nature of the deployment affected the amount of space savings. Multiple OS instances on virtual hard disks (VHDs) exhibited a great deal of savings because of the amount of redundant material between them; user folders, less so.

In its rundown of what are good and bad candidates for deduping, Microsoft notes that live Exchange Server databases are actually poor candidates. This sounds counterintuitive; you’d think an Exchange mailbox database might have a lot of redundant data in it. But the constantly changing nature of data (messages being moved, deleted, created, etc.) offsets the gains in throughput and storage savings made by deduplication. However, an Exchange Server backup volume is a better candidate since it changes less often and can be deduplicated without visibly slowing things down.

How much you actually get from deduplication in your particular setting is the real test for whether to use it. Therefore, it’s best to start provisionally, perhaps on a staging server where you can set the “crawl rate” for deduplication as high as needed, see how much space savings you get with your data and then establish a schedule for performing deduplication on your own live servers.

Posted in Windows 2012 | Tagged: , | Leave a Comment »

Dynamic Quorum improves Windows 2012 Cluster High Availability

Posted by Alin D on February 21, 2013

Determined to make Windows Failover Clusters as resilient as possible, Microsoft has once again made significant improvements to its quorum mechanisms in Windows Server 2012. The Dynamic Quorum Management option allows the cluster to dynamically adjust the quorum (or majority) of votes required for the cluster to continue running. This prevents the loss of quorum when nodes fail or shut down sequentially, allowing the cluster to continue running with less than a majority of active nodes.

In addition to dynamic quorum, multisite geoclusters now benefit from the ability to specify which nodes receive votes and which ones don’t. This allows you to bias a particular site (e.g., the primary site) to have the controlling votes needed to maintain quorum. It also prevents a split-brain scenario from occurring if the secondary site tries to update the cluster database while the primary site is down.

Configuring Dynamic Quorum in Windows Server 2012

The principle behind quorum in a failover cluster environment is to ensure that only a majority of nodes can form and participate in a cluster. This prevents a second subset of nodes from forming a separate cluster that can access the same shared resources in an uncoordinated fashion, which can lead to corruption. When nodes are shut down or fail, there are fewer active nodes remaining to maintain the static quorum value of votes needed for the cluster to function. The new Dynamic Quorum Management dynamically adjusts the votes of remaining active nodes to ensure that quorum can be maintained in the event of yet another node failure or shutdown.

There are a few requirements that must be met before the Dynamic Quorum mechanism kicks in. First, Dynamic Quorum must be enabled, which it is, by default, in Windows Server 2012. The Failover Cluster Manager can be used to view or modify the Dynamic Quorum option by running the Configure Cluster Quorum Wizard. Start the wizard by highlighting the cluster in the left-hand pane, right-clicking on it, selecting More Actions and then choosing Configure Cluster Quorum Settings.


The Quorum Wizard prompts you to select from several different quorum configurations depending on your environment (Typical, Add/Change or Advanced). By default, the cluster will use the typical settings for your configuration to establish the quorum management options. You can also add or change the quorum witness if one was selected during the installation process.

To view or change the Dynamic Quorum Management option, use the Advanced quorum configuration option, as seen above. Stepping through the Quorum Wizard, it will prompt you to Configure Quorum Management. This is where you can view or change the Dynamic Quorum option.

Allow Dynamic Quorum Management

You can also view or modify the cluster’s Dynamic Quorum setting by using PowerShell cmdlets. The first cmdlet, Get-Cluster, as shown below, reveals the current Dynamic Quorum setting (0=disabled, 1=enabled). You can then use PowerShell to enable Dynamic Quorum by establishing the variable $cluster with Get-Cluster and then setting the property DynamicQuorum to a value of 1.
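For instance (the cluster name is a placeholder), the sequence looks like this:

```powershell
# Check the current Dynamic Quorum setting: 0 = disabled, 1 = enabled
$cluster = Get-Cluster -Name Win2012Clus1
$cluster.DynamicQuorum

# Enable Dynamic Quorum on the cluster
$cluster.DynamicQuorum = 1
```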

With Dynamic Quorum enabled, the next condition that must be met is that the cluster must be up and running and currently sustaining quorum based on the initial cluster configuration. The final condition for Dynamic Quorum to work is that any subsequent node failures or shutdowns must be experienced sequentially — not with multiple nodes going down at the same time. A lengthier cluster regroup operation would occur if multiple nodes exited the cluster simultaneously.

Dynamic Weight

You can use PowerShell to view the number of votes and observe the inner workings of the Dynamic Quorum mechanism. By default, each server in the cluster gets a single vote, or NodeWeight. When Dynamic Quorum is enabled, an additional property called DynamicWeight is used to track a server’s dynamic vote toward quorum. The cluster will adjust a node’s dynamic weight to zero, if necessary, to avoid losing quorum, should another node exit the cluster. The PowerShell cmdlet reveals the NodeWeight and DynamicWeight for a two-node cluster.
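A sketch of that query (the column selection and formatting are mine):

```powershell
# Show each node's static vote (NodeWeight) and current dynamic vote (DynamicWeight)
Get-ClusterNode | Format-Table -AutoSize Name, State, NodeWeight, DynamicWeight
```

Watching this output while shutting nodes down one at a time shows the cluster zeroing a DynamicWeight as needed to keep an odd number of votes.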

PowerShell cmdlet Get-ClusterNode

Dynamic Quorum allows cluster nodes to be individually shut down or fail to the point where just a single node is left functioning (“last man standing”). Just as quorum is dynamically adjusted downward as nodes fail or are shut down in the cluster, quorum is adjusted upward as nodes are rebooted back into the cluster.

Using weighted votes to assign nodes

The other major enhancement to the quorum mechanism in Windows Server 2012 is the ability to specify which nodes in a cluster receive a vote. As mentioned, all nodes receive a vote that contributes toward quorum by default. In multisite geocluster configurations, it may be beneficial to give nodes in the primary site a vote to ensure they keep running in the event of a network failure between sites. Nodes in the secondary site can be configured with zero votes so they cannot form a cluster.

You can use the Quorum Wizard (Advanced Quorum Configurations) to configure whether a node receives a vote. The wizard also allows you to see how Node1 is given a vote and Node2 is not.

Quorum Wizard


Alternatively, you can use PowerShell to specify whether a node receives a vote. Use the Get-ClusterNode cmdlet to set the NodeWeight for Node2 back to 1 so that it receives a vote.
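As a sketch (the node name Node2 is assumed from the example above):

```powershell
# Remove Node2's vote so it cannot contribute toward quorum...
(Get-ClusterNode -Name Node2).NodeWeight = 0

# ...and restore the vote later
(Get-ClusterNode -Name Node2).NodeWeight = 1
```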

Windows Server 2012 has made significant improvements to the quorum mechanism, resulting in more resilient Failover Clusters. Dynamic Quorum Management takes the worry out of whether enough servers are active to achieve or maintain quorum if systems should fail or shut down. Multisite geoclusters also use weighted votes to specify which primary site should continue running in the event of intersite network failures.

Posted in TUTORIALS, Windows 2012 | 1 Comment »

Surprising features in SQL Server 2012

Posted by Alin D on February 6, 2013

After more than 25 years of working with Microsoft SQL Server, you’d think pretty much everything has been done at least once. I thought it would be a challenge to find anything surprising in a product with roots going back to the mid-1980s. But there have recently been two pretty major changes in SQL Server. Columnstore Indexes and the Hekaton in-memory enhancements offer massive, game-changing improvements in performance great enough to be called a surprise.

Columnstore Indexes

Columnstore Indexes were introduced in Microsoft SQL Server 2012, borrowing techniques originally developed for the PowerPivot in-memory store. A columnstore changes the way rows are stored: instead of traditional row-by-row storage, data is stored one column at a time, in a new layout that bunches roughly a million column values into one large blob structure. This structure allows for incredible data compression.

A new processing method, which Microsoft refers to as batch mode, also speeds up query processing in SQL Server 2012. As Dr. David DeWitt explained at SQL PASS in 2010, the closeness of successive column values works well with the nature of modern CPUs by minimizing data movement between the levels of cache and the CPU.

There is, however, one big limitation to the current implementation of Columnstore Indexes. They are read-only, which means that the tables they index will also be read-only. Any table that has a Columnstore Index will not allow any inserts, updates or deletes. To change the data, the Columnstore Index has to be dropped, the necessary changes made and the Columnstore Index rebuilt. This isn’t the kind of operation that’s friendly to an online transaction processing (OLTP) system, which is what makes it solely a data warehousing, online analytical processing (OLAP) feature. It also puts a premium on partitioning on any table with a Columnstore Index. In the next major release of SQL Server, Microsoft is promising enhancements that lift the updatability restriction and also allow the Columnstore to be the clustered index.

I’ve had a chance to try out a Columnstore Index on a few tables. What I’ve found is that it works great when the nature of the query dovetails with the columnstore. As a rule of thumb, the more columns in the table, the better the results, because SQL Server can avoid reading a large part of the index. In other situations, such as one narrow entity-attribute-value table that I work with frequently, the results are mixed. Summary queries that aggregate are much faster, to the tune of three seconds instead of 300, but queries that return all the columns of a small set of rows aren’t helped at all. I’ll be using Columnstore Indexes a lot, looking for those 100-times speed improvements.


Hekaton

While Columnstore Indexes make data warehouse applications faster, Hekaton is intended for the other end of the application spectrum: high-volume OLTP systems. Websites, exchanges, manufacturing systems and order-entry systems that execute large numbers of usually small transactions are Hekaton’s target. The Hekaton extensions to SQL Server are what is known as an “in-memory” database, but Microsoft has combined several technologies to pump transaction volume up to 50 times above what could previously be achieved. Hekaton will be included in the next release of SQL Server, which is not yet scheduled for shipment.

Hekaton starts with tables that are stored on disk but are pulled completely into system RAM. This means it will be limited to smaller tables, or will require a design that separates high-activity data from historical data. The requirement works well with the obvious server trend toward larger and larger amounts of RAM: it’s not uncommon to work with servers with 500 gigabytes or up to two terabytes of RAM, which is plenty of room for the active data in most applications. And the changes don’t stop with the approach to storage.

Code in a Hekaton system is written in good old T-SQL, just like we’ve used for years. But unlike traditional T-SQL, Hekaton code is compiled to native machine code and there’s no interpreter. T-SQL is great for data manipulation, but zipping through business logic isn’t one of its strengths; native compilation should speed things up significantly.

As servers have gained more cores, locks and latches, which are SQL Server’s mechanisms for synchronizing data access, become sources of contention as the system scales up. Hekaton bypasses these issues by implementing its own concurrency mechanism, based on optimistic transactions, that is optimized for an in-memory database. This allows many transactions to run simultaneously. However, the ability to mix Hekaton tables and other tables in structures such as joins may be limited, and there will be other restrictions as well.

By combining in-memory data handling, compiled code and a new concurrency control mechanism, the preliminary benchmarks for Hekaton look very promising. At SQL PASS 2012, I saw the development team demonstrate a 25-times improvement in transaction throughput. That’s 25 times, not just 25%. These are the kinds of surprising changes still in the cards for SQL Server, and I’m looking forward to working with it more in the near future.

Posted in SQL 2012 | Tagged: , , , | Leave a Comment »

PowerShell cmdlets for managing Windows Server 2012 clusters

Posted by Alin D on January 17, 2013

If you manage Windows Failover Clusters, you may notice the Cluster.exe CLI command is missing after you install the Windows Server 2012 Failover Clustering feature. For years, systems administrators have used Cluster.exe to script the creation of clusters, move failover groups, modify resource properties and troubleshoot cluster outages. Yes, the Cluster.exe command still exists in the Remote Server Administration Tools (RSAT), but it’s not installed by default and is considered a thing of the past.

Another thing you may soon notice in Windows Server 2012 is the PowerShell and Server Manager icons pinned to your taskbar. What you may not notice is that the default installation of the Windows Server 2012 operating system is now Server Core and contains more than 2,300 PowerShell cmdlets. Microsoft is sending a clear message that Windows servers should be managed just like any other data center server: remotely and through scripting. With Windows, that means PowerShell.

Fortunately, Windows Server Failover Clustering is no stranger to PowerShell. With Windows Server 2008 R2, 69 cluster-related PowerShell cmdlets assist with configuring clusters, groups and resources. This tip explores the new PowerShell cmdlets in Windows Server 2012 failover clusters.

With Windows Server 2012, a total of 81 failover cluster cmdlets can be used to manage components from PowerShell. New cluster cmdlets can perform cluster registry checkpoints for resources (Add-ClusterCheckpoint), monitor virtual machines for events or service failure (Add-ClusterVMMonitoredItem) and configure two new roles: Scale-Out File Servers (Add-ClusterScaleOutFileServerRole) and iSCSI Target Server (Add-ClusteriSCSITargetServerRole).
Windows PowerShell ISE

To list all the failover cluster cmdlets, use the PowerShell command "Get-Command -Module FailoverClusters" (Figure 1). I am using the built-in Windows PowerShell Integrated Scripting Environment (ISE) editor, which helps admins get familiar with all the failover clustering cmdlets.

In addition to the FailoverCluster cmdlets, Microsoft has several new modules of PowerShell cmdlets, including ClusterAwareUpdating with 17 new cmdlets, ClusterAware ScheduledTasks with 19 new cmdlets and iSCSITarget with 23 new cmdlets. There are many Cluster Aware Updating cmdlets, such as adding the CAU role (Add-CauClusterRole), getting an update report (Get-CauReport) or invoking a run to scan and install any new updates (Invoke-CauRun).
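A hedged sketch of that Cluster-Aware Updating workflow (the cluster name and self-updating schedule below are placeholders):

```powershell
# Add the CAU clustered role with a self-updating schedule
# (second Sunday of the month, in this illustrative configuration)
Add-CauClusterRole -ClusterName Win2012Clus1 -DaysOfWeek Sunday -WeeksOfMonth 2

# Kick off a scan-and-install updating run immediately
Invoke-CauRun -ClusterName Win2012Clus1

# Review the report from the most recent run
Get-CauReport -ClusterName Win2012Clus1 -Last
```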

Cluster-Aware scheduled tasks are new to Windows Server 2012 and the Task Scheduler now integrates with failover clusters. A scheduled task can run in one of three ways:

ClusterWide on all cluster nodes
AnyNode on a random node in the cluster
ResourceSpecific on a node that owns a specific cluster resource

The new ScheduledTasks cmdlets create a cluster-aware scheduled task. In the table, you can see the cmdlets that register, get and set Clustered Scheduled task properties.
PowerShell cmdlet                     Description
Register-ClusteredScheduledTask       Creates a new clustered scheduled task
Unregister-ClusteredScheduledTask     Deletes a clustered scheduled task
Set-ClusteredScheduledTask            Updates an existing clustered task
Get-ClusteredScheduledTask            Enumerates existing clustered tasks

To get an idea of how to use these PowerShell cmdlets, you first assign an action and trigger variable. The action variable specifies the program that is to be executed, such as the Windows calculator in the example below. The trigger variable sets up when the task is to be executed. The resulting cmdlets to schedule the task to run cluster-wide daily at 14:00 would look like this:

PS C:\> $action = New-ScheduledTaskAction -Execute C:\Windows\System32\calc.exe

PS C:\> $trigger = New-ScheduledTaskTrigger -Daily -At 14:00

PS C:\> Register-ClusteredScheduledTask -Action $action -TaskName ClusterWideCalculator -Description "Runs Calculator cluster wide" -TaskType ClusterWide -Trigger $trigger

TaskName         TaskType
——–         ——–
ClusterWideCa… ClusterWide

PS C:\>
Windows PowerShell Task Scheduler

While only PowerShell can be used to register, get/set and unregister Cluster-Aware scheduled tasks, you can use the Task Scheduler in Computer Management to view the cluster jobs (Figure 2).
iSCSI Target cmdlets

Finally, failover clusters can now be configured with a highly available iSCSI Target Server. This role allows you to create and serve iSCSI LUNs in a highly available fashion to clients across your enterprise. To add this new cluster role, use the cmdlet Install-WindowsFeature -Name FS-iSCSITarget-Server (or use Server Manager) to install the iSCSI Target Server role. Then, use the new cmdlet Add-ClusteriSCSITargetServerRole to create the iSCSI Target resource and associate it with shared storage. You can then leverage the new iSCSI Target cmdlets to configure iSCSI LUNs (Figure 3).
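Putting those two steps together (the resource name and the cluster disk name below are placeholders):

```powershell
# Install the iSCSI Target Server role on each cluster node
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Create the clustered iSCSI Target Server resource on shared storage
Add-ClusteriSCSITargetServerRole -Name iSCSITarget1 -Storage "Cluster Disk 1"
```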

There is no shortage of PowerShell cmdlets in Windows Server 2012 to help you manage your failover clusters. In addition to creating, configuring and troubleshooting your cluster, you can use PowerShell cmdlets to add the new Scale-Out File Server and iSCSI Target Server roles, clustered scheduled tasks and Cluster-Aware Updating.

Posted in Windows 2012 | Tagged: , , | Leave a Comment »
