Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Why companies should upgrade to Hyper-V 2012 R2

Posted by Alin D on April 9, 2014

Whether you’re running Windows 8.1 on your desktop and Hyper-V for a few test machines, or you’re deploying Windows 8.1 in a virtual desktop farm running Windows Server 2012 or Windows Server 2012 R2 on the back end, Hyper-V and Microsoft’s latest OS are good together — and here are some reasons why.

Generation 2 virtual machines

Windows 8.1 is designed to work within what Microsoft calls a “generation 2 virtual machine,” a new type of VM available exclusively under Hyper-V. This VM type essentially strips away all pretense of virtualizing legacy PC deployments. Generation 2 VMs are UEFI-based, rather than relying on a BIOS, and there are no emulated devices. These VMs boot directly from virtual SCSI and network adapters and also support secure boot — the preferred way to ensure only signed kernels are permitted to boot within the VM.

The only supported guest operating systems are 64-bit versions of Windows 8 and Windows Server 2012 and later. But Windows 8.1 is easier to set up within one of these VMs because it includes the generation 2 keyboard driver needed to type in a product key during setup. Otherwise, you’re flying blind, pecking at the on-screen keyboard or tearing your hair out.

Why choose a generation 2 VM over the existing VM standard? First, it gets rid of all of the goo that makes the VM look like a physical computer. Unlike in the late ’90s, people now virtualize first in most environments. Windows 8.1 has a deep knowledge of what it means to be on a VM, so there is no need for the trappings and the pretense of trying to look like a physical computer and the performance overhead that comes with that.
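Creating a generation 2 VM is a single cmdlet away. A minimal sketch using the Hyper-V PowerShell module — the VM name, VHDX path and switch name here are examples, not values from this article:

```powershell
# Create a generation 2 VM (name, path and switch name are examples)
New-VM -Name "Win81-Gen2" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VHDs\Win81-Gen2.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "External"

# Secure boot is enabled by default on generation 2 VMs; verify it
Get-VMFirmware -VMName "Win81-Gen2" | Select-Object SecureBoot
```

Note that the generation is fixed at creation time — an existing generation 1 VM can’t be converted in place.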

Deduplication for VDI deployments

If you want to run Windows 8.1 in a virtual desktop infrastructure (VDI) environment, with scores of hosts running Windows 8.1 guests in VMs that users log into as their daily drivers, then the Windows Server 2012 R2 deduplication feature can save an enormous amount of space while increasing performance.

Instead of storing multiple copies of Windows 8.1 in your guest VMs’ disk drives, which is simply wasted space, Windows Server 2012 R2 supports deduplication. It reads and then optimizes the files on a volume by storing only one copy of a file and replacing the duplicate copies with “pointers” to that single stored instance.

This deduplication feature supports open VHD or VHDX files. It can deduplicate files on a volume with running VMs, so you don’t have to take your VDI farm down to begin using it. While the feature was first introduced in Windows Server 2012 Gold, the performance of the optimizer algorithm has improved, and it completes faster than it did in the previous release.

What’s the result? When the deduplication is finished, Windows can read those optimized files faster than if they were not deduplicated. And with a VDI deployment as opposed to just a typical file server, you can gain space savings up to 90% with minimal effect on performance.
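You can check what deduplication is actually buying you on a given volume with the dedup cmdlets; a quick sketch, assuming E: is the volume holding your VDI virtual disks:

```powershell
# Inspect space savings on a deduplicated volume (E: is an example)
Get-DedupVolume -Volume "E:" | Select-Object Volume, SavedSpace, SavingsRate

# More detail, including the state of the last optimization job
Get-DedupStatus -Volume "E:"
```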

Storage Quality of Service

Network Quality of Service (QoS) allows administrators to define caps for certain types of network traffic to ensure enough bandwidth exists for other activities and no one type of traffic sucks up the entire network pipe. Similarly, Hyper-V in Windows 8.1 and Windows Server 2012 R2 supports storage QoS. This feature lets you restrict disk throughput for overactive and disruptive VMs, which is great if you have a long-running process in one VM and you don’t want the overall I/O performance of your host machine dragged down. Since this feature is dynamically configurable, you can even adjust the QoS settings while the VM is running so you don’t interrupt the workload of a given VM.
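The cap is set per virtual hard disk. As a sketch, assuming a noisy VM named BatchVM with its disk on SCSI controller 0, location 0 (both hypothetical), the limit can be applied — and later adjusted — while the VM runs:

```powershell
# Cap a disruptive VM's virtual disk at 500 normalized IOPS
Set-VMHardDiskDrive -VMName "BatchVM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500
```

A matching -MinimumIOPS parameter exists to reserve throughput for a VM that must not be starved.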

Online VHDX resize

Have you ever had a VM that runs out of disk space? If you set up the VM with a fixed-size virtual disk rather than a dynamically expanding disk, this could pose a problem. However, the new Hyper-V in Windows 8.1 and Windows Server 2012 R2 allows you to increase and decrease the size of virtual hard disks of a VM while the VM is running — a hot resize, if you will. The VM in question can be running any guest OS, so there are no limitations to running Windows XP or even Linux with this feature. But the virtual hard disk file must be in the newer format (VHDX) as opposed to the older (more popular) VHD format.
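The resize itself is one cmdlet. A sketch, with a hypothetical path — for a hot resize the VHDX must be attached to the running VM’s SCSI controller:

```powershell
# Grow a running VM's data disk to 100 GB
Resize-VHD -Path "D:\VHDs\DataDisk.vhdx" -SizeBytes 100GB
```

After growing the virtual disk, extend the partition inside the guest (Disk Management or Resize-Partition) to use the new space.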

Enhanced VM connect

Hyper-V in Windows 8.1 and Windows Server 2012 R2 offers what Microsoft calls “enhanced virtual machine connect,” which is the ability to use the remote desktop protocol to connect to a VM — even if the network within the VM is down. Hyper-V uses VMBus, the internal communications channel for connecting VMs to the hypervisor. It then transmits RDP over the VMBus independent of the network connection. As part of this enhanced mode, you can drag and drop files between the host and the VM, which makes use of the clipboard sharing capabilities, and you can redirect local resources like smart cards, printers and USB devices right over that VMBus connection. This makes it easier to perform troubleshooting tasks and simple administration. This enhanced mode is enabled by default if you’re running Hyper-V on Windows 8.1, but disabled by default if you are running it on Windows Server 2012 R2.
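Since the feature is off by default on the server SKU, turning it on for a Windows Server 2012 R2 host is a one-liner:

```powershell
# Enhanced session mode policy is disabled by default on Server 2012 R2
Set-VMHost -EnableEnhancedSessionMode $true

# Confirm the host policy
Get-VMHost | Select-Object EnableEnhancedSessionMode
```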

Posted in Windows 2012 | Leave a Comment »

SQL Server 2012 contained database from A to Z

Posted by Alin D on July 23, 2013

Of the many features introduced in SQL Server 2012, the SQL Server 2012 contained database could prove one of the most valuable. Unlike the typical SQL Server database, a SQL Server 2012 contained database is one that’s isolated from the SQL Server instance on which it resides, as well as from other databases on that instance. Such isolation makes managing databases easier, particularly when they’re being moved to a different instance or implemented in a failover cluster.

Prior to SQL Server 2012, all SQL Server databases were considered non-contained. Metadata specific to the database resided outside that database. The server’s default collation could impact queries against the database. And user authentication within the database was tied to the server-level logins defined on the instance.

In SQL Server 2012, all databases are still, by default, non-contained. However, you can now configure any non-system database as contained. That way, metadata will reside within the database it describes. In addition, because the collation model has been greatly simplified, all user data and temporary data will use the default database collation, and all other objects (metadata, temporary metadata, variables, among others) will use the catalog collation, which is Latin1_General_100_CI_AS_WS_KS_SC for all contained databases.

The most significant change that the SQL Server 2012 contained database brings is the “contained user,” a user account created specifically for the contained database. The account is not tied to a server-level login and provides access to the contained database only, without granting permission to other databases or to the instance as a whole.

SQL Server supports two types of contained users:

  • Database user with password: A local database account created with a username and password that are authenticated by the database.
  • Windows principal: A local database account based on a Windows user or group account but authenticated by the database.

You can add either one or both account types to a contained database. In fact, you can still add login-based accounts as well. That’s because SQL Server 2012 supports what are referred to as “partially contained databases,” rather than fully contained ones.

In a fully SQL Server 2012 contained database, no dependencies, such as a Service Broker route or login-based user account, can exist outside the database. But a partially contained database can support both contained and non-contained elements. That means, for example, that you can provide access to the database either through login-based accounts, contained user accounts or both. However, you can still achieve the equivalent of a fully contained database by eliminating any non-contained elements. (Whether SQL Server will eventually support fully contained databases is yet to be seen.)

The SQL Server 2012 contained database

After you’ve isolated your SQL Server 2012 contained database, you can easily move it from one SQL Server instance to another, without having to move a set of SQL Server logins. The contained database stores all the information it needs within that database. This process also makes it easier to set up your high-availability clusters. Because users connect directly to the database, they can easily connect to a second database if failover occurs.

Even if you’re not moving or clustering your databases, the SQL Server 2012 contained database can make user account management easier because you’re not trying to administer both SQL Server logins and database user accounts. You grant access to specific users to specific databases, without those users being able to access anything outside them.

Yet all this good news doesn’t come without a few downsides. For example, a contained database cannot use replication, change tracking or change data capture. And a contained database can raise security concerns. For instance, users granted the ALTER ANY USER permission can create user accounts and grant access to the database. In addition, the password hashes associated with contained user accounts are stored within the database, making them more susceptible to dictionary attacks. Plus, the contained user accounts cannot use Kerberos authentication, which is available only to the SQL Server login accounts that use Windows Authentication.

Despite their limitations, if contained databases are carefully implemented, you can sidestep some of the security issues and reap the benefits that database isolation provides. The SQL Server contained database offers a level of portability and manageability not seen before in SQL Server. Moving databases is easier. Failover is easier. Managing access is easier. Indeed, the contained database feature in SQL Server 2012 could prove beneficial for any organization looking to streamline its operations.

Configuring and implementing a SQL Server contained database

Before you can configure a SQL Server contained database, you must enable containment on your instance of SQL Server 2012. To do so, run the sp_configure system stored procedure and set the contained database authentication option to 1, as shown in the following T-SQL script:

EXEC sp_configure 'contained database authentication', 1;
GO

RECONFIGURE;
GO

As you can see, you must also run the RECONFIGURE statement for your setting to be implemented. Once you’ve done so, you’re ready to set up a SQL Server 2012 contained database. In your database definition, include the CONTAINMENT clause and set the containment type to PARTIAL, as shown in the following example:

USE master;
GO

CREATE DATABASE ContainedDB
CONTAINMENT = PARTIAL;
GO

You can just as easily include the CONTAINMENT clause in an ALTER DATABASE statement. In either case, once you’ve set up the database to be contained, you’re ready to add a contained user. In the following T-SQL script, the CREATE USER statement defines the user1 account and assigns a password and default schema:

USE ContainedDB;
GO

CREATE USER user1
WITH PASSWORD = N'PaSSw()rd',
DEFAULT_SCHEMA = dbo;
GO

EXEC sp_addrolemember 'db_owner', 'user1';
GO

Notice that the CREATE USER statement is followed by the sp_addrolemember stored procedure, which assigns the user to the db_owner role.

As the examples have demonstrated, you need to take only three steps to set up a SQL Server 2012 contained database: Enable containment, configure the database for containment, and create the contained user. And if you don’t want to script these settings, you can instead use the SQL Server Management Studio (SSMS) interface to configure them.

After you set up your contained environment, you should try to connect to the contained database by using the contained user to test the connection. In Object Explorer in SSMS, click the Connect button and then click Database Engine. When the Connect to Server dialog box appears, enter the instance name and credentials for the contained user.

If you click the Connect button at this point, you’ll receive an error. That’s because you must first specify the target database when connecting with a contained user account. (The database must be part of the connection string.) To add the database, click the Options button and type the name of the target database into the Connect to database text box.

Now when you click Connect, you’ll be connected to the specified instance, with access only to the contained database. Notice that the user and database names are included with the instance name. Also notice that only the ContainedDB database is included in the list of databases and that user1 is listed as one of the user accounts.
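The same rule applies to applications connecting outside SSMS: the contained database must be named in the connection string or the authentication fails. A minimal example (the server name SQLSRV1 is hypothetical; the credentials are from the script above):

```
Server=SQLSRV1;Database=ContainedDB;User ID=user1;Password=PaSSw()rd;
```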

When working with contained databases, you’ll often want to ensure they’re as fully contained as possible. SQL Server 2012 provides two handy tools for working with containment:

  • sys.dm_db_uncontained_entities: A system view that lists any non-contained objects in the database. You can use this view to determine what items to address to ensure your database is as contained as possible.
  • sp_migrate_user_to_contained: A system stored procedure that converts a login-based user to a contained user. The stored procedure removes any dependencies between the database user and the login accounts.
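A sketch of how the two tools fit together, reusing the user1 account from the earlier example (the parameter values are the documented options for the procedure):

```sql
USE ContainedDB;
GO

-- List anything that still ties this database to the instance
SELECT class_desc, feature_name
FROM sys.dm_db_uncontained_entities;
GO

-- Convert a login-based user to a contained user, keeping its name
-- and disabling the now-unneeded server login
EXEC sp_migrate_user_to_contained
    @username = N'user1',
    @rename = N'keep_name',
    @disablelogin = N'disable_login';
GO
```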

By using these tools, you can achieve a status of full containment, making it easier to manage the database going forward.

Posted in SQL, TUTORIALS | Tagged: | Leave a Comment »

What you don’t know about Hyper-V virtual switches

Posted by Alin D on April 6, 2013

One of the most significant improvements in Windows Server 2012 is the presence of a virtual switch at no additional cost. Below are some things you might not know about the extensible switch.

Replacing the virtual switch within Windows Server 2012 Hyper-V with a Cisco switch

Perhaps replace isn’t the right word, but you can certainly augment the virtual switch to the point of complete transformation. Cisco is offering the Nexus 1000V virtual switch to install alongside the virtual switch in Windows Server 2012, turning it into a fully managed, standards-compliant switch with a console — one that even supports software-defined networking (SDN) and the Cisco Open Network Environment. You can do this with competitor VMware, but at an additional cost; you get this capability built into the underlying operating system license with Hyper-V.

There are three supported types of extensibility with the switch

Third parties and in-house development teams can create switch extensions to extend the functionality of the switch, as Cisco did. You can create capturing extensions that read and inspect traffic but are unable to modify or drop packets. You can also create filtering extensions that inspect and read traffic and can drop, insert and modify packets directly in the transmission stream; firewall extensions for the virtual switch typically won’t use this type of filter. And finally, you can create forwarding extensions that direct packets to different destinations, as well as capture and filter traffic. The capabilities of each type of extension build on one another.

The extensible switch supports access control lists via ports

This is really useful in multi-tenant deployments, where there are hosted virtual machines (VMs) for a variety of clients on the same set of machines, or for organizations with Chinese wall-type regulations that require data and access segregation. These companies can now use the same type of security right in the Hyper-V virtual network that has been possible in physical switches and network security devices. The Hyper-V virtual switch can filter port traffic based on IP addresses or ranges or via MAC addresses to identify the specific virtual network interface cards involved and ensure that networks are isolated. This also works with the isolated or private VLAN feature that lets the administrator set up isolated communities of tenants by securing traffic over individual VLANs within the virtual network.
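Port ACLs are configured per virtual network adapter with the Add-VMNetworkAdapterAcl cmdlet. A sketch — VM name and subnet are examples, not values from this article:

```powershell
# Restrict a tenant VM's port to its own subnet
Add-VMNetworkAdapterAcl -VMName "TenantA-VM" -RemoteIPAddress "10.0.1.0/24" `
    -Direction Both -Action Allow
Add-VMNetworkAdapterAcl -VMName "TenantA-VM" -RemoteIPAddress "0.0.0.0/0" `
    -Direction Both -Action Deny

# Review the ACL entries on the port
Get-VMNetworkAdapterAcl -VMName "TenantA-VM"
```

ACLs can also match on MAC address, and the Meter action counts traffic instead of allowing or denying it.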

There are new trunking tools within the Hyper-V virtual switch

There is a set of traffic-routing capabilities that can run within a VM — making it like an appliance — as a switch extension (as previously described) or as a service on the hypervisor host. The designated monitoring port copies traffic to the specified VM. When you set the “trunk mode” on a given virtual switch port, all traffic on the virtual network is routed to that VM, making it sit “in front” of the traffic. Traffic is then distributed to other VMs. You can also create a capture extension instance that copies the traffic to a given service for other types of inspection or analysis, and you can set up another extension to tunnel traffic to another network destination as well.
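The monitoring-port behavior described above maps to the port mirroring setting on the virtual network adapter. A sketch with hypothetical VM names:

```powershell
# Copy all traffic from a production VM's port...
Set-VMNetworkAdapter -VMName "WebVM" -PortMirroring Source

# ...to the port of a monitoring appliance VM
Set-VMNetworkAdapter -VMName "MonitorVM" -PortMirroring Destination
```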

You can manage the Hyper-V extensible virtual switch as an independent device from within System Center 2012

If you have deployed System Center 2012 Service Pack 1, you can add a virtual switch extension manager right to the Virtual Machine Manager console to monitor and manage the settings, features and capabilities of your VMs and the switch from within a single console. You can also do this with other virtual switch extension vendors like Cisco, but you need to first obtain provider software from the vendor, install it on the Virtual Machine Manager server and restart the service.

Posted in Windows 2012 | Leave a Comment »

Windows 2012 Cluster: what you must know

Posted by Alin D on March 24, 2013

Like it or not, Active Directory is a vital component of Windows Failover Clusters and can adversely affect its stability. Have you ever experienced the dreaded NETLOGON event, indicating that no domain controllers could be contacted, so your cluster fails to start? How about being unable to create a cluster or virtual servers due to restrictive organizational unit permissions? Microsoft has recognized these and other common AD problems and made significant efforts to fix these shortcomings in Windows Server 2012.

Cluster startup without Active Directory

Perhaps one of the most catastrophic events a cluster can face is when it can’t contact a domain controller (DC) during formation. A different scenario leading to this same problem occurs when you attempt to virtualize your DCs as virtual machines in a Windows failover cluster. The cluster must contact a DC to start, but the virtual DC can’t start until the cluster does. This reliance on AD for a cluster to form has been eliminated in Windows Server 2012.

For this function to work, you first create a Windows Server 2012 cluster by contacting a DC, which stores the cluster’s authentication data in AD along with any cluster members. Existing clusters can then start up without having to first contact a DC for authentication. Prior to Windows Server 2012, it was supported, although not recommended, to run the AD Domain Services role on cluster members to make them more resilient to AD connectivity issues. This is no longer necessary, nor is it supported to run domain controllers as cluster nodes, as Microsoft documents in KB 281662.

Flexible OU administration

Another AD shortcoming that has been addressed in Windows Server 2012 is the inability to specify in which organizational unit (OU) the computer objects for the cluster will be created. In the past, when a cluster was created, the Cluster Name Object (CNO) was created in the default Computers container or in the OU where the cluster members’ computer objects reside. This prevented admins from delegating control of specific OUs for the purpose of managing failover clusters without going through painful prestaging efforts.

In Windows Server 2012, both the Create Cluster Wizard and the PowerShell cmdlet New-Cluster allow you to specify in which OU the CNO will be created. In addition, any Virtual Computer Objects (VCO) for the network names associated with highly available cluster roles will be created in the same OU. The user account that creates the cluster must have the Create Computer Objects permission in the specified OU. In turn, the newly created CNO must have Create Computer Objects permission to create VCOs. You can move all these computer objects to a different OU at any point — without disrupting the cluster. Keep in mind the custom OU must be created before you create your cluster.

Unfortunately, the syntax for specifying a particular OU where the CNO should be created isn’t intuitive in the Create Cluster Wizard or the corresponding PowerShell New-Cluster cmdlet. The Create Cluster Wizard will create appropriate syntax for specifying the distinguished name of the cluster, along with the OU where it will reside. In our example, the name of the cluster is Win2012Clus1, and its CNO will be created in the ClusterServers OU in the fictitious windows-scripting.info domain.

Next, look at the syntax for creating a cluster using the PowerShell New-Cluster cmdlet. In this example, the command creates a cluster with the name Win2012Clus1, placing the CNO in the ClusterServers OU in the fictitious domain using a static IP address of 192.168.0.252.
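A sketch of that command — the node names are hypothetical, while the cluster name, OU and domain follow the article’s example:

```powershell
# The -Name parameter takes a distinguished name to control OU placement
New-Cluster -Name "CN=Win2012Clus1,OU=ClusterServers,DC=windows-scripting,DC=info" `
    -Node Node1, Node2 -StaticAddress 192.168.0.252
```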

After you create the Windows failover cluster, use Active Directory Users and Computers to view and manage the new CNO placed in the custom OU called ClusterServers. Any new cluster roles that are configured will create their VCO in the same OU.

Additional cluster and Active Directory enhancements

With Windows Server 2012, you can have a failover cluster located in a remote branch office or behind a DMZ with a read-only domain controller (RODC). While the CNO and VCOs must be created beforehand on a writable domain controller (RWDC), as described by Microsoft, the server supports the configuration.

Finally, AD Cluster computer objects are now created with the Protect Object from Accidental Deletion flag to ensure automated stale object scripts don’t delete them. If the account that creates the cluster doesn’t have this right for the OU, it will still create the object, but won’t protect it from accidental deletion. A system event ID 1222 will be logged to alert you, and you can follow Microsoft KB 2770582 to fix the issue.

Microsoft has taken several steps in Windows Server 2012 to address the AD pitfalls that Windows failover clusters have endured over the years. Some of the top integration enhancements include booting clusters without AD, more flexible OU administration, support for clusters in Branch Offices and DMZs with RODCs and protecting cluster AD objects from deletion.

 

Posted in Windows 2012 | Leave a Comment »

What you need to know about Deduplication in Windows Server 2012

Posted by Alin D on March 8, 2013

Talk to most administrators about deduplication and the usual response is: Why? Disk space is getting cheaper all the time, with I/O speeds ramping up along with it. The discussion often ends there with a shrug.

But the problem isn’t how much you’re storing or how fast you can get to it. The problem is whether the improvements in storage per gigabyte or I/O throughputs are being outpaced by the amount of data being stored in your organization. The more we can store, the more we do store. And while deduplication is not a magic bullet, it is one of many strategies that can be used to cut into data storage demands.

Microsoft added a deduplication subsystem feature in Windows Server 2012, which provides a way to perform deduplication on all volumes managed by a given instance of Windows Server. Instead of relegating deduplication duty to a piece of hardware or a software layer, it’s done in the OS on both a block and file level — meaning that many kinds of data (such as multiple instances of a virtual machine) can be successfully deduplicated with minimal overhead.

If you plan to implement Windows Server 2012 deduplication technology, be sure you understand these seven points:

1. Deduplication is not enabled by default

Don’t upgrade to Windows Server 2012 and expect to see space savings automatically appear. Deduplication is treated as a file-and-storage service feature, rather than a core OS component. To that end, you must enable it and manually configure it in Server Roles | File And Storage Services | File and iSCSI Services. Once enabled, it also needs to be configured on a volume-by-volume basis.
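Both steps can also be scripted. A sketch, assuming E: is the volume you want to deduplicate:

```powershell
# Add the deduplication role service (part of File and Storage Services)
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the volume, then kick off a first optimization pass
Enable-DedupVolume -Volume "E:"
Start-DedupJob -Volume "E:" -Type Optimization
```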

2. Deduplication won’t burden the system

Microsoft put a fair amount of thought into setting up deduplication so it has a small system footprint and can run even on servers that have a heavy load. Here are three reasons why:

a. Content is only deduplicated after n number of days, with n being 5 by default, but this is user-configurable. This time delay keeps the deduplicator from trying to process content that is currently and aggressively being used or from processing files as they’re being written to disk (which would constitute a major performance hit).

b. Deduplication can be constrained by directory or file type. If you want to exclude certain kinds of files or folders from deduplication, you can specify those as well.

c. The deduplication process is self-throttling and can be run at varying priority levels. You can set the actual deduplication process to run at low priority and it will pause itself if the system is under heavy load. You can also set a window of time for the deduplicator to run at full speed, during off-hours, for example.

This way, with a little admin oversight, deduplication can be put into place on even a busy server and not impact its performance.
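Each of the three controls above has a matching cmdlet. A sketch — the volume, folder and schedule name are examples:

```powershell
# (a) Age threshold and (b) exclusions by folder or file type
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 5 `
    -ExcludeFolder "E:\Scratch" -ExcludeFileType "tmp"

# (c) A full-speed optimization window during off-hours
New-DedupSchedule -Name "NightlyOptimize" -Type Optimization `
    -Start 23:00 -DurationHours 6 -Priority High
```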

3. Deduplicated volumes are ‘atomic units’

‘Atomic units’ mean that all of the deduplication information about a given volume is kept on that volume, so it can be moved without injury to another system that supports deduplication. If you move it to a system that doesn’t have deduplication, you’ll only be able to see the nondeduplicated files. The best rule is not to move a deduplicated volume unless it’s to another Windows Server 2012 machine.

4. Deduplication works with BranchCache

If you have a branch server also running deduplication, it shares data about deduped files with the central server and thus cuts down on the amount of data needed to be sent between the two.

5. Backing up deduplicated volumes can be tricky

A block-based backup solution — e.g., a disk-image backup method — should work as-is and will preserve all deduplication data.

File-based backups will also work, but they won’t preserve deduplication data unless they’re dedupe-aware. They’ll back up everything in its original, discrete, undeduplicated form. What’s more, this means backup media should be large enough to hold the undeduplicated data as well.

The native Windows Server Backup solution is dedupe-aware, although any third-party backup products for Windows Server 2012 should be checked to see if deduplication awareness is either present or being added in a future revision.

6. More is better when it comes to cores and memory

Microsoft recommends devoting at least one CPU core and 350 MB of free memory to process one volume at a time, with around 100 GB of storage processed in an hour (without interruptions) or around 2 TB a day. The more parallelism you have to spare, the more volumes you can simultaneously process.

7. Deduplication mileage may vary

Microsoft has crunched its own numbers and found that the nature of the deployment affected the amount of space savings. Multiple OS instances on virtual hard disks (VHDs) exhibited a great deal of savings because of the amount of redundant material between them; user folders, less so.

In its rundown of what are good and bad candidates for deduping, Microsoft notes that live Exchange Server databases are actually poor candidates. This sounds counterintuitive; you’d think an Exchange mailbox database might have a lot of redundant data in it. But the constantly changing nature of data (messages being moved, deleted, created, etc.) offsets the gains in throughput and storage savings made by deduplication. However, an Exchange Server backup volume is a better candidate since it changes less often and can be deduplicated without visibly slowing things down.

How much you actually get from deduplication in your particular setting is the real test for whether to use it. Therefore, it’s best to start provisionally, perhaps on a staging server where you can set the “crawl rate” for deduplication as high as needed, see how much space savings you get with your data and then establish a schedule for performing deduplication on your own live servers.

Posted in Windows 2012 | Tagged: , | Leave a Comment »

Dynamic Witness improves Windows 2012 Cluster High Availability

Posted by Alin D on February 21, 2013

Determined to make Windows Failover Clusters as resilient as possible, Microsoft has once again made significant improvements to its quorum mechanisms in Windows Server 2012. The Dynamic Quorum Management option allows the cluster to dynamically adjust the quorum (or majority) of votes required for the cluster to continue running. This prevents the loss of quorum when nodes fail or shut down sequentially, allowing the cluster to continue running with less than a majority of active nodes.

In addition to dynamic quorum, multisite geoclusters now benefit from the ability to specify which nodes receive votes and which ones don’t. This allows you to bias a particular site (e.g., the primary site) to have the controlling votes, or nodes, to maintain quorum. This also prevents a split-brain scenario from occurring as the secondary site tries to update the cluster database when the primary site is down.

Configuring Dynamic Quorum in Windows Server 2012

The principle behind quorum in a failover cluster environment is to ensure that only a majority of nodes can form and participate in a cluster. This prevents a second subset of nodes from forming a separate cluster that can access the same shared resources in an uncoordinated fashion, which can lead to corruption. When nodes are shut down or fail, there are fewer active nodes remaining to maintain the static quorum value of votes needed for the cluster to function. The new Dynamic Quorum Management dynamically adjusts the votes of remaining active nodes to ensure that quorum can be maintained in the event of yet another node failure or shutdown.

There are a few requirements that must be met before the Dynamic Quorum mechanism kicks in. First, Dynamic Quorum must be enabled, which it is, by default, in Windows Server 2012. The Failover Cluster Manager can be used to view or modify the Dynamic Quorum option by running the Configure Cluster Quorum Wizard. Start the wizard by highlighting the cluster in the left-hand pane, right-clicking on it, selecting More Actions and then choosing Configure Cluster Quorum Settings.

[Figure: Configure Cluster Quorum Wizard — configure witness]

The Quorum Wizard prompts you to select from several different quorum configurations depending on your environment (Typical, Add/Change or Advanced). By default, the cluster will use the typical settings for your configuration to establish the quorum management options. You can also add or change the quorum witness if one was selected during the installation process.

To view or change the Dynamic Quorum Management option, use the Advanced quorum configuration option, as seen above. Stepping through the Quorum Wizard, it will prompt you to Configure Quorum Management. This is where you can view or change the Dynamic Quorum option.

[Figure: Configure Quorum Management — allow dynamic quorum management option]

You can also view or modify the cluster’s Dynamic Quorum setting by using PowerShell cmdlets. The Get-Cluster cmdlet reveals the current Dynamic Quorum setting (0=disabled, 1=enabled). You can then enable Dynamic Quorum by assigning the output of Get-Cluster to the variable $cluster and setting its DynamicQuorum property to a value of 1.
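In practice, run from an elevated PowerShell session on a cluster node:

```powershell
# Reveal the current setting (0 = disabled, 1 = enabled)
(Get-Cluster).DynamicQuorum

# Enable Dynamic Quorum
$cluster = Get-Cluster
$cluster.DynamicQuorum = 1
```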

With Dynamic Quorum enabled, the next condition that must be met is that the cluster must be up and running and currently sustaining quorum based on the initial cluster configuration. The final condition for Dynamic Quorum to work is that any subsequent node failures or shutdowns must be experienced sequentially — not with multiple nodes going down at the same time. A lengthier cluster regroup operation would occur if multiple nodes exited the cluster simultaneously.

Dynamic Weight

You can use PowerShell to view the number of votes and observe the inner workings of the Dynamic Quorum mechanism. By default, each server in the cluster gets a single vote, or NodeWeight. When Dynamic Quorum is enabled, an additional property called DynamicWeight is used to track a server’s dynamic vote toward quorum. The cluster will adjust a node’s dynamic weight to zero, if necessary, to avoid losing quorum, should another node exit the cluster. The PowerShell cmdlet reveals the NodeWeight and DynamicWeight for a two-node cluster.
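A quick way to watch those values, again assuming a session on a cluster node, is to list both properties side by side:

```powershell
# Compare each node's static vote (NodeWeight) with its current dynamic vote (DynamicWeight)
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight -AutoSize
```

Shutting nodes down one at a time and re-running this command shows the cluster flipping DynamicWeight values to zero as it adjusts quorum downward.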

PowerShell cmdlet Get-ClusterNode

Dynamic Quorum allows cluster nodes to be individually shut down or fail to the point where just a single node is left functioning (“last man standing”). Just as quorum is dynamically adjusted downward as nodes fail or are shut down in the cluster, quorum is adjusted upward as nodes are rebooted back into the cluster.

Using weighted votes to assign nodes

The other major enhancement to the quorum mechanism in Windows Server 2012 is the ability to specify which nodes in a cluster receive a vote. As mentioned, all nodes receive a vote that contributes toward quorum by default. In multisite geocluster configurations, it may be beneficial to give nodes in the primary site a vote to ensure they keep running in the event of a network failure between sites. Nodes in the secondary site can be configured with zero votes so they cannot form a cluster.

You can use the Quorum Wizard (Advanced Quorum Configurations) to configure whether a node receives a vote. In the screen below, Node1 has been given a vote and Node2 has not.

Quorum Wizard


Alternatively, you can use PowerShell to specify whether a node receives a vote. Use the Get-ClusterNode cmdlet to retrieve the node, then set its NodeWeight property back to 1 so that Node2 receives a vote again.
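As a sketch, with a hypothetical node name of Node2:

```powershell
# Retrieve the node and restore its vote by setting NodeWeight back to 1
(Get-ClusterNode -Name "Node2").NodeWeight = 1

# Confirm the change
Get-ClusterNode | Format-Table Name, NodeWeight -AutoSize
```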

Windows Server 2012 has made significant improvements to the quorum mechanism, resulting in more resilient failover clusters. Dynamic Quorum Management takes the worry out of whether enough servers are active to achieve or maintain quorum should systems fail or shut down. Multisite geoclusters can also use weighted votes to specify which site should continue running in the event of an intersite network failure.

Posted in TUTORIALS, Windows 2012 | 1 Comment »

Surprising features in SQL Server 2012

Posted by Alin D on February 6, 2013

After more than 25 years of working with Microsoft SQL Server, you’d think pretty much everything has been done at least once. I thought it would be a challenge to find anything surprising in a product with roots going back to the mid-1980s. But there have recently been two pretty major changes in SQL Server: Columnstore Indexes and the Hekaton in-memory enhancements both offer massive, game-changing performance improvements great enough to be called a surprise.

Columnstore Indexes

Columnstore Indexes were introduced in Microsoft SQL Server 2012, borrowing techniques originally developed for the PowerPivot in-memory store. A Columnstore changes the way rows are stored: instead of traditional row-by-row storage, data is stored one column at a time, in a layout that bunches roughly a million column values into one large blob structure. This structure allows for incredible data compression.

A new method of processing, which Microsoft refers to as batch mode, also speeds up query processing in SQL Server 2012. As Dr. David DeWitt explained at the PASS Summit in 2010, the closeness of successive column values works well with the nature of modern CPUs by minimizing data movement between the levels of cache and the CPU.

There is, however, one big limitation to the current implementation of Columnstore Indexes. They are read-only, which means that the tables they index will also be read-only. Any table that has a Columnstore Index will not allow any inserts, updates or deletes. To change the data, the Columnstore Index has to be dropped, the necessary changes made and the Columnstore Index rebuilt. This isn’t the kind of operation that’s friendly to an online transaction processing (OLTP) system, which is what makes it solely a data warehousing, online analytical processing (OLAP) feature. It also puts a premium on partitioning on any table with a Columnstore Index. In the next major release of SQL Server, Microsoft is promising enhancements that lift the updatability restriction and also allow the Columnstore to be the clustered index.

I’ve had a chance to try out a Columnstore Index on a few tables. What I’ve found is that it works great when the nature of the query dovetails with the Columnstore. As a rule of thumb, the more columns in the table, the better the results, because SQL Server can avoid reading a large part of the index. In other situations, such as one narrow entity-attribute-value table that I work with frequently, the results are mixed. Summary queries that aggregate are much faster, to the tune of three seconds instead of 300, but queries that return all the columns of a small set of rows aren’t helped at all. I’ll be using Columnstore Indexes a lot, looking for those 100-fold speed improvements.

Hekaton

While Columnstore Indexes make data warehouse applications faster, Hekaton is intended for the other end of the application spectrum: high-volume OLTP systems. Websites, exchanges, manufacturing systems and order-entry systems that execute large numbers of usually small transactions are Hekaton’s target. The Hekaton extensions to SQL Server are what is known as an “in-memory” database, but Microsoft has combined several technologies to pump transaction volume up to 50 times above what could previously be achieved. Hekaton will be included in the next release of SQL Server, which is not yet scheduled for shipment.

Hekaton starts with tables that are stored on disk but are pulled completely into system RAM. This means Hekaton will be limited to smaller tables, or will require a design that separates high-activity data from historical data. This requirement works well with the obvious server trend toward larger and larger amounts of RAM. It’s not uncommon to work with servers with 500 gigabytes or up to two terabytes of RAM, which is plenty of room for the active data in most applications. The changes don’t stop with the approach to storage.

Code in a Hekaton system is written in good old T-SQL, just like we’ve used for years. But unlike traditional T-SQL, Hekaton code is compiled to native machine code and there’s no interpreter. T-SQL is great for data manipulation, but zipping through business logic isn’t one of its strengths; native compilation should speed things up significantly.

As servers have gained more cores, locks and latches, which are SQL Server’s mechanisms for synchronizing data access, have become contention issues as systems scale up. Hekaton bypasses these issues by implementing its own concurrency control based on optimistic transactions that are optimized for an in-memory database. This allows many transactions to run simultaneously. However, the ability to mix Hekaton tables and other tables in structures such as a join may be limited, and there will be other restrictions as well.

By combining in-memory data handling, compiled code and a new concurrency control mechanism, the preliminary benchmarks for Hekaton look very promising. At the PASS Summit in 2012, I saw the development team demonstrate a 25-times throughput improvement in transaction volume. That’s 25 times, not just 25%. These are the kinds of surprising changes still in the cards for SQL Server. I’m looking forward to working with SQL Server more in the near future.

Posted in SQL 2012 | Tagged: , , , | Leave a Comment »

PowerShell cmdlets for managing Windows Server 2012 clusters

Posted by Alin D on January 17, 2013

If you manage Windows Failover Clusters, you may notice the Cluster.exe CLI command is missing after you install the Windows Server 2012 Failover Clustering feature. For years, systems administrators have used Cluster.exe to script the creation of clusters, move failover groups, modify resource properties and troubleshoot cluster outages. Yes, the Cluster.exe command still exists in the Remote Server Administration Tools (RSAT), but it’s not installed by default and is considered a thing of the past.

Another thing you may soon notice in Windows Server 2012 is the PowerShell and Server Manager icons pinned to your taskbar. What you may not notice is that the default installation of the Windows Server 2012 operating system is now Server Core, and that it contains more than 2,300 PowerShell cmdlets. Microsoft is sending a clear message that Windows servers should be managed just like any other data center server, both remotely and through scripting. With Windows, that means PowerShell.

Fortunately, Windows Server Failover Clustering is no stranger to PowerShell. With Windows Server 2008 R2, 69 cluster-related PowerShell cmdlets assist with configuring clusters, groups and resources. This tip explores the new PowerShell cmdlets in Windows Server 2012 failover clusters.

With Windows Server 2012, a total of 81 failover cluster cmdlets can be used to manage components from PowerShell. New cluster cmdlets can perform cluster registry checkpoints for resources (Add-ClusterCheckpoint), monitor virtual machines for events or service failure (Add-ClusterVMMonitoredItem) and configure two new roles: Scale-Out File Servers (Add-ClusterScaleOutFileServerRole) and iSCSI Target Server (Add-ClusteriSCSITargetServerRole).
Windows PowerShell ISE

To list all the failover cluster cmdlets, use the PowerShell cmdlet Get-Command -Module FailoverClusters (Figure 1). I am using the built-in Windows PowerShell Integrated Scripting Environment (ISE) editor, which helps admins get familiar with all the failover clustering cmdlets.

In addition to the FailoverClusters cmdlets, Microsoft has added several new modules of PowerShell cmdlets, including ClusterAwareUpdating with 17 new cmdlets, ScheduledTasks with 19 new cmdlets and iSCSITarget with 23 new cmdlets. There are many Cluster-Aware Updating cmdlets, such as adding the CAU role (Add-CauClusterRole), getting an update report (Get-CauReport) or invoking a run to scan for and install any new updates (Invoke-CauRun).
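A minimal Cluster-Aware Updating sequence might look like the following sketch; the cluster name and schedule are examples, not required values:

```powershell
# Add the CAU clustered role so the cluster can update itself on a schedule
Add-CauClusterRole -ClusterName "Cluster1" -DaysOfWeek Sunday -WeeksOfMonth 2 -Force

# Scan for and install applicable updates, draining one node at a time
Invoke-CauRun -ClusterName "Cluster1" -Force

# Review the results of the most recent updating run
Get-CauReport -ClusterName "Cluster1" -Last | Format-List
```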

Cluster-Aware scheduled tasks are new to Windows Server 2012 and the Task Scheduler now integrates with failover clusters. A scheduled task can run in one of three ways:

ClusterWide on all cluster nodes
AnyNode on a random node in the cluster
ResourceSpecific on a node that owns a specific cluster resource

The new clustered scheduled task cmdlets are shown in the table below, which lists the cmdlets that register, unregister, get and set clustered scheduled task properties.

PowerShell Cmdlet                    Description
Register-ClusteredScheduledTask      Creates a new clustered scheduled task
Unregister-ClusteredScheduledTask    Deletes a clustered scheduled task
Set-ClusteredScheduledTask           Updates an existing clustered scheduled task
Get-ClusteredScheduledTask           Enumerates existing clustered scheduled tasks

To get an idea of how to use these PowerShell cmdlets, you first assign an action and a trigger variable. The action variable specifies the program to be executed, such as the Windows calculator in the example below. The trigger variable sets up when the task is to be executed. The resulting cmdlets to schedule the task to run cluster-wide daily at 14:00 look like this:

PS C:\> $action = New-ScheduledTaskAction -Execute C:\Windows\System32\Calc.exe

PS C:\> $trigger = New-ScheduledTaskTrigger -At 14:00 -Daily

PS C:\> Register-ClusteredScheduledTask -Action $action -TaskName ClusterWideCalculator -Description “Runs Calculator cluster wide” -TaskType ClusterWide -Trigger $trigger

TaskName         TaskType
——–         ——–
ClusterWideCa… ClusterWide

PS C:\>
Windows PowerShell Task Scheduler

While only PowerShell can be used to register, get/set and unregister Cluster-Aware scheduled tasks, you can use the Task Scheduler in Computer Management to view the cluster jobs (Figure 2).
iSCSI Target cmdlets

Finally, failover clusters can now be configured with a highly available iSCSI Target Server. This role allows you to create and serve iSCSI LUNs in a highly available fashion to clients across your enterprise. To add this new cluster role, use the cmdlet Install-WindowsFeature -Name FS-iSCSITarget-Server (or use Server Manager) to install the iSCSI Target Server role. Then, use the new cmdlet Add-ClusteriSCSITargetServerRole to create the iSCSI Target resource and associate it with shared storage. You can then leverage the new iSCSI Target cmdlets to configure iSCSI LUNs (Figure 3).
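Putting those steps together, a sketch of the workflow might look like this; the disk, path, target and initiator names are examples:

```powershell
# Install the iSCSI Target Server role service (the PowerShell alternative to Server Manager)
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Create the clustered iSCSI Target Server resource on shared cluster storage
Add-ClusteriSCSITargetServerRole -Name "iSCSITarget1" -Storage "Cluster Disk 1"

# Carve out a LUN and present it to an initiator using the iSCSI Target cmdlets
New-IscsiVirtualDisk -Path "X:\LUNs\LUN1.vhd" -SizeBytes 10GB
New-IscsiServerTarget -TargetName "Target1" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:client1"
```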

There is no shortage of PowerShell cmdlets in Windows Server 2012 to help you manage your failover clusters. In addition to creating, configuring and troubleshooting your cluster, you can use PowerShell cmdlets to add the new Scale-Out File Server and iSCSI Target Server roles, create clustered scheduled tasks and set up Cluster-Aware Updating.

Posted in Windows 2012 | Tagged: , , | Leave a Comment »

High availability made easier with built-in NIC teaming in Windows Server 2012

Posted by Alin D on January 7, 2013

Thinking back a couple of years ago, I remember how painful and expensive the high availability options were for Windows and competing operating systems. Many Windows admins still experience the pain and cost of high availability in their environments, but Microsoft aims to fix this with NIC teaming in Windows Server 2012.

Be it for cloud scenarios or simple in-house setups, Windows Server 2012’s NIC teaming has a lot to offer in such a small package. It’s built right in and extremely simple to configure.

NIC teaming, also known as load balancing and failover (LBFO), allows multiple NICs to be teamed together for bandwidth aggregation and failover in the event of a network hardware failure. Until Windows Server 2012, we were at the mercy of NIC vendors to provide these features: there was no direct OS integration, and Microsoft didn’t officially support NIC teaming. In Windows Server 2012, NIC teaming is front and center, built right into the OS.

Some out-of-the-box NIC teaming features include:

  • Support for virtual NICs inside Hyper-V
  • Switch-dependent and switch-independent modes that do or do not require each NIC to be connected to the same Ethernet switch
  • VLAN traffic division so that applications can communicate with different VLANs at once
  • Support for up to 32 NICs per team

The only technologies that do not work with NIC teaming in Windows Server 2012 are SR-IOV (single-root I/O virtualization), remote direct memory access (RDMA) and TCP Chimney, because these technologies deliver data directly to the network adapter, bypassing the networking stack.

Configuring NIC teaming is a simple process that involves enabling it, adding a team on the server and configuring the specific network cards you wish to use for each team.

You can do this via PowerShell, the Server Manager GUI and via the RSAT tools in Windows 8. For PowerShell, you have a number of NIC teaming-specific commands such as:

  • NetLbfoTeam (Get, New, Remove, Rename, Set)
  • NetLbfoTeamMember (Add, Get, Remove, Set)
  • NetLbfoTeamNic (Get, New, Remove, Set)

Simply entering Get-Command -Module NetLbfo will show you all the commands available.

In the GUI, configuring a NIC team is a matter of:

  1. Selecting Local Server in Server Manager and, under Properties, clicking the NIC Teaming entry to open NIC Teaming Administration.
  2. Selecting the server name and under Teams/Tasks, selecting New Team.
  3. Entering your team name and selecting the specific network adapters you want to use for the team.
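The GUI steps above collapse to a single cmdlet in PowerShell; the team and adapter names here are examples and should match your own NICs:

```powershell
# Create a switch-independent team from two physical adapters
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode SwitchIndependent

# Verify the team and its member NICs
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```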

More details about NIC teaming can be found in Microsoft’s document Windows Server 2012 NIC Teaming (LBFO) Deployment and Management.

One thing I’ve witnessed over the years is network admins not taking advantage of IT management and security controls they already have at their disposal. Having been in a network administrator’s shoes, I think this is due in large part to a general lack of time and focus. NIC teaming is not all that sexy, but it can buy you a ton of server resiliency. Customers, internal auditors and even management will never know you’re using it, but that’s what IT is about anyway: making things look simple by keeping the shop running smoothly. Microsoft is throwing us a bone with NIC teaming, and I can’t think of any reason not to take advantage of it in Windows Server 2012.

Posted in TUTORIALS | Tagged: , , | Leave a Comment »

Configure DirectAccess for secure VPN in Windows Server 2012

Posted by Alin D on December 15, 2012

DirectAccess was one of those features introduced with Windows 7 and Windows Server 2008 R2 and never really caught on. The feature was designed to be a next-generation VPN replacement solution, but it suffered from overwhelming complexity. IT pros practically needed a doctorate in computer science to set it up.

But Microsoft reintroduced the DirectAccess feature with Windows 8 and Windows Server 2012. In doing so, it made setup of DirectAccess much easier, and organizations might do well to take a fresh look at the feature.

For those not familiar with DirectAccess, it is a solution designed to provide mobile users with connectivity to the corporate network. Unlike a VPN, the end user does not have to do anything to initiate the connection: if the user has an Internet connection, they are automatically connected to the corporate network.

DirectAccess offers benefits beyond simplifying the end-user experience. One of the problems that has long plagued IT pros is that of managing remote computers. Laptops need to be updated and backed up just like any other computer. The problem is that mobile users spend a lot of time outside of the office, which makes it difficult for the IT department to perform maintenance on mobile users’ laptops. As a result, many IT pros write elaborate scripts to apply patches or run backups whenever remote users connect to a VPN.

The problem with this approach is that users are often connected to VPNs for relatively short amounts of time. The duration of the user’s session might be inadequate for performing all of the necessary maintenance. DirectAccess can help with this problem because the user is connected to the corporate network any time they have Internet connectivity. Since the amount of time a user’s computer is connected to the Internet is often much longer than the amount of time that the user spends logged into a VPN, automated maintenance tasks are more likely to be completed.

The prerequisites that your DirectAccess server must meet are relatively modest. The server must be joined to a domain, and it must have at least one network adapter that has been configured with a static IP address.

Deploying DirectAccess

Add Remote Access Role

As previously mentioned, DirectAccess is much easier to deploy and configure in Windows Server 2012 than it was in Windows Server 2008 R2. The first step in deploying DirectAccess is to install the Remote Access Role through the Server Manager’s Add Roles and Features Wizard (Figure Above).
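If you prefer the command line, the same role and role service can be installed with PowerShell; this is simply the scripted equivalent of the wizard steps:

```powershell
# Install the Remote Access role with the DirectAccess and VPN (RAS) role service
Install-WindowsFeature -Name DirectAccess-VPN -IncludeManagementTools
```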

Choose DirectAccess and VPN


After selecting the Remote Access role, click Next three times until you see a screen asking which Remote Access role services you want to install. Choose the DirectAccess and VPN (RAS) role services and then complete the Add Roles and Features Wizard by accepting the defaults.

When the Remote Access role and the corresponding role features finish installing, the Server Manager will display a warning icon. Clicking this icon reveals a message indicating that there are post-deployment configuration tasks that must be performed. Click on the Open the Getting Started Wizard link, found in the warning message.

Configure Remote Access

At this point, Windows will launch the Configure Remote Access wizard. Click the Deploy DirectAccess Only link.

Configure Edge Technology


After a quick prerequisite check, the wizard will ask you to specify your network topology. The DirectAccess Server can act as an edge device, or it can reside behind an edge device. If you choose to place the DirectAccess Server behind an edge device (such as a NAT firewall), you will need to specify whether the DirectAccess server uses a single NIC or two separate NICs.

Monitor DirectAccess Server


After specifying your edge topology, you must enter either the IP address or the fully qualified domain name that clients will use when they connect to the DirectAccess server (figure 5).

Click Finish to complete the wizard. Upon doing so, Windows will display the Remote Access Management Console, which you can use to monitor your DirectAccess Server (figure 6).

Client Requirements

Unfortunately, Windows 8 is the only desktop operating system that is natively compatible with Windows Server 2012’s DirectAccess feature.

Even then, there are some additional requirements that must be met: client computers must be equipped with a TPM chip, and users will need either a physical or a virtual smart card.

The good news is that Microsoft now supports accessing Windows Server 2012 servers from Windows 7 clients as well. To do so, Windows 7 clients must have version 2.0 of the Microsoft DirectAccess Connectivity Assistant installed, which is available for download from Microsoft.

The Windows Server 2012 version of DirectAccess is much easier to deploy and configure than the previous version. DirectAccess’ simplicity and its automated connectivity make it plausible as a VPN replacement.

Posted in Windows 2012 | Leave a Comment »

 