Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure


SQL Server 2012 contained database from A to Z

Posted by Alin D on July 23, 2013

Of the many features introduced in SQL Server 2012, the SQL Server 2012 contained database could prove one of the most valuable. Unlike the typical SQL Server database, a SQL Server 2012 contained database is one that’s isolated from the SQL Server instance on which it resides, as well as from other databases on that instance. Such isolation makes managing databases easier, particularly when they’re being moved to a different instance or implemented in a failover cluster.

Prior to SQL Server 2012, all SQL Server databases were considered non-contained. Metadata specific to the database resided outside that database. The server’s default collation could impact queries against the database. And user authentication within the database was tied to the server-level logins defined on the instance.

In SQL Server 2012, all databases are still, by default, non-contained. However, you can now configure any non-system database as contained. That way, metadata will reside within the database it describes. In addition, because the collation model has been greatly simplified, all user data and temporary data will use the default database collation, and all other objects (metadata, temporary metadata, variables, among others) will use the catalog collation, which is Latin1_General_100_CI_AS_WS_KS_SC for all contained databases.

The most significant change that the SQL Server 2012 contained database brings is the “contained user,” a user account created specifically for the contained database. The account is not tied to a server-level login and provides access to the contained database only, without granting permission to other databases or to the instance as a whole.

SQL Server supports two types of contained users:

  • Database user with password: A local database account created with a username and password that are authenticated by the database.
  • Windows principal: A local database account based on a Windows user or group account but authenticated by the database.
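As a rough sketch of the two account types, both hedged examples below assume you are already connected to a contained database (not master); the domain name CONTOSO and the user names are placeholders:

```sql
-- Run inside the contained database, not in master.

-- Database user with password: authenticated by the database itself.
CREATE USER app_user WITH PASSWORD = N'example-Passw0rd!';

-- Windows principal: based on a Windows account (CONTOSO is a
-- placeholder domain), but authenticated by the database rather
-- than by a server-level login.
CREATE USER [CONTOSO\jsmith];
```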

You can add either one or both account types to a contained database. In fact, you can still add login-based accounts as well. That’s because SQL Server 2012 supports what are referred to as “partially contained databases,” rather than fully contained ones.

In a fully SQL Server 2012 contained database, no dependencies, such as a Service Broker route or login-based user account, can exist outside the database. But a partially contained database can support both contained and non-contained elements. That means, for example, that you can provide access to the database either through login-based accounts, contained user accounts or both. However, you can still achieve the equivalent of a fully contained database by eliminating any non-contained elements. (Whether SQL Server will eventually support fully contained databases is yet to be seen.)

The SQL Server 2012 contained database

After you’ve isolated your SQL Server 2012 contained database, you can easily move it from one SQL Server instance to another, without having to move a set of SQL Server logins. The contained database stores all the information it needs within that database. This process also makes it easier to set up your high-availability clusters. Because users connect directly to the database, they can easily connect to a second database if failover occurs.

Even if you’re not moving or clustering your databases, the SQL Server 2012 contained database can make user account management easier because you’re not trying to administer both SQL Server logins and database user accounts. You grant access to specific users to specific databases, without those users being able to access anything outside them.

Yet all this good news doesn’t come without a few downsides. For example, a contained database cannot use replication, change tracking or change data capture. And a contained database can raise security concerns. For instance, users granted the ALTER ANY USER permission can create user accounts and grant access to the database. In addition, the password hashes associated with contained user accounts are stored within the database, making them more susceptible to dictionary attacks. Plus, the contained user accounts cannot use Kerberos authentication, which is available only to the SQL Server login accounts that use Windows Authentication.

Despite their limitations, if contained databases are carefully implemented, you can sidestep some of the security issues and reap the benefits that database isolation provides. The SQL Server contained database offers a level of portability and manageability not seen before in SQL Server. Moving databases is easier. Failover is easier. Managing access is easier. Indeed, the contained database feature in SQL Server 2012 could prove beneficial for any organization looking to streamline its operations.

Configuring and implementing a SQL Server contained database

Before you can configure a SQL Server contained database, you must enable containment on your instance of SQL Server 2012. To do so, run the sp_configure system stored procedure and set the contained database authentication option to 1, as shown in the following T-SQL script:

EXEC sp_configure 'contained database authentication', 1;
GO

RECONFIGURE;
GO
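To confirm the change took effect, you can query sys.configurations (a quick verification step, not part of the original walkthrough):

```sql
-- value_in_use should show 1 once RECONFIGURE has run
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'contained database authentication';
```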

As you can see, you must also run the RECONFIGURE statement for your setting to be implemented. Once you’ve done so, you’re ready to set up a SQL Server 2012 contained database. In your database definition, include the CONTAINMENT clause and set the containment type to PARTIAL, as shown in the following example:

USE master;
GO

CREATE DATABASE ContainedDB
CONTAINMENT = PARTIAL;
GO

You can just as easily include the CONTAINMENT clause in an ALTER DATABASE statement. In either case, once you’ve set up the database to be contained, you’re ready to add a contained user. In the following T-SQL script, the CREATE USER statement defines the user1 account and assigns a password and default schema:

USE ContainedDB;
GO

CREATE USER user1
WITH PASSWORD = N'PaSSw()rd',
DEFAULT_SCHEMA = dbo;
GO

EXEC sp_addrolemember 'db_owner', 'user1';
GO

Notice that the CREATE USER statement is followed by the sp_addrolemember stored procedure, which assigns the user to the db_owner role.
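As a sketch of the ALTER DATABASE route mentioned above, an existing database (here the hypothetical ExistingDB) can be converted like this; note that the database cannot have active sessions while the containment setting is being changed:

```sql
USE master;
GO

ALTER DATABASE ExistingDB
SET CONTAINMENT = PARTIAL;
GO
```

As an aside, on SQL Server 2012 and later, ALTER ROLE db_owner ADD MEMBER user1; is the newer alternative to the sp_addrolemember procedure used above.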

As the examples have demonstrated, you need to take only three steps to set up a SQL Server 2012 contained database: Enable containment, configure the database for containment, and create the contained user. And if you don’t want to script these settings, you can instead use the SQL Server Management Studio (SSMS) interface to configure them.

After you set up your contained environment, you should try to connect to the contained database by using the contained user to test the connection. In Object Explorer in SSMS, click the Connect button and then click Database Engine. When the Connect to Server dialog box appears, enter the instance name and credentials for the contained user.

If you click the Connect button at this point, you’ll receive an error. That’s because you must first specify the target database when connecting with a contained user account. (The database must be part of the connection string.) To add the database, click the Options button and type the name of the target database into the Connect to database text box.

Now when you click Connect, you’ll be connected to the specified instance, with access only to the contained database.  Notice that the user and database names are included with the instance name. Also notice that only the ContainedDB database is included in the list of databases and that user1 is listed as one of the user accounts.

When working with contained databases, you’ll often want to ensure they’re as fully contained as possible. SQL Server 2012 provides two handy tools for working with containment:

  • sys.dm_db_uncontained_entities: A system view that lists any non-contained objects in the database. You can use this view to determine what items to address to ensure your database is as contained as possible.
  • sp_migrate_user_to_contained: A system stored procedure that converts a login-based user to a contained user. The stored procedure removes any dependencies between the database user and the login accounts.
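A minimal sketch of both tools, assuming you are connected to the contained database and that user2 is a hypothetical existing login-based user:

```sql
-- List anything that would prevent full containment
SELECT class_desc, major_id
FROM sys.dm_db_uncontained_entities;

-- Convert the login-based user user2 to a contained user,
-- keeping its name and disabling the old server-level login
EXEC sp_migrate_user_to_contained
    @username = N'user2',
    @rename = N'keep_name',
    @disablelogin = N'disable_login';
```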

By using these tools, you can achieve a status of full containment, making it easier to manage the database going forward.


Posted in SQL, TUTORIALS

Dynamic Witness improves Windows 2012 Cluster High Availability

Posted by Alin D on February 21, 2013

Determined to make Windows Failover Clusters as resilient as possible, Microsoft has once again made significant improvements to its quorum mechanisms in Windows Server 2012. The Dynamic Quorum Management option allows the cluster to dynamically adjust the quorum (or majority) of votes required for the cluster to continue running. This prevents the loss of quorum when nodes fail or shut down sequentially, allowing the cluster to continue running with less than a majority of active nodes.

In addition to dynamic quorum, multisite geoclusters now benefit from the ability to specify which nodes receive votes and which ones don’t. This allows you to bias a particular site (e.g., the primary site) to have the controlling votes, or nodes, to maintain quorum. This also prevents a split-brain scenario from occurring as the secondary site tries to update the cluster database when the primary site is down.

Configuring Dynamic Quorum in Windows Server 2012

The principle behind quorum in a failover cluster environment is to ensure that only a majority of nodes can form and participate in a cluster. This prevents a second subset of nodes from forming a separate cluster that can access the same shared resources in an uncoordinated fashion, which can lead to corruption. When nodes are shut down or fail, there are fewer active nodes remaining to maintain the static quorum value of votes needed for the cluster to function. The new Dynamic Quorum Management dynamically adjusts the votes of remaining active nodes to ensure that quorum can be maintained in the event of yet another node failure or shutdown.

There are a few requirements that must be met before the Dynamic Quorum mechanism kicks in. First, Dynamic Quorum must be enabled, which it is, by default, in Windows Server 2012. The Failover Cluster Manager can be used to view or modify the Dynamic Quorum option by running the Configure Cluster Quorum Wizard. Start the wizard by highlighting the cluster in the left-hand pane, right-clicking on it, selecting More Actions and then choosing Configure Cluster Quorum Settings.

(Screenshot: Configure Cluster Quorum Wizard, witness configuration)

The Quorum Wizard prompts you to select from several different quorum configurations depending on your environment (Typical, Add/Change or Advanced). By default, the cluster will use the typical settings for your configuration to establish the quorum management options. You can also add or change the quorum witness if one was selected during the installation process.

To view or change the Dynamic Quorum Management option, use the Advanced quorum configuration option, as seen above. Stepping through the Quorum Wizard, it will prompt you to Configure Quorum Management. This is where you can view or change the Dynamic Quorum option.

(Screenshot: Allow Dynamic Quorum Management option)

You can also view or modify the cluster’s Dynamic Quorum setting by using PowerShell cmdlets. The first cmdlet, Get-Cluster, as shown below, reveals the current Dynamic Quorum setting (0=disabled, 1=enabled). You can then use PowerShell to enable Dynamic Quorum by establishing the variable $cluster with Get-Cluster and then setting the property DynamicQuorum to a value of 1.
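The commands just described look roughly like this (a sketch; cluster name and output will vary in your environment):

```powershell
# View the current Dynamic Quorum setting (0 = disabled, 1 = enabled)
Get-Cluster | Format-List Name, DynamicQuorum

# Enable Dynamic Quorum via the cluster object's property
$cluster = Get-Cluster
$cluster.DynamicQuorum = 1
```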

With Dynamic Quorum enabled, the next condition that must be met is that the cluster must be up and running and currently sustaining quorum based on the initial cluster configuration. The final condition for Dynamic Quorum to work is that any subsequent node failures or shutdowns must be experienced sequentially — not with multiple nodes going down at the same time. A lengthier cluster regroup operation would occur if multiple nodes exited the cluster simultaneously.

(Screenshot: Dynamic Weight values)

You can use PowerShell to view the number of votes and observe the inner workings of the Dynamic Quorum mechanism. By default, each server in the cluster gets a single vote, or NodeWeight. When Dynamic Quorum is enabled, an additional property called DynamicWeight is used to track a server’s dynamic vote toward quorum. The cluster will adjust a node’s dynamic weight to zero, if necessary, to avoid losing quorum, should another node exit the cluster. The PowerShell cmdlet reveals the NodeWeight and DynamicWeight for a two-node cluster.
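To see those votes, a one-liner such as the following (a hedged sketch) shows both properties for each node:

```powershell
# NodeWeight is the configured vote; DynamicWeight is the vote
# currently counted toward quorum by the Dynamic Quorum mechanism
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight
```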

(Screenshot: Get-ClusterNode cmdlet output)

Dynamic Quorum allows cluster nodes to be individually shut down or fail to the point where just a single node is left functioning (“last man standing”). Just as quorum is dynamically adjusted downward as nodes fail or are shut down in the cluster, quorum is adjusted upward as nodes are rebooted back into the cluster.

Using weighted votes to assign nodes

The other major enhancement to the quorum mechanism in Windows Server 2012 is the ability to specify which nodes in a cluster receive a vote. As mentioned, all nodes receive a vote that contributes toward quorum by default. In multisite geocluster configurations, it may be beneficial to give nodes in the primary site a vote to ensure they keep running in the event of a network failure between sites. Nodes in the secondary site can be configured with zero votes so they cannot form a cluster.

You can use the Quorum Wizard (Advanced Quorum Configurations) to configure whether a node receives a vote. In the wizard’s node selection screen, you can see that Node1 is given a vote and Node2 is not.

(Screenshot: Quorum Wizard, node vote assignment)

 

Alternatively, you can use PowerShell to specify whether a node receives a vote. Use the Get-ClusterNode cmdlet to retrieve Node2 and set its NodeWeight property back to 1 so that it receives a vote.
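A sketch of that property assignment, using the hypothetical node name Node2:

```powershell
# Remove Node2's vote (e.g., for a secondary-site node)
(Get-ClusterNode -Name "Node2").NodeWeight = 0

# Give Node2 its vote back
(Get-ClusterNode -Name "Node2").NodeWeight = 1
```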

Windows Server 2012 has made significant improvements to the quorum mechanism, resulting in more resilient Failover Clusters. Dynamic Quorum Management takes the worry out of whether enough servers are active to achieve or maintain quorum if systems should fail or shut down. Multisite geoclusters also use weighted votes to specify which primary site should continue running in the event of intersite network failures.

Posted in TUTORIALS, Windows 2012

High availability made easier with built-in NIC teaming in Windows Server 2012

Posted by Alin D on January 7, 2013

Thinking back a couple of years ago, I remember how painful and expensive the high availability options were for Windows and competing operating systems. Many Windows admins still experience the pain and cost of high availability in their environments, but Microsoft aims to fix this with NIC teaming in Windows Server 2012.

Be it for cloud scenarios or simple in-house setups, Windows Server 2012’s NIC teaming has a lot to offer in such a small package. It’s built right in and extremely simple to configure.

NIC teaming, or load balancing and failover, allows multiple NICs to be teamed together for bandwidth aggregation and failover in the event of a network hardware failure. Until Windows Server 2012, we were at the mercy of NIC vendors to provide these features. There was no direct OS integration and Microsoft didn’t officially support NIC teaming. In Windows Server 2012, NIC teaming is front and center. It’s built right into the OS.

Some out-of-the-box NIC teaming features include:

  • Support for virtual NICs inside Hyper-V
  • Switch-dependent and switch-independent modes that do or do not require each NIC to be connected to the same Ethernet switch
  • VLAN traffic division so that applications can communicate with different VLANs at once
  • Support for up to 32 NICs per team

The only technologies that are incompatible with NIC teaming in Windows Server 2012 are single-root I/O virtualization (SR-IOV), remote direct memory access (RDMA) and TCP Chimney, because these technologies deliver traffic directly to the network adapter, bypassing the networking stack.

Configuring NIC teaming is a simple process that involves enabling it, adding a team on the server and configuring the specific network cards you wish to use for each team.

You can do this via PowerShell, the Server Manager GUI or the RSAT tools in Windows 8. For PowerShell, you have a number of NIC teaming-specific commands such as:

  • NetLbfoTeam (Get, New, Remove, Rename, Set)
  • NetLbfoTeamMember (Add, Get, Remove, Set)
  • NetLbfoTeamNic (Get, New, Remove, Set)

Simply entering Get-Command -Module NetLbfo will show you all the commands available.
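For example, creating and inspecting a team might look like this (the adapter and team names below are placeholders for your environment):

```powershell
# Create a switch-independent team from two physical adapters
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode SwitchIndependent

# Inspect the team and its member adapters
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```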

In the GUI, configuring a NIC team is a matter of:

  1. Selecting Local Server and, in the Properties pane (next to Remote Desktop), clicking the NIC Teaming status to open NIC Teaming administration.
  2. Selecting the server name and under Teams/Tasks, selecting New Team.
  3. Entering your team name and selecting the specific network adapters you want to use for the team.

More details about NIC teaming can be found in Microsoft’s document Windows Server 2012 NIC Teaming (LBFO) Deployment and Management.

One thing I’ve witnessed over the years is network admins not taking advantage of IT management and security controls they already have at their disposal. Having been in network administrator shoes, I think this is due in large part to a general lack of time and focus.  NIC teaming is not all that sexy, but it can buy you a ton of server resiliency. Customers, internal auditors and even management will never know you’re using it, but that’s what IT is about anyway: making things look simple by keeping the shop running smoothly. Microsoft is throwing us a bone with NIC teaming. I can’t think of any reason to not take advantage of it in Windows Server 2012.

Posted in TUTORIALS

Active Directory Recycle Bin

Posted by Alin D on November 11, 2012

When your Active Directory forest is operating in the Windows Server 2008 R2 or higher mode, you can use the Active Directory Recycle Bin. The Active Directory Recycle Bin adds an easy-to-use recovery feature for Active Directory objects. When you enable this feature, all link-valued and non-link-valued attributes of a deleted object are preserved, allowing you to restore the object to the same state it was in before it was deleted. You can also recover objects from the recycle bin without having to initiate an authoritative restore. This differs substantially from the previously available technique, which used an authoritative restore to recover deleted objects from the Deleted Objects container. Previously, when you deleted an object, most of its non-link-valued attributes were cleared and all of its link-valued attributes were removed, which meant that although you could recover a deleted object, it was not restored to its previous state.

Preparing Schema for the Recycle Bin

Before you can make the recycle bin available, you must update Active Directory schema with the required recycle bin attributes. You do this by preparing the forest and domain for the Windows Server 2008 R2 functional level or higher. When you do this, the schema is updated, and then every object in the forest is updated with the recycle bin attributes as well. This process is irreversible once it is started.

After you prepare Active Directory, you need to upgrade all domain controllers in your Active Directory forest to Windows Server 2008 R2 or higher and then raise the domain and forest functional levels to the Windows Server 2008 R2 level or higher. Optionally, you can update Active Directory schema in your forests and domains for Windows Server 2012 to enable the enhanced recycle bin.

After these operations, you can enable and access the recycle bin. Once Recycle Bin has been enabled, it cannot be disabled. Now when an Active Directory object is deleted, the object is put in a state referred to as logically deleted and moved to the Deleted Objects container. Also, its distinguished name is altered. A deleted object remains in the Deleted Objects container for the period of time set in the deleted object lifetime value, which is 180 days by default.
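With the Active Directory module loaded, enabling the recycle bin is a one-time, irreversible operation along these lines (a sketch; the forest name windows-scripting.org matches the examples later in this post):

```powershell
Enable-ADOptionalFeature -Identity "Recycle Bin Feature" `
    -Scope ForestOrConfigurationSet -Target "windows-scripting.org"
```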

NOTE: The msDS-deletedObjectLifetime attribute replaces the tombstoneLifetime attribute. However, when msDS-deletedObjectLifetime is set to $null, the lifetime value comes from the tombstoneLifetime. If the tombstoneLifetime is also set to $null, the default value is 180 days.
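To check the current lifetime, you can read these attributes off the Directory Service object; a hedged sketch using the forest root DN from this post:

```powershell
$dsPath = "CN=Directory Service,CN=Windows NT,CN=Services," +
          "CN=Configuration,DC=windows-scripting,DC=org"

# A $null value means the tombstoneLifetime (or the 180-day default) applies
Get-ADObject -Identity $dsPath `
    -Properties "msDS-deletedObjectLifetime","tombstoneLifetime"
```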

 

Recovering Deleted Objects

If you elect not to use the recycle bin, you can still recover deleted objects from the Deleted Objects container by using an authoritative restore and other techniques I’ll discuss in this section. The procedure has not changed from previous releases
of Windows Server. What has changed, however, is that the objects are restored to their previous state with all link-valued and non-link-valued attributes preserved. To perform an authoritative restore, the domain controller must be in Directory Services Restore Mode.

Rather than using an authoritative restore and taking a domain controller offline, you can recover deleted objects by using the Ldp.exe administration tool or the Active Directory cmdlets for Windows PowerShell. If you updated the Active Directory schema in your forests and domains for Windows Server 2012, you also can enable the enhanced recycle bin, which allows you to recover deleted objects using Active Directory Administrative Center.

Keep in mind that Active Directory blocks access to an object for a short while after it is deleted. During this time, Active Directory processes the object’s link-value table to maintain referential integrity on the linked attribute’s values. Active Directory then permits access to the deleted object.

Using Ldp.exe for Basic Recovery

You can use Ldp.exe to display the Deleted Objects container and recover a deleted object by following these steps:

1. Type Ldp.exe in the Apps Search box, and then press Enter.

2. On the Options menu, tap or click Controls. In the Controls dialog box, select 
Return Deleted Objects in the Load Predefined list, and then tap or click OK.

3. Bind to the server that hosts the forest root domain by choosing Bind from the Connection menu. Select the Bind type, and then tap or click OK.

4. On the View menu, tap or click Tree. In the Tree View dialog box, use the BaseDN list to select the appropriate forest root domain name, such as DC=windows-scripting,DC=org, and then tap or click OK.

5. In the console tree, double-tap or double-click the root distinguished name and locate the CN=Deleted Objects container.

6. Locate and press and hold or right-click the Active Directory object you want to restore, and then tap or click Modify. This displays the Modify dialog box.

7. In the Edit Entry Attribute text box, type isDeleted. Do not enter anything in the Values text box.

8. Under Operation, tap or click Delete, and then tap or click Enter.

9. In the Edit Entry Attribute text box, type distinguishedName. In Values, 
type the original distinguished name of this Active Directory object.

10. Under Operation, tap or click Replace. Select the Extended check box, tap or click Enter, and then tap or click Run.

Using Windows PowerShell for Basic and Advanced Recovery

The Active Directory cmdlets for Windows PowerShell allow you to recover deleted objects using scripts or by typing commands at a PowerShell prompt. You use Get-ADObject to retrieve the object or objects you want to restore, pass that object or objects to Restore-ADObject, and then Restore-ADObject restores the object or objects to the directory database.

NOTE The Active Directory module is not imported into Windows PowerShell by default. Import the Active Directory module by typing import-module activedirectory at the PowerShell prompt. For more information, see “Active Directory Administrative Center and Windows PowerShell” in Chapter 7.

To use the Active Directory cmdlets for recovery, you need to open an elevated, administrator PowerShell prompt by pressing and holding or right-clicking the Windows PowerShell entry on the menu and tapping or clicking Run As Administrator. The basic syntax for recovering an object is as follows:

Get-ADObject -Filter {ObjectId} -IncludeDeletedObjects | Restore-ADObject

ObjectId is a filter value that identifies the object you want to restore. For example, you could restore a deleted user account by display name or SAM account name as shown in these examples:

Get-ADObject -Filter {DisplayName -eq "Rich Tuppy"} -IncludeDeletedObjects | Restore-ADObject

Get-ADObject -Filter {SamAccountName -eq "richt"} -IncludeDeletedObjects | Restore-ADObject

Note that nested objects must be recovered from the highest level of the deleted hierarchy to a live parent container. For example, if you accidentally deleted an OU and all its related accounts, you need to restore the OU before you can restore the related accounts.

The basic syntax for restoring container objects such as an OU is as follows:

Get-ADObject -LdapFilter:"(msDS-LastKnownRDN=ContainerID)" -IncludeDeletedObjects | Restore-ADObject

ContainerID is a filter value that identifies the container object you want to restore. For example, you could restore the Corporate Services OU as shown in this example:

Get-ADObject -LdapFilter:"(msDS-LastKnownRDN=Corporate_Services)" -IncludeDeletedObjects | Restore-ADObject

If the OU contains accounts you also want to restore, you can now restore the accounts by using the technique discussed previously, or you can restore all accounts at the same time. The basic syntax requires that you establish a search base and associate the accounts with their last known parent, as shown here:

Get-ADObject -SearchBase "CN=Deleted Objects,ForestRootDN" -Filter {lastKnownParent -eq "ContainerCN,ForestRootDN"} -IncludeDeletedObjects | Restore-ADObject

ForestRootDN is the distinguished name of the forest root domain, such as DC=windows-scripting,DC=org, and ContainerCN is the common name of the container, such as OU=Corporate_Services or CN=Users. The following example restores all the accounts that were in the Corporate Services OU when it was deleted:

Get-ADObject -SearchBase "CN=Deleted Objects,DC=windows-scripting,DC=org" -Filter {lastKnownParent -eq "OU=Corporate_Services,DC=windows-scripting,DC=org"} -IncludeDeletedObjects | Restore-ADObject

Using the Enhanced Recycle Bin for Recovery

The enhanced recycle bin makes recovering deleted objects as easy as pointing and clicking or tapping and holding. Once you’ve updated the Active Directory schema in your forests and domains for Windows Server 2012, you enable the enhanced recycle bin for use by following these steps:

1. In Active Directory Administrative Center, the local domain is opened for management by default. If you want to work with a different domain, tap or click Manage and then tap or click Add Navigation Nodes. In the Add Navigation Nodes dialog box, select the domain you want to work with and then tap or click OK.

2. Select the domain you want to work with by tapping or clicking it in the left pane. In the Tasks pane, tap or click Enable Recycle Bin and then tap or click OK in the confirmation dialog box.

3. Active Directory will begin replicating the change to all domain controllers in the forest. Once the change is replicated, the enhanced recycle bin will be available for use. If you then tap or click Refresh in Active Directory Administrative Center, you’ll see that a Deleted Objects container is now available for domains using the enhanced recycle bin.

Keep in mind that the enhanced recycle bin is a forestwide option. When you enable this option in one domain of a forest, Active Directory replicates the change to all domain controllers in all domains of the forest.

With the enhanced recycle bin enabled, you can recover deleted objects with ease. In Active Directory Administrative Center, domains using the enhanced recycle bin will have a Deleted Objects container. In this container, you’ll see a list of deleted objects. As discussed previously, deleted objects remain in this container for the deleted object lifetime value, which is 180 days by default.

Each deleted object is listed by name, when it was deleted, the last known parent, and the type. When you select a deleted object by tapping or clicking it, you can use the options in the Tasks pane to work with it. The Restore option restores the object to its original container. For example, if the object was deleted from the Users container, it is restored to this container.

The Restore To option restores the object to an alternate container within its original domain or a different domain within the current forest. Specify the alternate container in the Restore To dialog box. For example, if the object was deleted from the Users container in the tech.windows-scripting.org domain, you could restore it to the Devs OU in the eng.windows-scripting.org domain.

Posted in TUTORIALS

New Group Policy features in Windows Server 2012

Posted by Alin D on October 14, 2012

As you know, Group Policy (GPO) is an important part of the Windows infrastructure that allows system admins to manage multiple systems with a centralized policy. System admins can use the Group Policy Management Console, which is installed along with the AD DS role, to manage Group Policy for the whole domain. Here are the new and improved Group Policy features included in Windows Server 2012.

Enhanced Group Policy Results reports
Reports for Group Policy Results are enhanced and include information to help you investigate whether a GPO setting was applied to a user or computer and why the results don’t match expectations. A few new attributes included are:

  • Connection detection – slow or fast
  • Block inheritance detection
  • Group Policy processing time detection
  • Group Policy loopback detection

Infrastructure Status Information Detection
This allows IT administrators to troubleshoot SYSVOL replication issues with Group Policy. To use this feature, open Group Policy Management Console, click on the domain, select the Status tab and click the Detect Now button.

Remote force Group Policy Update
This feature allows IT admins to remotely refresh Group Policy settings from the Group Policy Management Console. Before this feature was introduced, administrators usually had to visit every computer and run the gpupdate.exe /force command to refresh the GPO. Another option was to wait until the GPO refresh interval had passed, but this can take up to an hour depending on current settings. This new functionality simply runs gpupdate.exe /force on all the computers in the OU. To use the remote force Group Policy update feature, open Group Policy Management Console, right-click on the OU you wish to update and select Group Policy Update.
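From PowerShell, the same remote refresh is available via the GroupPolicy module's Invoke-GPUpdate cmdlet; the computer name and OU below are placeholders:

```powershell
# Refresh policy on a single remote computer
Invoke-GPUpdate -Computer "PC01" -Force

# Refresh every computer in an OU (requires the ActiveDirectory module)
Get-ADComputer -Filter * -SearchBase "OU=Workstations,DC=windows-scripting,DC=org" |
    ForEach-Object { Invoke-GPUpdate -Computer $_.Name -Force }
```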

Posted in TUTORIALS

Cloud Computing with Hyper-V 3.0

Posted by Alin D on October 6, 2012

Organizations anxious to reap the benefits of cloud computing but reluctant to give up control of critical resources are building private clouds. Deploying a private cloud provides cloud-like functionality in a secure on-premises environment. But when it comes to actually building that private cloud, many administrators are left scratching their heads.

One problem is a private cloud seems to mean something different to almost everyone. So, the first step in building a private cloud is to define some goals and expected functionality for your environment. After doing so, you can determine how to use Microsoft Hyper-V to reach those goals.

A private cloud must possess the following three characteristics or capabilities:

  1. A private cloud should treat server hardware as a pool of shared resources.
  2. A good private cloud should include a self-service function, meaning an authorized end user can request resources and deploy preconfigured virtual machines (VMs) with minimal IT involvement.
  3. A private cloud should give administrators a way to track which resources users are consuming. Chargeback or showback is useful for capacity planning and for tracking costs.

If you consider these three primary characteristics of a private cloud, then Hyper-V 3.0 alone does not build a private cloud; the software does not include an automated process. Still, it’s possible to build a fully functional private cloud based on Hyper-V.

Resource pooling in a private cloud

The first requirement of building a private cloud involves treating physical server hardware as a pool of resources an admin can dynamically provision. Hyper-V 3.0 actually makes it relatively easy to meet this requirement. Here are some examples of how Hyper-V can pool these resources:

  • Network: VMs connect to a virtual network rather than attaching directly to a physical network. This virtual network is based on the use of a virtual switch that usually connects to a physical network interface card (NIC).
    In Hyper-V 3.0, the virtual switch is extensible, which is useful for network management and monitoring. Hyper-V can make use of virtual LANs to isolate certain types of network traffic to a dedicated virtual network. At the physical level, multiple NICs can be teamed together to form a single logical NIC. This logical NIC is fault tolerant and provides higher network bandwidth than would be possible using a single physical NIC.
  • Storage: Hyper-V has always supported the use of thinly provisioned virtual hard drives, but combining Hyper-V 3.0 with Windows Server 2012 makes it possible to virtualize physical storage. Windows Server 2012 offers a new feature called Storage Spaces that allows you to add multiple physical hard disks to a storage pool. This storage pool can provide the required fault tolerance and capacity to the entire virtualization infrastructure.
  • Memory: The concept of dynamic memory was first introduced in Hyper-V 2.0, but has been enhanced in Hyper-V 3.0. In Hyper-V 2.0 the Minimum Memory setting had to meet the amount of memory a VM required at startup. However, VMs often consume more memory at startup than they do when in an idle state.
    Hyper-V 3.0 separates the Startup Memory setting from the Minimum Memory setting, allowing some of the startup memory to be reclaimed once the VM becomes idle. And this enables far greater VM density. On the flip side, Hyper-V 3.0 allows for NUMA spanning — a single VM can access memory from multiple NUMA nodes. In this instance, the VM can access more memory than would be otherwise possible.
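
As a rough sketch of how the network and storage pools above are carved out with the Windows Server 2012 cmdlets (the team, pool and disk names are hypothetical):

```powershell
# Team two physical NICs into one fault-tolerant logical NIC.
New-NetLbfoTeam -Name "HVTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Pool all poolable physical disks with Storage Spaces and carve out
# a mirrored virtual disk for the virtualization infrastructure.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "HVPool" -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "HVPool" -FriendlyName "VMStore" `
    -ResiliencySettingName Mirror -UseMaximumSize
```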

Microsoft Hyper-V 3.0 and self-service provisioning

The second requirement to building a private cloud is that authorized users should be able to request and provision resources with minimal IT involvement. This functionality is not built into Hyper-V, however; Microsoft offers an add-on for System Center Virtual Machine Manager called the Self Service Portal.

The Self Service Portal acts as a Web interface that automatically deploys preconfigured VMs in response to user requests. You will also need Microsoft’s Deployment Toolkit, which helps you create VM images users can automatically deploy using the Self Service Portal.

Tracking resource use with the Self Service Portal

The final requirement of a private cloud is the ability to track resource consumption. The Self Service Portal includes a chargeback mechanism that lets you specify a price for various resources and allocate a cost per user or per department, based on the resources consumed when a user requests a VM.

Posted in TUTORIALS | Tagged: , , , , , , | Leave a Comment »

Hyper-V 3.0 high availability and redundancy

Posted by Alin D on August 31, 2012

Microsoft has taken major steps toward improving Hyper-V 3.0’s high availability capabilities with the addition of predictive failure analysis and increased redundancy.

IT administrators face the critical task of ensuring the integrity and availability of network servers, the importance of which only increases with virtualization. Prior to the advent of server virtualization, a server failure typically affected only a single workload. A failed virtual host, however, may affect dozens of workloads — resulting in a major outage.

Given the importance of high availability in virtual data centers, Microsoft Hyper-V 3.0 includes new features to spot potential errors and add redundancy.

New: Hyper-V 3.0 predictive failure analysis
Predictive failure analysis is a major improvement to Hyper-V 3.0. It allows the Windows Server 2012 operating system to support native handling of error correction codes (ECC), which should reduce application downtime.

With ECC support, the OS memory manager monitors memory pages and takes a page offline if its error count exceeds a predefined threshold. It also adds the page to the persistent bad-page list, so it won’t be used again.

With Hyper-V 3.0, when Windows identifies a bad memory page, Hyper-V momentarily suspends all virtual machines. If the operating system can isolate the error to a single virtual machine, it shuts down the VM, marks the memory page as bad and restarts the VM. If the OS cannot trace the bad memory page to a single VM, it resumes all virtual machines. In this case, the potential remains for a fatal error to occur if the page is accessed later.

Improved: Hyper-V 3.0 redundancy technologies
Microsoft also added redundancy to almost every level of the Hyper-V 3.0 architecture. The previous version of Hyper-V offered two forms of node redundancy: live migration for planned downtime and failover clustering for unplanned downtime. This functionality now supports Hyper-V 3.0’s larger clusters.

To ensure failures do not occur as a result of storage I/O problems, Hyper-V 3.0 includes an I/O redundancy feature through network interface card (NIC) teaming. With this OS feature, administrators can combine multiple network adapters, providing additional bandwidth, load-balancing and failover capabilities.

Previously, Hyper-V NIC teaming was only possible with proprietary hardware. With native, OS-level NIC teaming, it’s possible to mix and match NICs from different vendors and still ensure that if a single NIC fails, communication can continue through the remaining NICs. Additionally, Hyper-V 3.0 offers multichannel Server Message Block (SMB) and multipath I/O, which give the server more than one channel through which to communicate with the storage.

More: Hyper-V 3.0 replication capabilities
Hyper-V 3.0 also enables virtual machine (VM) replication, synchronously through integration with storage arrays, or asynchronously through the hypervisor.

In terms of high availability, both replication capabilities create working copies of virtual machines, which you can use if there’s an outage. Though reliable, synchronous replication is susceptible to network latency and is generally considered suitable only when a high-bandwidth connection is available between two data centers located in close proximity. Conversely, asynchronous replication isn’t as sensitive to network latency and offers much better performance, but it does have the potential for a small amount of data loss.

Hyper-V 3.0’s asynchronous replication capabilities promise to be a boon to cost-sensitive IT shops. Today, it is possible to create host server and virtual machine replicas at the storage level, but this hardware-based approach tends to be expensive and is not application aware. On the other hand, Hyper-V 3.0 replication will create application-consistent virtual machine replicas without the need for additional expensive hardware. This capability assists VMs running applications like Exchange Server because it allows the underlying databases to remain in a consistent state.

The Hyper-V 3.0 replication process also allows you to perform the initial replication either online, over the network, or offline, if you have a slow wide area network. The offline replication process entails the copying the VMs, shipping them to the remote site, loading them on your server and then replicating any changes that have occurred since the copies were originally made. This option reduces replication time for large VMs.
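
For the hypervisor-based (asynchronous) option, enabling replication for a VM can be sketched with the Hyper-V PowerShell cmdlets; the host and VM names below are hypothetical:

```powershell
# On the primary host: replicate "VM1" to a replica server using Kerberos over HTTP.
Enable-VMReplication -VMName "VM1" -ReplicaServerName "hv-replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Seed the replica over the network now; specifying -DestinationPath instead
# exports the initial copy for offline shipment to the remote site.
Start-VMInitialReplication -VMName "VM1"
```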

In addition, the replication process supports Windows-integrated and certificate-based authentication, which allows two hosts to mutually verify each other’s identity. The replication data can also be compressed and encrypted, which is essential for performance and security.

Overall, Hyper-V 3.0’s predictive failure analysis and redundancy will help ensure high availability, which, in turn, should reduce virtual machine and application downtime.

Posted in TUTORIALS | Tagged: , , , , , , | Leave a Comment »

Use Event Tracing for Windows to extract data from a Windows crash dump

Posted by Alin D on August 30, 2012

Troubleshooting Windows Server hangs might be one of the toughest challenges a system administrator faces. When a server starts to hang, things can quickly go from bad to worse. Often, it is too late to set up counter logs to diagnose the problem in Microsoft’s Performance Monitor, more commonly referred to as Perfmon, or to use Task Manager to catch the culprit in the act. The server seems to freeze without any sign of what caused the problem, and you hit the reset button praying it will reboot.

Sound familiar?

What if, just like an airplane’s flight recorder, also known as the black box, you could replay the last few seconds of the server’s performance just prior to the lock-up?

This article describes how to use two of my favorite troubleshooting techniques, namely crash dump analysis and Event Tracing for Windows (ETW), to determine what caused your server to hang.

Event Trace Sessions

The secret is the built-in Event Trace Sessions that Windows has provided since Vista and Windows Server 2008. One of these trace sessions is known as the Circular Kernel Context Logger, or CKCL for short. It provides a 2 MB circular buffer that continually tracks kernel performance statistics in memory.

It is possible to extract this buffer from a forced memory dump and reveal the last few seconds of kernel activity. Extracting the buffer extends the usefulness of a crash dump and provides a snapshot of the server at the time of the hang that includes a history of the last few seconds.

To enable the CKCL, you must select the kernel providers you want included in your trace. This can be accomplished by starting Computer Management or Perfmon to display Data Collector Sets, as seen below in Figure 1. You will then find Startup Event Trace Sessions, which lists the built-in event trace sessions, including the CKCL.

Next, you need to display the properties for the CKCL trace session by double-clicking it or right-clicking to select properties. On the Trace Providers tab, highlight the property called Keywords(Any) and click Edit… to select the providers you want to trace (e.g., process, thread, file).

Figure 1: The Circular Kernel Context Logger listed under Startup Event Trace Sessions in Perfmon.

Finally, on the Trace Session tab, select the Enabled checkbox.

Once you acknowledge the changes, you can right-click the CKCL trace session to select Start As Event Trace Session. This will start the CKCL trace session and list it under Event Trace Sessions, along with the other built-in sessions, all of which show a status of Running.

To automate the process of enabling and starting the CKCL after a reboot, you can use the following example Logman command in a script with the Task Scheduler. Use the Task Scheduler’s Actions tab to specify the script and the Triggers tab to run it at startup:

Logman start “Circular Kernel Context Logger” -p “Circular Kernel Session Provider” (process,thread,img,file,driver) -ets
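
One hedged way to register that command against a startup trigger from an elevated PowerShell prompt (the task name "StartCKCL" is a placeholder) is:

```powershell
# Register a boot-time task, running as SYSTEM, that re-enables the CKCL.
schtasks /create /tn "StartCKCL" /ru SYSTEM /sc onstart /tr `
  'logman start "Circular Kernel Context Logger" -p "Circular Kernel Session Provider" (process,thread,img,file,driver) -ets'
```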

That’s it. All you need to do now is sit back and wait for the next hang to occur. When it does, use the appropriate keystroke combinations (right Ctrl+ScrollLock twice) or NMI mechanism to manually force a system memory dump. Once the system reboots, you will be able to use the Windows debugger to analyze the memory dump.

Extracting performance data from memory dumps

The magical debugger extension that allows you to extract the Event Tracing for Windows performance data from the dump is called !wmitrace. There are two commands you’ll need to know:

Figure 2: Output of !wmitrace.strdump, listing the running trace sessions and their logger IDs.

!wmitrace.strdump

!wmitrace.logsave [logger ID] [save location].etl

The first command, !wmitrace.strdump, is used to display all of the Event Trace Sessions running at the time of the forced memory dump. You will see the Circular Kernel Context Logger in addition to several others, each containing a “logger ID” to distinguish it from the rest. As you can see in Figure 2, the !wmitrace.strdump command reveals the CKCL has a logger ID of 0x02.

Figure 3: Extracting the CKCL buffers to an ETL file with !wmitrace.logsave.

The command !wmitrace.logsave is then used to extract the ETW performance data from the specified session. In our example, the appropriate command to extract the CKCL buffers into an event trace log (ETL) file would be, as seen in Figure 3:

!wmitrace.logsave 2 c:\ckcl.etl

Once the performance data has been extracted, you can immediately leverage the Windows Performance Analyzer (WPA) or XPerf to study the data. As you can see below in Figure 4, WPA reveals potential disk and file utilization issues right before the hang:

Figure 4: WPA revealing disk and file utilization issues right before the hang.

Summary

Figuring out what caused a Windows server to hang can be a daunting task. But with the right tools and techniques, you can leverage ETW and the Windows Debugger to extract kernel performance data from system memory dumps. You can then use WPA or XPerf to analyze the performance data to determine what led up to the server hang. Keep in mind that while this article uses the CKCL trace session in the examples, you can create your own ETW trace session with WPR or XPerf specifying additional providers and logging options.

Posted in TUTORIALS | Tagged: , , , , , , | Leave a Comment »

How to choose deployment of Hyper-V 3.0

Posted by Alin D on August 30, 2012

With new Windows Server 2012 and Hyper-V 3.0 features, such as the SMB 3.0 protocol, shared-nothing live migration and Scale-Out File Server (SOFS) clusters, organizations will have more virtual data center design options.
Choosing the best design will largely depend on the size of the company as well as its IT maturity. With that, here’s a look at four ways to deploy Hyper-V in a virtual data center.

1. Single-server Hyper-V 3.0 deployment

A single-server Hyper-V configuration has always been easy to install and maintain. You can install Hyper-V in a few mouse clicks onto any hardware supported by Microsoft Windows. Storage, networking and other configurations require minimal upfront thinking and few design decisions.

The changes to the Server Message Block (SMB) 3.0 protocol in Windows Server 2012 add a new twist to this type of virtual data center design. With SMB 3.0, Hyper-V hosts can handle the virtual machine (VM) processing requirements, while offloading the disk activities to remote Windows file servers.

While this option was technically feasible in previous Hyper-V versions, it usually resulted in poor VM performance. Compared to block-level storage technologies, such as iSCSI and Fibre Channel, the SMB protocol was a poor candidate for the remote storage of virtual disks. After extensive changes to the SMB protocol in Windows Server 2012, Microsoft now asserts that, once properly configured, Hyper-V 3.0 VMs hosted atop SMB shares will attain performance levels that equal iSCSI or Fibre Channel.

Assuming this assertion is borne out, you can now host VHDX files locally or atop a remote Windows Server 2012 file server in single-server Hyper-V environments.

2. A non-clustered, multi-server Hyper-V deployment

Another simple Hyper-V 3.0 design provides increased resources over a single-server infrastructure, and does not require the use of clusters  or shared storage to live migrate VMs or virtual disk files between Hyper-V hosts.

This is made possible by Hyper-V 3.0’s new support for cluster-less and SAN-less live migration. In Windows Server 2012, Hyper-V virtual machines and/or their virtual disk files can be live migrated between any configured Hyper-V hosts. That means no downtime for moving VMs or their storage between individual Hyper-V hosts.
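
A shared-nothing move can be sketched with the Hyper-V cmdlets; the host name and storage path below are hypothetical:

```powershell
# Both standalone hosts must first be enabled for live migration.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Move VM1, including its virtual disks, to another host with no shared storage.
Move-VM -Name "VM1" -DestinationHost "HV02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\VM1"
```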

There is an important caveat to this new live migration functionality: Live migrations outside a Windows Failover Cluster must be triggered proactively, which means that an administrator must initiate the live migration from a console. Reactive migrations are not natively supported, such as those used to ensure VM high availability to protect against host failures.

That said, PowerShell is deeply integrated into Hyper-V 3.0 and Windows Server 2012, so it stands to reason that homegrown or aftermarket solutions that add HA-like support to this configuration may quickly become available.

3. A non-clustered, multi-server Hyper-V deployment with highly available storage

As you can imagine, one or more non-clustered Hyper-V hosts pointing to a single SMB-based file server is a study in single points of failure. Losing either the Hyper-V host or the file server will cause every running VM to fail.

At the same time, traditional file server clustering atop earlier versions of Windows Failover Clustering will also cause problems. Traditional file-server clustering follows an active-passive model, where the file server is, in fact, a file-services instance that exists on only one cluster node at a time. If the active cluster node fails, all file server and VM processing will stop until the instance is restarted on a surviving cluster node.

This limitation likely drove Microsoft to introduce a new type of active-active file server cluster in Windows Server 2012 called a Scale-Out File Server (SOFS). Combined with the SMB performance improvements, this fault-tolerant file server intends to remove some of the complexity in managing Hyper-V storage. Administrators must still make iSCSI or Fibre Channel connections to remote block-level storage; however, those connections are exposed as common Windows file shares to Hyper-V hosts.
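
On a Windows Server 2012 failover cluster, standing up an SOFS and publishing a continuously available share for Hyper-V can be sketched as follows (the role, share, path and group names are hypothetical):

```powershell
# Add the Scale-Out File Server role to an existing failover cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS1"

# Publish a continuously available SMB share on a Cluster Shared Volume
# where Hyper-V hosts can store their virtual disk files.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
    -FullAccess "CONTOSO\HyperVHosts" -ContinuouslyAvailable $true
```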

Microsoft designed this file server to exclusively host large files with limited metadata access. This makes the Scale-Out File Server a good fit for use in database applications or for hosting VM disk files. As of now, Microsoft supports SOFS clusters only for Hyper-V and SQL Server.

Microsoft also hopes to offer the SOFS as an alternative to standard SAN solutions. Combined with another new technology, Storage Spaces, which pools storage from various sources, a SOFS cluster supports connections to locally-attached, shared spanning disks via PCI-RAID. For Microsoft’s intended implementation, PCI-RAID is a fairly leading-edge technology, and so it remains an option to keep an eye on.

Ultimately, Microsoft hopes SOFS will reduce the complex configurations associated with storage connections, and time will tell if the new file server technology achieves this goal.

4. Clustered Hyper-V with highly available storage
Hyper-V 3.0 will likely see the greatest adoption in production scenarios outside of small and low-complexity virtual data centers.

A Hyper-V 3.0 cluster of two or more hosts provides high availability for production workloads and, when coupled with System Center Virtual Machine Manager, load balancing. In Windows Server 2012, you have the option to cluster hosts via traditional storage connections such as iSCSI or Fibre Channel, or through the arguably simpler Windows file share approach facilitated by SOFS.

Microsoft asserts that using SOFS as network-attached storage (NAS) atop a SAN delivers an easier management experience as well as better backup capabilities from the integration between SMB and Microsoft Volume Shadow Copy Service. Like with all these new capabilities, testing the final release of Hyper-V 3.0 will be necessary to validate the added value of SOFS.

Even so, the expanded Hyper-V 3.0 cluster options and live migration capabilities give IT pros more flexibility and virtual data center design options. While most mission-critical and production environments will still require advanced features for maintaining VM availability, small and low-complexity environments now have more architectural decisions from which to choose.

Posted in TUTORIALS | Tagged: , , , , , , | Leave a Comment »

Install WSUS server on Hyper-V virtual machine

Posted by Alin D on June 27, 2012

As organizations continue to move away from the use of physical servers, a frequent question arises: Is it a good idea to virtualize WSUS servers? Short answer: yes. Read on to find out how to run WSUS in a Hyper-V virtual machine.

Will WSUS run in a virtual machine?

In a word, yes. If you plan on hosting a WSUS virtual machine on Hyper-V, it is generally recommended that you run WSUS on top of the Windows Server 2008 R2 operating system. In order to do that, you will have to deploy WSUS 3 SP2. Until SP2, WSUS did not work properly with Windows Server 2008 R2, and it did not support the management of Windows 7 clients.

What is the easiest way to virtualize a WSUS server?

If you are currently running WSUS 3 on a physical server, then I would recommend doing a migration upgrade. To do so, set up a virtualized WSUS server, configure it to be a replica of your physical WSUS server and then perform a synchronization. Once the sync process completes, reconfigure the virtual WSUS server to be autonomous. Then you can decommission your physical WSUS server.

This technique offers two main advantages. First, it makes it easy to upgrade the WSUS server’s operating system if necessary. Second, this method incurs far less downtime than a standard P2V conversion, because your physical WSUS server continues to service users while your virtual WSUS server is being put into place.

What kind of capacity can I get from a virtualized WSUS server?

A single WSUS server should be able to handle up to 25,000 clients. However, this assumes that sufficient resources have been provisioned and that SQL Server is running on a separate server (physical or virtual). Some organizations have been able to achieve higher capacities by using multiple front-end servers.

What are the options for making WSUS fault-tolerant?

In a physical server environment, WSUS is made fault-tolerant by eliminating any single points of failure. Normally you would create a Network Load Balancing (NLB) cluster to provide high availability for your WSUS servers. Of course WSUS is dependent on SQL Server and the preferred method for making SQL Server fault-tolerant is to build a failover SQL Server cluster.

While it is possible to recreate this high-availability architecture in a Hyper-V infrastructure, it is usually considered to be a better practice to build a Hyper-V cluster instead.  If your host servers are clustered then clustering your WSUS servers and your SQL servers becomes unnecessary (at least from a fault tolerance standpoint).

If Hyper-V hosts are not clustered (and building a Hyper-V cluster is not an option for whatever reason) then I would recommend going ahead and creating a clustered architecture for the virtualized WSUS and SQL servers. However, you should make sure not to place multiple WSUS or SQL servers onto a common Hyper-V server because doing so will undermine the benefits of clustering WSUS and SQL Server.

What do I need in terms of network bandwidth?

There are no predetermined rules for providing network bandwidth to a virtualized WSUS server. Keep in mind, however, that there are a number of different issues that can occur as a result of insufficient bandwidth. If at all possible, I would recommend dedicating a physical network adapter to your virtual WSUS server. If you are forced to share a network adapter across multiple virtual servers then use network monitoring tools to verify that the physical network connection isn’t saturated.
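
A quick, hedged way to watch for saturation from PowerShell is to sample the built-in network performance counters and compare the averages against the adapter’s rated bandwidth:

```powershell
# Sample total bytes/sec on every network interface, every 5 seconds for one minute.
Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' `
    -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object InstanceName, CookedValue }
```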

If saturation becomes an issue, remember that WSUS can be throttled either at the server itself or at the client level through the use of group policy settings. You can find client throttling policies in the Group Policy Object Editor at Computer Configuration> Administrative Templates > Network > Background Intelligent Transfer Service.

Are there any special considerations for the SQL database?

It is generally recommended to run SQL Server on a separate machine (physical or virtual) so that you can allocate resources directly to the database server. I also recommend running the Cleanup Wizard and defragmenting the database every couple of months. Doing so will help the database to run optimally, which is important in a virtualized environment.

Another thing to keep in mind is that SQL Servers tend to be I/O intensive. Therefore, if you are planning to virtualize your SQL server then you might consider using dedicated physical storage so that the I/O load generated by SQL does not impact other virtual machines.

Posted in TUTORIALS | Tagged: , , , , , , | Leave a Comment »
