Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Archive for the ‘Windows 2008’ Category

How to use PowerShell to prepare Active Directory for an Exchange 2010 migration

Posted by Alin D on October 17, 2012

Before performing an Exchange Server 2010 migration, you have to make sure Active Directory meets certain prerequisites. Thankfully, there are a number of PowerShell cmdlets to help you prepare your Active Directory forest for the move.

Validating your Active Directory forest
Before deploying Exchange Server 2010, your Active Directory forest must meet several different conditions:

  • There must be at least one global catalog server running Windows Server 2003 SP1 or higher, and it must reside in the same Active Directory site where the Exchange server will be installed.
  • The Active Directory forest must be at the Windows Server 2003 forest functional level or higher.
  • The domain you plan to install Exchange into must be at the Windows Server 2003 domain functional level or higher.
  • The server holding the schema master role must run Windows Server 2003 SP1 or higher.

The easiest way to check if Active Directory meets these prerequisites is to open a PowerShell 2.0 session — not an Exchange Management Shell (EMS) session — and enter the following command:

Get-ADForest | Format-List Name, GlobalCatalogs, ForestMode, Domains, SchemaMaster, Sites

After executing this command, you will receive the forest name, the names of the global catalog servers within the forest, the forest-functional level, the names of the domains within the forest, the schema master name and the names of the Active Directory sites. All of this information is important when preparing your Active Directory forest for an Exchange 2010 migration.
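If you prefer a scripted check, here is a minimal sketch, assuming the ActiveDirectory module (part of RSAT) is available, that flags the forest functional level prerequisite automatically:

Import-Module ActiveDirectory
$forest = Get-ADForest
# ADForestMode is an enumeration, so the comparison follows version order
if ($forest.ForestMode -ge 'Windows2003Forest') {
    Write-Host "Forest functional level OK: $($forest.ForestMode)"
} else {
    Write-Warning "Forest functional level too low: $($forest.ForestMode)"
}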

Checking the forest-functional level

The Get-ADForest output for my environment showed the Active Directory forest already running at the Windows 2003 forest functional level. If necessary, you can raise the forest functional level using the following command:

Get-ADForest | Set-ADForestMode -ForestMode Windows2003Forest -Confirm:$True

Checking server versions
Both the Active Directory schema master and at least one global catalog server in each site must run Windows Server 2003 SP1 or higher. The Get-ADForest command revealed the identities of your global catalog servers and the schema master. However, you still need to determine which version of Windows those servers are running.

Enter the command below, but make sure to substitute your server's NetBIOS name for <server name> and add a dollar sign ($) to the end of the server's name. Otherwise, the command won't work.

Get-ADComputer -Filter {SamAccountName -eq "<server name>$"} -Properties OperatingSystem, OperatingSystemServicePack | Format-List Name, OperatingSystem, OperatingSystemServicePack
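
To check several servers in one pass (every global catalog plus the schema master, for example), a short loop saves retyping. This is just a sketch; the server names are placeholders for your own:

$servers = 'DC1', 'DC2', 'SCHEMA-DC'    # placeholder server names
foreach ($s in $servers) {
    $sam = "$s`$"                       # computer account names end with $
    Get-ADComputer -Filter { SamAccountName -eq $sam } -Properties OperatingSystem, OperatingSystemServicePack |
        Format-List Name, OperatingSystem, OperatingSystemServicePack
}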


Use Xperf to granularly determine what function calls are eating up precious resources

Posted by Alin D on January 16, 2012

Monitoring the kernel of the Windows operating system to diagnose performance issues can be a challenging endeavor. Sure, Perfmon, PAL and Xperf can show that the OS is spending a given amount of time executing in kernel mode, but how can one determine which portions of the kernel (function calls) are consuming significant amounts of time?
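
Before reaching for heavier tooling, it is worth confirming that kernel-mode time really is elevated. A quick sketch using PowerShell's Get-Counter cmdlet:

# Sample % Privileged Time (kernel-mode CPU) five times, two seconds apart
Get-Counter -Counter '\Processor(_Total)\% Privileged Time' -SampleInterval 2 -MaxSamples 5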

In the past, it was necessary to force multiple crash dumps in an effort to profile where the kernel was spending its time. By forcing a memory dump, the debugger is able to display the stack traces of the current threads to determine where the operating system is executing. This can help identify what kernel functions are being called by which threads in an effort to troubleshoot excessive kernel mode times. There has to be an easier way!

Stack walking

Fortunately, there is an easier way: stack walking. This feature is part of Event Tracing for Windows (ETW). The Xperf tool enables the stack walking functionality that is built into Windows and collects stack traces for threads that encounter various kernel functions or events. This lets you configure which portions of the kernel collect stack traces for threads. Stack walking is available for primitive events such as process and thread creation, file and registry manipulation, and memory allocation. Issue the command xperf -help stackwalk to see a list of kernel events (flags) that support stack walking.

For example, imagine the server is experiencing high kernel mode times that seem to be caused by excessive registry updates. Xperf can reveal which threads are calling which registry functions. Xperf will also summarize the data to show the percentage of time a particular function was executed, giving it a weight. The higher the weight, the more frequently the function is executed. This enables an admin to profile the kernel functions according to which threads are executing them and how often.

For example, the following Xperf flags are available for stack tracing when registry events occur:

RegQueryKey             RegEnumerateKey       RegEnumerateValueKey

RegDeleteKey            RegCreateKey          RegOpenKey

RegSetValue             RegDeleteValue        RegQueryValue

RegQueryMultipleValue   RegSetInformation     RegFlush

RegKcbCreate            RegKcbDelete          RegVirtualize

RegCloseKey

A typical series of Xperf commands for tracing registry events might look like:

  1. Start the event collection:

     xperf -on SysProf+REGISTRY -stackwalk RegQueryKey+RegEnumerateKey+RegDeleteKey+RegCreateKey+RegOpenKey+RegSetValue+RegDeleteValue+RegQueryValue+RegQueryMultipleValue+RegSetInformation+RegFlush+RegKcbCreate+RegVirtualize+RegCloseKey

  2. Reproduce the problem.
  3. Stop the collection and merge the trace:

     xperf -d stacks.etl

The result will be a file called stacks.etl that is viewable with the Xperf command:

xperf stacks.etl

Configuring and loading symbols
Before call stack information is viewable, it is necessary to establish the symbol path. The symbol path tells Xperf to reference Microsoft's symbol server on the Internet so the tool can look up module and function names. This allows Xperf to summarize all the call stack information to show which functions are being executed by which threads.

There are several ways to accomplish this. The SET command or the System applet in the Control Panel can be used to establish the system environment variable _NT_SYMBOL_PATH. Or, establish the symbol path from within Xperf by using the Trace pull-down menu and selecting "Configure Symbol Paths". Regardless of how you establish the symbol path, it should point to the following location:

  _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
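
One way to make the setting persistent, assuming an elevated command prompt, is the built-in setx tool:

# Persist the symbol path for future sessions (open a new console to pick it up)
setx _NT_SYMBOL_PATH "srv*C:\symbols*http://msdl.microsoft.com/download/symbols"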

After establishing the symbol path, it is necessary to "load" the symbols. Xperf will connect to Microsoft's symbol server and download any symbol files that are needed to resolve references to module and function names. There are multiple ways to load the symbols: use the Trace pull-down menu to select "Load Symbols," or just right-click a graph and choose "Load Symbols."

 

Viewing the results


After configuring the symbol path and loading the symbols, view the stackwalk data to see which functions are being called by which processes. One of the charts is the "CPU Sampling" graph; right-click the chart and select "Summary Table." This produces a table that summarizes processes and the functions they called, related to the ETW providers and stackwalk flags that were specified during the event collection.

In the previous example, we used Xperf to collect data on registry functions, because we hypothetically suspected they were causing our high kernel mode time. That's not all Xperf can do; it's just as easy to collect data for any other component of the kernel that supports stackwalk flags, such as the file system, process creation and memory allocation. In the summary table for this trace, the regedit.exe process was responsible for the vast majority of registry accesses. You can determine the actual registry functions that were called by expanding the "+" to reveal the call stacks.

While the tool is fairly intuitive to use, there is plenty of help available. For assistance, reference the help file:

  C:\Program Files\Microsoft Windows Performance Toolkit\WindowsPerformanceToolkit.chm

Troubleshooting kernel mode performance problems is never an easy task. But with tools like Xperf and Event Tracing for Windows, much of the guesswork can be taken out by isolating the issue with kernel stack walking.


What is Active Directory’s role in the cloud

Posted by Alin D on September 26, 2011

In Windows shops, Active Directory is the authoritative user directory that governs access to email, file sharing, and often a broader set of business applications. So, it’s clear why so many IT managers are skeptical about putting AD into the cloud. Simply put, it’s too strategic an asset.

There is an answer for those skeptics, however. As Software-as-a-Service (SaaS) applications continue to grow, AD in the cloud can help organizations better apply these software packages to their advantage. So, instead of pushing AD to the cloud, it is replicated there. The goal here isn't to outsource the AD function, but rather to make a cloud-ready copy. The purpose of this becomes clear with the application of the technology.

Assuming a corporation uses Salesforce as its CRM tool, that company can now use a cloud-based AD system to tackle the following challenges:

1. Synchronizing user data – As users are added, removed, or moved among OUs, these changes are automatically reflected in the SaaS application.

2. Single sign-on – By allowing users to authenticate with their Windows-based credentials, you remove the need for an additional set of credentials for each SaaS application.

In fact, Microsoft has taken this a step further with Active Directory Federation Services 2.0 (ADFS). ADFS uses secure protocols such as SSL and public key cryptography to provide single sign-on to applications that are not hosted inside your network. This technology can then be applied to Office 365, SharePoint, Windows Azure and more.

Server cloud computing best practices tips

Cloud computing is relatively new, and the industry is still establishing best practices. However, the idea behind the cloud has been around for some time, enabling IT managers to learn from some of the more mature technologies.

  • Licensing: IT shops can use cloud environments to mitigate spikes or increases in data traffic by spinning up VMs remotely. If your environment uses a commercial product, make sure it can be installed in this fashion. Certain products restrict how their licenses may be used in the cloud. This is especially true of commercial Grid, HPC or DataGrid vendors.
  • Data transfer costs: If you plan on using a cloud provider such as Amazon, make sure you fully understand its cost model. Most models draw a distinct line between data transferred internally and data transferred externally. Amazon's pricing model, for example, treats internal traffic as free, while anything sent over an external IP is charged.
  • Latency: If the work environment has very precise low-latency requirements, then the cloud may not be the best way to go. Remember, using the Internet to transfer data means anything can happen at any time in multiple locations. The cloud server could go down, and it may not even be the fault of the provider or environment. It is absolutely essential to understand the performance requirements of the environment and have a clear understanding of what is deemed business critical.
  • Redundancy: IT teams should always make sure that the cloud service they approach has some sort of redundancy. When a critical application goes down and is then recovered, many times all local changes are wiped out and the user has to start with a clean slate. To combat this problem, many cloud service providers now offer persistent state storage, which means the data being used can remain linked to a specific computing instance at all times.
Cloud computing continues to advance and stabilize. In the next few years, IT managers will begin to see definitive advantages to offloading some of their server infrastructure to the cloud. Everyone will have different reasons to adopt cloud services, including a smaller hardware footprint, cost savings, disaster recovery, or business expansion. As virtualization and the concept of service-oriented architectures continue to evolve in the datacenter, the ability to run workloads in agile, scalable environments will eventually put every enterprise into the cloud one way or another.


Windows 2008 Server Core: persistent myths that have limited its widespread adoption

Posted by Alin D on September 19, 2011

Microsoft’s Server Core installation option for Windows Server has been around for a few years now, receiving its latest update in Windows Server 2008 R2. It’s more than likely that this GUI-free version of the server OS will someday become the normal and default installation option.

Microsoft has a lot of work yet to do in order for that to be feasible; in the meantime, it still makes sense to use Server Core whenever and wherever you can. That lets you become more familiar with it, and has some other tangible benefits like reduced patching, smaller memory and disk footprint (great in virtualized environments), and more.

Unfortunately, there are some persistent myths about Server Core that keep a lot of people from wanting to use it. Let’s take the four most serious myths and bust ‘em.

  • Server Core can only run the specific roles that were designed for it. Just not true. You can run anything on Server Core that (a) can be installed, and (b) doesn't depend on Windows features that aren't present on Server Core. In R2, Server Core picked up almost all of the .NET Framework, so pretty much any server-grade application should be able to run on Server Core. Installers are actually the big challenge here, because vendors continue to build highly complex, graphically dependent installers that just won't run on Server Core. It's kind of ironic: if it weren't for those installers, you could probably get a lot more stuff running on Server Core. But if an installer offers a scripted, "quiet," or other GUI-free mode, then it should run fine.
  • Server Core can't run the anti-virus agents, management agents and other tools you need on a server. False. I'm actually not aware of any current-version, server-grade anti-malware software that won't install on Server Core. As for management agents, I've seen System Center, Tivoli, LANDesk, and many others all happily humming away on Server Core. Sure, you won't get a Notification Area icon since Server Core doesn't have a Notification Area—or even a Taskbar—but the software will install and run just fine.
  • Server Core is hard to set up and manage. "Set up," yeah. I'll partially buy that one. It's not exactly difficult to configure a new server from the command line, but not many of us are familiar with all the esoteric commands. That said, a lot of the harder things—like configuring the Windows Firewall—should honestly be done in a Group Policy object anyway. As for managing Server Core on an ongoing basis… well, that's no problem. Just administer it from your Windows 7 client computer, using all of your favorite MMC snap-ins. Even Server Manager, in R2, can connect to and manage a remote computer.
  • Server Core doesn’t run PowerShell. A pure falsehood. In R2, Server Core not only runs PowerShell, it also runs WinRM, meaning that you can remotely connect to PowerShell so that you don’t even have to log on to the server’s console or start an RDP connection. Server Core doesn’t use PowerShell as its default shell; when you log onto the console (or start an RDP connection) you get good old Cmd.exe, but just running “powershell” will start the new shell. It’s likely that PowerShell will become the default console in a future version – at the same time that Microsoft provides a full set of PowerShell cmdlets for basic server configuration tasks.
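
As a quick illustration of that remoting story, here is a minimal sketch. It assumes WinRM has been enabled on the Server Core box (for example, with winrm quickconfig) and that CORE01 is a placeholder for your server's name:

# Open an interactive remote PowerShell session to a Server Core machine
Enter-PSSession -ComputerName CORE01
# ...run cmdlets as if logged on locally, then:
Exit-PSSession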
So there you have it. Server Core is straightforward, and it offers a broad range of benefits. I know organizations that have moved all of their domain controllers, for example, to Server Core – and they’re delighted with the results. So give Server Core another look.


How to solve lingering object problems in complex Active Directory forests

Posted by Alin D on July 28, 2011

A lingering object is any Active Directory object that has been deleted but gets reanimated because a domain controller (DC) failed to replicate the change during the domain's tombstone lifetime period. Objects can be present on one DC but not on another, providing an inconsistent view of Active Directory and often confusing administrators.

The following case study demonstrates how to find and remove lingering objects in a complex forest. The example features a large global forest with an empty parent root domain and three child domains that represent geographies of the world. The child domains are called the Americas, Europe, and Asia and the parent root domain is Corp.com.

In this scenario, when the user account “Ellie” was created, it generated a conflicting (CNF) object; while Ellie’s account worked in the Europe domain, there also was a CNF version of the object. Typically, CNF objects end up in the Lost and Found container and can be deleted. However, this was not the case in this example.

If this occurs, you can use the Find function in the Active Directory Users and Computers (ADUC) snap-in to find the object – just make sure Entire Directory is set as the location.

To specify the location, right-click on the domain, select Find, and in the In: drop-down box, select Entire Directory.

In this case study, the search results showed that the object was in the Europe.corp.com domain.

Therefore, from the search we learned:

  1. The GUID is 9abcee2e-aff2-459e-b6fd-e4a0ae58f39d
  2. The object is in the Europe.corp.com domain

Other methods that could have been used to find the object include:

  • The ADFind command:
    adfind -h GCServerName -gcb -f name=*lichfield* -dn
    GCServerName is the name of a Global Catalog.
  • The LDIFDE command:
    Ldifde -f user.ldf -d "dc=europe,dc=Corp,dc=com" -r "(&(objectClass=user)(name=*lichfield*))"

In some cases, it may be possible to delete the object with the free Admod tool from Joeware.net:

admod -b "<GUID=cnf:9abcee2e-aff2-459e-b6fd-e4a0ae58f39d>" -del

However, the tool did not work in this scenario.

In this case, the next step is to find the DCs the object exists on. Since not all DCs can find the object, use the Repadmin /showobjmeta command to get a list of DCs that know about this object.

The command uses the following GUID:

repadmin /showobjmeta * "<GUID=9abcee2e-aff2-459e-b6fd-e4a0ae58f39d>"
The * is the DC list (this command runs on every DC).

This command searches for the object of the specified GUID. If it finds it, it dumps all the attributes; if the object is not found, the DC returns the following error:

repadmin running command /showobjmeta against server WTEC-DC2.Wtec.adapps.hp.com
DsReplicaGetInfo() failed with status 8333 (0x208d):
Directory object not found.

You can dump the output to a file, then go through the file manually or with findstr and look for some part of the above error message. Eliminate those DCs. Regardless of the method, in the end you will have a list of the DCs that hold the object.
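
Here is a minimal sketch of that filtering step in PowerShell; the GUID is the one from this case study, so substitute your own:

# Capture /showobjmeta output for every DC, then list the DCs that
# returned "object not found" (status 8333) so they can be eliminated
repadmin /showobjmeta * "<GUID=9abcee2e-aff2-459e-b6fd-e4a0ae58f39d>" > objmeta.txt
Select-String -Path objmeta.txt -Pattern 'status 8333'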

Using the Repadmin /RemoveLingeringObjects command

The next step is the Repadmin /RemoveLingeringObjects command.

The syntax of the command is:

Repadmin /RemoveLingeringObjects <Dest_DC_List> <Source DC GUID> <NC> [/ADVISORY_MODE]

Where:

  • The Dest_DC_List is the list you just made with the /showobjmeta command.
  • The Source DC GUID is the DC object GUID of a DC in the domain that has the object and that is up to date in replication. In this case I picked the PDC emulator.
  • The NC is the naming context to operate on. Use its distinguished name (DN).

The command used in this case study is:

Repadmin /removelingeringobjects eu-dc1 9abcee2e-aff2-459e-b6fd-e4a0ae58f39d DC=europe,DC=Corp,DC=com >> error.log

There are some "gotchas" here. First, this command has to run for every DC in every domain in the forest. In this scenario, the command was run four times against EU-DC1. Note that the >> operator was used to append all the output to a single log file since the commands were scripted. This made a nice log file later.

Repadmin /removelingeringobjects eu-dc1 9abcee2e-aff2-459e-b6fd-e4a0ae58f39d DC=europe,DC=Corp,DC=com >> error.log

Repadmin /removelingeringobjects eu-dc1 9abcee2e-aff2-459e-b6fd-e4a0ae58f39d DC=americas,DC=Corp,DC=com >> error.log

Repadmin /removelingeringobjects eu-dc1 9abcee2e-aff2-459e-b6fd-e4a0ae58f39d DC=Asia,DC=Corp,DC=com >> error.log

Repadmin /removelingeringobjects eu-dc1 9abcee2e-aff2-459e-b6fd-e4a0ae58f39d DC=Europe,DC=Corp,DC=com >> error.log

Since I wasn't sure that the primary domain controller (PDC) was a good source, I reran the commands using each of the DCs in the Europe domain as the source:

Repadmin /removelingeringobjects eu-dc1 b4951358-e44e-41c7-bf86-f2ff1184c66b DC=europe,DC=corp,DC=com >> error.log

Repadmin /removelingeringobjects eu-dc1 b4951358-e44e-41c7-bf86-f2ff1184c66b DC=americas,DC=corp,DC=com >> error.log

Repadmin /removelingeringobjects eu-dc1 b4951358-e44e-41c7-bf86-f2ff1184c66b DC=Asia,DC=corp,DC=com >> error.log

A script was written to create a batch file and avoid lots of typing.
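
The following PowerShell sketch shows one way such a generator might look. It assumes the ActiveDirectory module, and the source DC GUID is a placeholder you must replace with the DSA object GUID of your known-good DC:

# Generate a batch file of repadmin commands: every DC in the forest
# as the destination, against every naming context
Import-Module ActiveDirectory
$sourceGuid = '<DSA-object-GUID-of-clean-source-DC>'    # placeholder
$ncs = 'DC=europe,DC=corp,DC=com', 'DC=americas,DC=corp,DC=com',
       'DC=asia,DC=corp,DC=com', 'DC=corp,DC=com'
$dcs = (Get-ADForest).Domains |
       ForEach-Object { Get-ADDomainController -Filter * -Server $_ }
foreach ($dc in $dcs) {
    foreach ($nc in $ncs) {
        Add-Content cleanup.cmd "repadmin /removelingeringobjects $($dc.HostName) $sourceGuid $nc >> error.log"
    }
}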

Setting strict replication consistency

To see if the object was removed, rerun the Repadmin /showobjmeta command and the Find operation in the ADUC snap-in. In this case study, the object was deleted — but it reappeared three days later.

This meant the object was replicating ahead of the script; not all copies were caught, and the object eventually replicated back. Since this was a domain upgraded from Windows 2000, the strict replication consistency registry value was set to "loose":

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Value Name = Strict Replication Consistency
Data Type = REG_DWORD
Value Data = 0

This permits lingering objects to replicate to other DCs. The registry value was set to 1 (strict) and the script was rerun. This successfully deleted the object.

The registry value can be set easily with the following command:

Repadmin /regkey DC_LIST <{+|-}key> [value]

There is only one well-known key, “strict”. The DCList can be specified as *.corp.com to include all DCs in the domain, although this typically only needs to be executed on Global Catalogs.

The following is an example of this command and the output. I specified the forest root domain in the command in my test forest, and it operated on DCs in two child domains plus the parent. DCs include Windows Server 2003, 2008 and 2008 R2.

C:\>repadmin /regkey *.wtec.adapps.hp.com +strict

repadmin running command /regkey against server WTEC-DC2.Wtec.adapps.hp.com

DWORD Regkey Value: strict : 1

DWORD Regkey Value: strict : 1

repadmin running command /regkey against server w2k8r2-dc1.w2k8r2.Wtec.adapps.hp.com

DWORD Regkey Value: strict : 1

DWORD Regkey Value: strict : 1

repadmin running command /regkey against server w2k8-dc1.w2k8.Wtec.adapps.hp.com

DWORD Regkey Value: strict : 1

DWORD Regkey Value: strict : 1

Successful running of this command ensures that the strict replication consistency registry value is set to 1 (strict). This prevents lingering objects from replicating between DCs. If you want to enable loose behavior (which is sometimes required in troubleshooting), remove the value with -strict:

C:\>repadmin /regkey wtec-dc1.wtec.adapps.hp.com -strict

DWORD Regkey Value: strict : 1
New Regkey Value: strict does not exist

When -strict is used, the strict replication consistency value is removed from the registry. The first message states the current setting (set to 1). The second message, "strict does not exist," shows it has been deleted. Running +strict on WTEC-DC1 produces this message:

C:\>repadmin /regkey wtec-dc1.wtec.adapps.hp.com +strict

Regkey Value: strict does not exist
DWORD Regkey Value: strict : 1

The initial value of the key is "does not exist," which means the DC is using "loose" consistency and will replicate lingering objects. The second message shows it was set to 1, or strict consistency. Therefore, you can run the command simply to see the current state of this key.

In the initial example, both messages stated the value was strict : 1. This means the key was already set to strict, and the command set it to 1 again.

Using LDIFDE to remove lingering objects

In addition to the Repadmin /RemoveLingeringObjects command, it is possible to remove lingering objects with LDIFDE. It is a more complicated method because LDIFDE input files need to be imported in order to make attribute changes.

The LDIFDE command is:

ldifde -i -f %InputFile% -s <ServerName> >> error.log

The -f option indicates the name of the input (import) file, and -s specifies the DC to run the import against.

The results can be output to a log file. In this example, >> was used to append it all to a single file. The input file would look like this:

File name: DeleteLO.ldf

---------------------------------
dn:
changetype: modify
replace: RemoveLingeringObject
RemoveLingeringObject: CN=NTDS Settings,CN=EU-DC1,CN=Servers,CN=Paris,CN=Sites,CN=Configuration,dc=europe,dc=corp,dc=com:<GUID=9abcee2e-aff2-459e-b6fd-e4a0ae58f39d>
-
---------------------------------

Remember, in an LDIFDE import file the change instructions must be followed by a line containing a single hyphen. The batch file used in this case study looked like this:

@ECHO off

SET InputFile=DeleteLO.ldf

ldifde -i -f %InputFile% -s Eu-DC1.europe.corp.com >> error.log

ldifde -i -f %InputFile% -s EU-DC2.europe.corp.com >> error.log

Of course, an entry is needed for each domain controller in the forest. After running the above script, examine the error log to determine if it was successful on all machines.

While lingering object detection and removal can be frustrating – especially when it comes to finding the object itself – using these basic techniques with the Repadmin tool will help you successfully clean lingering objects from the Active Directory.


How to use Disk Cleanup utility in Windows 2008

Posted by Alin D on July 12, 2011

One utility that has proved useful to me in the desktop edition of Windows is the Disk Cleanup tool, also known as cleanmgr. When invoked from a command line or batch script, it lets you perform various automated disk-cleaning operations such as emptying the Recycle Bin or removing temporary files from user directories. Being able to automate these things in Windows Server is handy, but anyone who tries to invoke cleanmgr from a command prompt in Windows Server 2008 will receive the generic “not recognized” error. What’s going on here?
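
For reference, the switch pair that stores and recalls those cleanup settings is /sageset and /sagerun; the profile number 1 below is arbitrary:

# Choose cleanup options interactively and store them as profile 1
cleanmgr /sageset:1
# Later (from a batch script or scheduled task), run the stored profile
cleanmgr /sagerun:1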

As it turns out, cleanmgr has not been removed entirely from Windows Server 2008. That's the good news. The bad news is, it is not set up by default. Cleanmgr, and a slew of other programs including Windows Media Player (in case you're wondering why you couldn't find that in Windows Server 2008 either), are contained in a set of add-ons collectively labeled the Desktop Experience. This stuff isn't installed by default in Windows Server since, as the name implies, most of these features revolve around things most commonly done on the desktop.

Some of these features are useful on a server. Disk Cleanup is certainly one of them, but there are a few others, including the Snipping Tool, the Character Map, the aforementioned Windows Media Player and the Sound Recorder. The Character Map, in particular, is one worth keeping handy. For instance, it helps when dealing with user names in Active Directory that have accent marks or other punctuation marks that aren't accessible from a conventional keyboard.

There are two ways to install Disk Cleanup in Windows Server 2008. The first, and more straightforward, approach is to install the Desktop Experience from the Add Features wizard (in Server Manager > Features). Keep in mind that this installs all the Desktop Experience components; there's no way to individually select which pieces to install. What's more, installing Desktop Experience automatically triggers the installation of other components you might not want, like the Ink and Handwriting Services subsystem. Adding any component to a server, even one not actually used, increases the attack surface. The fewer unused features you have hanging around in a server installation, the better.

The other way to add Disk Cleanup is to manually copy it out of the Windows installation directory. This approach requires a little more care because it involves typing out long pathnames. Note also that if you are running the 32-bit version of Windows Server 2008, unlikely as that may be, the files will need to be copied from a slightly different location. Once copied, cleanmgr runs as it normally does, including the use of its command-line switches to store and recall different task settings.
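
Here is a hedged sketch of that manual copy on a 64-bit system. The winsxs folder names vary by build, so the sketch locates the files with a search; verify the paths on your own server first:

# Copy cleanmgr.exe and its language resource out of winsxs to where
# Windows expects them (folder names vary by build, so verify first)
Get-ChildItem 'C:\Windows\winsxs' -Recurse -Filter cleanmgr.exe |
    Copy-Item -Destination 'C:\Windows\System32\'
Get-ChildItem 'C:\Windows\winsxs' -Recurse -Filter cleanmgr.exe.mui |
    Copy-Item -Destination 'C:\Windows\System32\en-US\'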


How to optimize WAN bandwidth by using Windows 7 BranchCache

Posted by Alin D on June 29, 2011

BranchCache is a new technology in Windows 7 and Windows Server 2008 R2 designed to optimize network bandwidth over slow wide area network links. To reduce WAN use, BranchCache copies documents from the main office to secure repositories on the remote network. As a result, when users at the remote office access files from the home office, the files are served up from the remote network’s cache rather than from the home office across the WAN link.

In the past, users at remote sites frequently clogged their WAN links when accessing large files stored on file servers at the home office. A 5 MB PowerPoint presentation on the shared drive at the home office can become 100 MB of network traffic as 20 people at the remote office each try to view it. With BranchCache, the file is downloaded to the remote office and stored in a local “cache” the first time it’s accessed. Subsequent requests for the same file are served up from that local cache, reducing the network traffic to the home office.

BranchCache is seamless for the end user. A user launches the file from the home office as usual. The request for the file is sent to the home office file server, where the BranchCache service takes over. If that file has not been previously sent to the remote office, it's copied and stored in a local cache; if it has, BranchCache redirects the remote office computer to download the file from the existing cache on the remote network. All cached files are automatically encrypted to prevent unauthorized access. (Content is decrypted and delivered to the end user only after NTFS access control lists have verified that the user is allowed to see the data.)

To maintain integrity and ensure users are working from the latest documents, BranchCache maintains a list of the files that were sent to each remote cache. When a request for a previously cached file is received, the service compares a cryptographic hash of the current file on the server with a hash of the file that was sent to the remote cache. If the hashes don't match, the document was modified after it was cached. As a result, a new version of the document is sent across the WAN to the remote location's cache.

The cache location at the remote office can be configured in distributed mode or hosted mode.

The distributed mode is the simplest to set up and configure because it doesn’t require any special servers or software at the remote site. In distributed mode, documents are stored on individual Windows 7 computers at the remote office. The Windows 7 computer that downloads the document first becomes the cache for that document. Other Windows 7 machines that request that document will be referred to the Windows 7 system hosting the cached document. If that computer isn’t online, the new computer will download the file and will become the cache for that document.

Since BranchCache is installed on Windows 7 clients by default, to turn on distributed mode simply enable the service through Group Policy and select four predefined firewall settings for inbound and outbound discovery and communication. Group Policy settings can also be used to specify the percentage of disk space allowed for the cache, as well as the network latency time that defines a remote connection. (By default, connection requests with greater than 80 milliseconds of latency are considered remote requests and automatically trigger BranchCache functionality, if enabled.)
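
Alongside Group Policy, the distributed-mode switch can also be flipped locally on a client with netsh; a minimal sketch, run from an elevated prompt:

# Enable BranchCache distributed caching mode on a Windows 7 client
netsh branchcache set service mode=DISTRIBUTED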

Distributed mode uses the WS-Discovery protocol to identify local cache locations.

In hosted mode, a Windows Server 2008 R2 system must be present in each remote office location. The specified server is the central cache repository for all documents obtained from the main office. This mode provides higher availability for the cached documents since it’s more likely to be “always on” than a Windows 7 computer in distributed mode. The hosted-mode BranchCache service can live side by side with other applications on a Windows Server 2008 R2 system.

BranchCache functionality helps reduce network traffic over slow WAN links and is intended to increase remote user satisfaction. However, the benefits of BranchCache are available only to Windows 7 Ultimate and Enterprise clients when accessing Server Message Block or HTTP content stored on Windows Server 2008 R2 systems. Perhaps it’s time to upgrade?


How VMware and Microsoft server load-balancing services work

Posted by Alin D on June 27, 2011

Virtual server load balancing among cluster hosts is all about the math. An automated server load-balancing service calculates resource utilization, then compares one host’s available capacity with that of other hosts to determine whether a cluster needs rebalancing.

But it’s not an exact science. Various load-balancing services use different calculation models to determine whether a cluster is balanced. VMware vSphere’s Distributed Resource Scheduler (DRS) feature, for example, uses different metrics than does Microsoft System Center Virtual Machine Manager’s Performance and Resource Optimization (PRO) feature. Ultimately, however, admins need a combination of performance monitoring and calculations before they live-migrate a virtual machine (VM) for load balancing.

Most of us leave cluster load balancing to an automated load-balancing service, but it’s important to understand the calculations that service uses. Understanding these metrics indicates when a load-balancing service should be tuned for better results. Plus, you’re better able to recognize when a vendor’s server load-balancing offering isn’t true load balancing.

Distributed Resource Scheduler (DRS): VMware's load-balancing service

The VMware DRS load-balancing service uses two metrics to determine whether a cluster is out of balance. When a host’s current host load standard deviation number is greater than the target host load standard deviation, DRS recognizes that the host is unbalanced with the rest of the cluster. To rebalance the cluster, DRS usually uses vMotion to migrate VMs off an overloaded host.

These server load-balancing metrics reside in the VMware DRS pane inside the vSphere Client. DRS gathers its values by analyzing each host’s CPU and memory resources to determine a load level. Then, the load-balancing service determines an average load level and standard deviation from that average. As long as vSphere is operational, DRS re-evaluates its cluster load every five minutes to check for balance.

If the load-balancing service determines that rebalancing is necessary, DRS prioritizes which virtual machines need to be rebalanced across a cluster. Using the following equation, the service calculates a host’s balance compared with other hosts in the cluster.

The following equation determines cluster load balancing.
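
In essence, the calculation works like this (a sketch based on VMware's DRS documentation), where N is the number of hosts in the cluster:

current host load standard deviation = sqrt( sum over hosts of (load_i - mean load)^2 / N )

where, for each host i: load_i = sum of the VM resource entitlements on the host / host capacity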

A perfectly balanced cluster reports a zero for its current host load standard deviation. That means the host is balanced with the others in the cluster. If that number increases, the VMs on one server require more resources than the average, and the total load on that host is out of balance with the levels on other hosts.

DRS then makes prioritized recommendations to restore balance. Priority-one recommendations should be implemented immediately, while priority-five recommendations won’t do much to fix the imbalance.

Microsoft’s Performance and Resource Optimization – Server load balancing

Microsoft’s System Center Virtual Machine Manager (SCVMM) takes a different approach to cluster load balancing. Natively, it doesn’t take into account aggregate cluster conditions when calculating resource utilization. Its load-balancing service, PRO, considers only overutilization on individual hosts.

You should also note some important conditions with SCVMM. Neither Hyper-V nor SCVMM alone can automatically relocate VMs based on performance conditions. SCVMM can relocate virtual machines only after it has been integrated with System Center Operations Manager (SCOM) and once PRO is enabled. That’s because SCVMM requires SCOM to support VM monitoring.

In SCVMM 2008 R2, if host resources are overloaded, virtual machines can be live-migrated off a cluster host. According to a Microsoft TechNet article, SCVMM recognizes that a host is overloaded when memory utilization is greater than “physical memory on the host minus the host reserve value for memory on the host.” It also recognizes when CPU utilization is greater than “100% minus the host reserve for CPU on the host.”

Neither server load-balancing calculation aggregates metrics throughout the cluster to determine resource balance. But SCVMM uses a per-host rating system that determines where to live-migrate VMs once a host is overloaded. The system uses four resources in its algorithm: CPU, memory, disk I/O capacity and network capacity. You can prioritize these resources with a slider in the SCVMM console.

There’s also an alternative solution for server load balancing: a PowerShell script that analyzes cluster conditions. Running the script balances virtual machines across a cluster by comparing the memory properties of hosts and VMs in the cluster.
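
A sketch of that idea, assuming the FailoverClusters module on Windows Server 2008 R2, with 'MyVM' as a placeholder for one of your clustered VM groups:

# Find the running node with the most free physical memory and
# live-migrate a VM role to it
Import-Module FailoverClusters
$target = Get-ClusterNode | Where-Object { $_.State -eq 'Up' } |
    Sort-Object { (Get-WmiObject Win32_OperatingSystem -ComputerName $_.Name).FreePhysicalMemory } -Descending |
    Select-Object -First 1
Move-ClusterVirtualMachineRole -Name 'MyVM' -Node $target.Name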

Load-balancing services use numerous calculations to determine whether clustered VMs are balanced. But if you don’t understand how your service computes these metrics, server load balancing is tricky. Even if you’re not a math whiz, these metrics help prevent load-balancing problems.


The role of coordinator node in a Windows 2008 Failover cluster

Posted by Alin D on June 27, 2011

I’m a big fan of Hyper-V but not its underlying clustering technology. The Windows Failover Clustering architecture was originally designed as general-purpose clustering infrastructure, but its management tool sets, such as Windows Failover Cluster coordinator nodes, are hardly a set-it-and-forget-it solution.

Hyper-V’s clustered storage requirements for Live Migration bring the complexity of a Windows Failover Cluster into high relief. For virtual machine (VM) disk file storage, Hyper-V — like its competitors — requires a shared storage area network connection between virtual hosts. Unlike its competitors, however, Hyper-V has more than one shared-storage configuration: with or without Cluster Shared Volumes (CSV).

This article explains how Windows assigns ownership of certain cluster tasks to a coordinator node, and how to use that ownership to minimize the time those tasks consume.

How Cluster Shared Volumes works


To understand the coordinator node’s role, you must understand CSV — and to grasp CSV, you must also comprehend how a Windows Failover Cluster creates and works with disk resources.

Prior to the release of Windows Server 2008 R2 and CSV, disks that were part of a cluster had to be created as a disk resource within the Failover Cluster Manager console. In this configuration, each disk resource behaved as an individually managed failover unit. Disks, network names, IP addresses, and VM resources were linked together to create a chain of dependencies. When a problem occurred, this linkage allowed each VM resource to simultaneously fail over to another cluster node.

Without CSV, the cluster's failover unit was the disk resource. Setting the failover boundary at the disk resource meant the disk's data failed over with the disk. Furthermore, any failover of the disk resource also migrated every VM on the same disk. For most IT shops, this solution was not sufficient. To circumvent this problem, most administrators created a separate disk resource and logical unit number (LUN) for each VM.

CSV eliminates this issue by relocating the unit of failover to the files on a disk instead of to the entire disk. This process enables every VM’s disk file to be the unit of failover. By placing VMs atop a CSV-enabled disk resource, it’s possible to host multiple VM disk files on a single LUN and enjoy the benefits of individual VM failovers.

The role of the coordinator node

At this point, the Windows Failover Cluster coordinator node becomes important. In a CSV-enabled configuration, individual files on a disk resource can be owned by different cluster members. But the disk resource that contains these files must also be owned by a cluster member. Microsoft calls the disk resource owner the coordinator node.

You won’t use the coordinator node often. Nearly every VM-to-disk operation occurs from the cluster node that owns the VM directly to its disk. But certain operations must go through a coordinator node, such as copying Virtual Hard Disk (VHD) files to a LUN. This action tends to be disk-intensive, and it can take a long time.

Always copy VHD files to a LUN from the coordinator node. While any cluster node can transparently complete the task, the responsibility is handed off to the coordinator node. As a result, if you initiate the action on a noncoordinator node, it takes more time to complete.

You can alter which server operates as the coordinator node by changing the ownership of the disk resource. Normally, with CSV enabled, you won’t necessarily need to do this task. If you have many file-copy operations, however, save yourself time by using the right node to start your work.
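
With the FailoverClusters PowerShell module, moving CSV ownership takes one cmdlet; a sketch with placeholder names:

# List CSV disks and their current owner nodes, then move one
Import-Module FailoverClusters
Get-ClusterSharedVolume
Move-ClusterSharedVolume -Name 'Cluster Disk 1' -Node NODE2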


Types of Trust Relationships in Windows 2008 Active Directory

Posted by Alin D on June 26, 2011

Simply stated, a trust relationship is a configured link that enables a domain to access resources in another domain, or a forest to access resources in another forest. A trust relationship provides such access to users without the need to create additional user accounts in the other forest or domain. Consequently, administrators do not need to configure multiple user accounts, and users do not need to remember multiple usernames and passwords.

This part of article introduces the following types of trust relationships:

Transitive trusts

Forest trusts

External trusts

Realm trusts

Shortcut trusts

 

Transitive Trusts

Microsoft introduced the concept of transitive trusts in Windows 2000. This represented a considerable improvement over the previous Windows NT trusts that required explicitly defining each and every trust relationship, a requirement that could become unwieldy in a large enterprise network. To understand the principle of transitive trusts, consider the nontransitive case: in Windows NT 4.0, if you configured Domain A to trust Domain B and Domain B to trust Domain C, Domain A did not trust Domain C unless you configured a separate trust relationship. Furthermore, each trust relationship worked in one direction only; for a two-way trust relationship, you had to create two separate trusts, one in each direction.

In all versions of Active Directory back to Windows 2000, the default behavior is that all domains in the forest trust each other with two-way transitive trust relationships. Whenever you add a new child domain or a new domain tree to an existing forest, new trust relationships are automatically created with each of the other domains in the forest. These trusts do not require administrative intervention. The other types of trust relationships, which we discuss next, require manual configuration by the administrator.

 

Forest Trusts

A forest trust is used to share resources between forests. This type of trust relationship consists of transitive trusts between every domain in each forest. The trust relationship is created manually and can be either one-way or two-way. The following are several benefits of a forest trust:

  • They provide simple management of resource sharing by reducing the number of external trusts required in multidomain forests.
  • They enable a wider scope of user principal name (UPN) authentication across all domains in the trusting forests.
  • They provide increased administrative flexibility by allowing administrators to collaborate on task delegation across forest boundaries.
  • Each forest remains isolated in certain aspects, such as directory replication, schema modification, and adding domains, all of which affect only the forest to which they apply.
  • They improve the trustworthiness of authorization data. You can use both the Kerberos and NTLM authentication protocols when authenticating across forests.

 

External Trusts and Realm Trusts

External trusts are individual trust relationships that you can set up between two domains in different forests. They are nontransitive, which means you use them explicitly to define a one-to-one relationship between domains. You can use them to create trust relationships with AD DS domains operating at the Windows 2000 domain functional level or with Windows NT 4.0 domains. Furthermore, you can use an external trust if you need to create a trust relationship that involves only specific domains within two different forests.

You can use a realm trust to share information between an AD DS domain and any non-Windows realm that supports Kerberos version 5 (V5), such as UNIX. A realm trust supports UNIX identity management, enabling users in UNIX realms to seamlessly access Active Directory resources by means of password synchronization with Windows Server 2008's Server for Network Information Service (NIS) feature. Password synchronization enables users with accounts in both AD DS and UNIX realms to synchronize password changes across the AD DS domain and the UNIX realm. Furthermore, an AD DS domain controller can act as a master NIS server for the UNIX realm.

Shortcut Trusts

Unlike the previously discussed trusts, a shortcut trust relationship exists within a single forest. It is an additional trust relationship between two child domains, which optimizes the authentication process when a large number of users require access to resources in another domain. It is especially useful if the normal authentication path must cross several domains.

Suppose that users in the C.A.A.com domain require access to the C.B.B.com domain, which is located in another tree of the same forest. The authentication path must cross five domain boundaries to access the C.B.B.com domain. If an administrator sets up a shortcut trust between these two domains, the logon process speeds up considerably. This is also true for other possible authentication paths such as B.A.com to B.B.com or even C.A.A.com to B.A.com.
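
A shortcut trust like this can be created with the netdom tool; a sketch using the example domain names (credentials omitted):

# Create a two-way shortcut trust between the two child domains
netdom trust C.A.A.com /d:C.B.B.com /add /twoway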


 