Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Archive for the ‘SQL 2012’ Category

Unexpected features in SQL Server 2012

Posted by Alin D on February 6, 2013

After more than 25 years of working with Microsoft SQL Server, you’d think pretty much everything has been done at least once. I thought it would be a challenge to find anything surprising in a product with roots going back to the mid-1980s. But there have recently been two pretty major changes in SQL Server. Columnstore Indexes and the Hekaton in-memory enhancements offer massive, game-changing improvements in performance great enough to be called a surprise.

Columnstore Indexes

Columnstore Indexes were bundled with Microsoft SQL Server 2012, borrowing techniques originally developed for the PowerPivot in-memory store. Columnstore changes the way rows are stored: instead of traditional row-by-row storage, data is stored one column at a time, in a new layout that groups roughly a million column values into one large blob structure. This structure allows for incredible data compression.
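As a sketch of what this looks like in practice, the following T-SQL creates a nonclustered Columnstore Index in SQL Server 2012. All table and column names here are illustrative, not from the article:

```sql
-- Hypothetical data-warehouse fact table.
CREATE TABLE dbo.SalesFact (
    SaleDate    DATE  NOT NULL,
    ProductID   INT   NOT NULL,
    StoreID     INT   NOT NULL,
    Quantity    INT   NOT NULL,
    SalesAmount MONEY NOT NULL
);
GO

-- In SQL Server 2012 a columnstore index is nonclustered; include the
-- columns your analytic queries will touch.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_SalesFact_ColumnStore
ON dbo.SalesFact (SaleDate, ProductID, StoreID, Quantity, SalesAmount);
GO
```

Queries that aggregate over a few of these columns can then be satisfied from the compressed column segments rather than full rows.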

A new method of processing, which Microsoft refers to as batch mode, also speeds up query processing in SQL Server 2012. As Dr. David DeWitt explained at the PASS Summit in 2010, the closeness of successive column values works well with the nature of modern CPUs by minimizing data movement between the levels of cache and the CPU.

There is, however, one big limitation to the current implementation of Columnstore Indexes. They are read-only, which means that the tables they index will also be read-only. Any table that has a Columnstore Index will not allow any inserts, updates or deletes. To change the data, the Columnstore Index has to be dropped, the necessary changes made and the Columnstore Index rebuilt. This isn’t the kind of operation that’s friendly to an online transaction processing (OLTP) system, which is what makes it solely a data warehousing, online analytical processing (OLAP) feature. It also puts a premium on partitioning on any table with a Columnstore Index. In the next major release of SQL Server, Microsoft is promising enhancements that lift the updatability restriction and also allow the Columnstore to be the clustered index.
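Until that restriction is lifted, a common load pattern is to disable the index, change the data, and rebuild. A minimal sketch, assuming a hypothetical dbo.SalesFact table with a columnstore index and a staging table (none of these names come from the article):

```sql
-- Disabling the columnstore index makes the table writable again.
ALTER INDEX IX_SalesFact_ColumnStore ON dbo.SalesFact DISABLE;

-- Load the new data (staging table is hypothetical).
INSERT INTO dbo.SalesFact (SaleDate, ProductID, StoreID, Quantity, SalesAmount)
SELECT SaleDate, ProductID, StoreID, Quantity, SalesAmount
FROM staging.SalesFact;

-- Rebuild to restore read access and columnstore performance.
ALTER INDEX IX_SalesFact_ColumnStore ON dbo.SalesFact REBUILD;
```

On large partitioned tables, switching the new data into a partition and rebuilding only that partition's index is usually cheaper than rebuilding the whole index.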

I’ve had a chance to try out a Columnstore Index on a few tables. What I’ve found is that it works great when the nature of the query dovetails with the Columnstore. As a rule of thumb, the more columns in the table, the better the results, because SQL Server can avoid reading a large part of the index. In other situations, such as one narrow entity-attribute-value table that I work with frequently, the results are mixed. Summary queries that aggregate are much faster, to the tune of three seconds instead of 300, but queries that return all the columns of a small set of rows aren’t helped at all. I’ll be using Columnstore Indexes a lot, looking for those 100-times speed improvements.

Hekaton

While Columnstore Indexes make data warehouse applications faster, Hekaton is intended for the other end of the application spectrum: high-volume OLTP systems. Websites, exchanges, manufacturing systems and order-entry systems that execute large numbers of, usually small, transactions are Hekaton’s target. The Hekaton extensions to SQL Server are what is known as an “in-memory” database, but Microsoft has combined several technologies to pump transaction volume up to 50 times above what could previously be achieved. Hekaton will be included in the next release of SQL Server, which is not yet scheduled for shipment.

Hekaton starts with tables that are stored on disk but are pulled completely into system RAM. This means Hekaton will be limited to smaller tables, or will require a design that separates high-activity data from historical data. This requirement works well with the obvious server trend towards larger and larger amounts of RAM. It’s not uncommon to work with servers with 500 gigabytes to two terabytes of RAM, which is plenty of room for the active data in most applications. The changes don’t stop with the approach to storage.
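Since Hekaton had not shipped when this was written, the exact syntax was still subject to change; the sketch below uses the memory-optimized table syntax as it eventually appeared in SQL Server 2014, with hypothetical table and column names, so treat it as illustrative only:

```sql
-- Requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.SessionState (
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserID    INT       NOT NULL,
    LastTouch DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

SCHEMA_AND_DATA keeps the table fully durable; a SCHEMA_ONLY table trades durability for even more speed, which suits transient data such as session state.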

Code in a Hekaton system is written in good old T-SQL, just like we’ve used for years. But unlike traditional T-SQL, Hekaton code is compiled to native machine code and there’s no interpreter. T-SQL is great for data manipulation, but zipping through business logic isn’t one of its strengths; native compilation should speed things up significantly.
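Again hedging on pre-release syntax, a natively compiled procedure eventually looked like the following in SQL Server 2014 (names are hypothetical, and the procedure assumes the memory-optimized dbo.SessionState table exists):

```sql
CREATE PROCEDURE dbo.usp_TouchSession (@SessionID INT)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The body is compiled to machine code; no interpreter at runtime.
    UPDATE dbo.SessionState
    SET LastTouch = SYSUTCDATETIME()
    WHERE SessionID = @SessionID;
END;
```

The BEGIN ATOMIC block replaces explicit BEGIN/COMMIT TRANSACTION, and SCHEMABINDING is required so the compiled code can rely on the table's shape not changing underneath it.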

As servers have gained more cores, contention on the locks and latches that SQL Server uses to synchronize data access becomes an issue as the system scales up. Hekaton bypasses these issues by implementing its own concurrency mechanism, based on optimistic transactions that are optimized for an in-memory database. This allows many transactions to run simultaneously. However, the ability to mix Hekaton tables and other tables in structures such as a join may be limited, and there will be other restrictions as well.

By combining the in-memory data handling, compiled code, and new concurrency control mechanism, the preliminary benchmarks for Hekaton look very promising. At SQL PASS 2012 I saw the development team demonstrate a 25-times throughput improvement in transaction volume. That’s 25 times, not just 25%. These are the kinds of surprising changes still in the cards for SQL Server. I’m looking forward to working with SQL Server more in the near future.


Posted in SQL 2012 | Tagged: , , , | Leave a Comment »

How to create and configure SQL Server 2012 AlwaysOn

Posted by Alin D on November 27, 2012

One of the better-known features in the release of SQL Server 2012 Enterprise Edition is AlwaysOn. This has been designed to meet the ever-increasing need for ‘High Availability’ (HA). AlwaysOn does not use entirely new technologies but makes more effective use of existing technologies that are tried and tested. It aims to provide more granular control to achieve High Availability. Currently, depending on your environment, you could already be using one or more of the following HA components that existed in previous versions of SQL Server:

  • Single Site Windows Server Failover Clustering
  • Multi-Site Windows Server Failover Clustering
  • SAN-level Block Replication
  • Transaction Log Shipping
  • Database Mirroring
  • Transactional Replication
  • Peer-to-Peer Replication

Some of these can take time and resources to implement, and may therefore not be meeting your current requirements. This is where SQL Server 2012 AlwaysOn can help, because it provides the benefits of:

  • Using the WSFC APIs to perform failovers; shared storage is not required
  • Utilizing database mirroring for data transfer over TCP/IP
  • Providing a combination of synchronous and asynchronous mirroring
  • Providing a logical grouping of similar databases via Availability Groups
  • Creating up to four readable secondary replicas
  • Allowing backups to be undertaken on a secondary replica
  • Performing DBCC statements against a secondary replica
  • Employing built-in compression and encryption

I’ll explain some of these components of AlwaysOn below.

Windows Server Failover Clustering (WSFC)

Clustering technology has been around for quite some time, starting with Microsoft Cluster Service (MSCS) back in the NT 4.0 days. WSFC technology is part of the backbone of AlwaysOn. A WSFC cluster is a group of independent servers that work together to increase the availability of applications and services. It does this by monitoring the health of the active node and failing over to a backup node, with automatic transfer of resource ownership, when problems are detected.

Although a WSFC is able to span multiple subnets, SQL Server has not, until now, been able to support a clustered instance across multiple subnets. It has therefore been quite expensive to set up clustering across multiple data centres, because the WSFC required shared storage in both data centres as well as block-level SAN replication. Getting that setup correct has required a lot of work with your storage vendors.

AlwaysOn Nodes

The nodes that you will use in your SQL Server 2012 AlwaysOn solution have to be part of a WSFC. The first step we need to undertake in preparing our AlwaysOn nodes is to add the Failover Cluster Feature to each node. I’ll go into detail later on in this article.

AlwaysOn Storage

In SQL Server versions prior to SQL Server 2012, a clustered instance on a WSFC required the storage to be presented as shared storage. This requirement made the storage more expensive and a little more complicated to configure and administer. With SQL Server 2012 AlwaysOn, your solution does not have to use shared storage; it can use SAN, DAS, NAS or local disk depending on your budget and requirements. I suggest working with your storage providers to come up with the solution you need.

Availability Groups

SQL Server 2012 AlwaysOn allows more granular control of your environment with the introduction of AlwaysOn Availability Groups (AAGs). AAGs allow you to configure groups of databases that fail over together when there is a problem with the host server. When configuring your AAGs, you:

  • Configure your AAG on the primary replica (the AAG contains the group of databases that you wish to fail over together to your secondary replicas)
  • Configure between one and four secondary replicas, with any combination of synchronous (maximum of two) and asynchronous mirroring (your primary replica is available for read and write connectivity, while your secondary replicas can be configured for read-only, read-intent or no access)

Maintenance Tasks/ Reporting

AlwaysOn allows you to use the secondary replicas created when you set up your AAGs to undertake regular database maintenance tasks, removing some of the performance overhead from your primary production server. Tasks you could consider undertaking on a secondary replica include:

  • Database Backups
    • Full Backup With Copy_Only
    • Transaction Log Backups
  • DBCC CheckDB
  • Reporting
  • Database Snapshots
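The backup tasks above differ slightly from backups on a primary: on a readable secondary, full database backups must use COPY_ONLY, while transaction log backups are taken as normal. A sketch, with hypothetical database and share names:

```sql
-- Run on a readable secondary replica.
-- Full backups on a secondary must be COPY_ONLY.
BACKUP DATABASE SalesDB
TO DISK = N'\\backupshare\sql\SalesDB_copyonly.bak'
WITH COPY_ONLY, COMPRESSION;

-- Log backups on a secondary are regular log backups.
BACKUP LOG SalesDB
TO DISK = N'\\backupshare\sql\SalesDB_log.trn'
WITH COMPRESSION;
```

Because COPY_ONLY full backups do not reset the differential base, differential backups still have to run on the primary if you use them.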

Security & Performance

To give you the full benefits of high availability, there will be a lot of data movement, which brings security risks and higher bandwidth demands. To mitigate these, Transparent Data Encryption (TDE) and backup compression both ship with the Enterprise Edition.

Implementing AlwaysOn

Now that we have covered the basics of what an AlwaysOn solution might look like, we are ready to plan an implementation that meets your ever-increasing high-availability and DR requirements.

Building your AlwaysOn Cluster

In this scenario we are going to build a two-node SQL Server 2012 AlwaysOn Cluster. To achieve this, all of the nodes that are going to participate in the SQL Server AlwaysOn Cluster need to have .NET Framework 3.5.1 and the Failover Clustering feature enabled.

Required features for Failover Cluster

Now that we have enabled both of these features we can build our WSFC. From the Control Panel | Administrative Tools | Failover Cluster Manager | Validate a Configuration, we can validate whether our servers are okay to participate in a WSFC.

Validate Failover Cluster

Building your Windows Server Failover Cluster

There is no difference between building your WSFC for use with SQL Server 2012 AlwaysOn and building a WSFC for SQL Server 2008 R2. If you have never built a WSFC before, you can read more in the Failover Cluster Step-by-Step Guide. In this article I am not going to go through the WSFC build, but I do need to mention that your WSFC build needs to pass all of the validation rules in order to give you a supported WSFC.
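Both the validation and the cluster build can also be scripted with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2. A sketch, with hypothetical node names, cluster name and IP address:

```powershell
Import-Module FailoverClusters

# Run the same validation tests the Validate a Configuration wizard performs.
Test-Cluster -Node SQLNODE1, SQLNODE2

# Create the two-node cluster (name and static address are illustrative).
New-Cluster -Name SQLAOCLUSTER -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.50
```

Keep the Test-Cluster report; a fully passing validation is what makes the cluster a supported configuration.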

SQL Server 2012 Setup

Now that we have our two nodes in our WSFC, we are ready to start the build process for our SQL Server 2012 AlwaysOn Cluster. We need to make sure that we have our installation media which is available for download from Microsoft SQL Server 2012 Downloads.

On Node1, we start the setup.exe to begin the installation process. We are greeted with the initial screen. You should navigate to the Installation Tab to start the installation, selecting ‘New SQL Server stand-alone installation or add features to an existing installation’.

Stand-Alone Installation

Enter your product key, click ‘Next’.

Enter Product Key

Accept the Terms and Conditions, click ‘Next’.

Ensure you select ‘SQL Server Feature Installation’, click ‘Next’.

SQL Server Feature Installation

Choose the features you need to install, click ‘Next’.

Recommended SQL Server Features

Our installation rules will be checked and, as long as there are no issues, you can continue with the installation by clicking ‘Next’.

Enter your SQL Server 2012 Instance Name for the Instance that you are building, click ‘Next’.

Type your Instance Name

Normally I would recommend having different service accounts for each of the SQL Services that you are installing. However, in this installation I am just using the default local accounts. You will need to have your Domain service accounts created and set the passwords on this Server Configuration screen in the installation. Once you have set the passwords, make sure you click on the Collation Tab so as to configure your Collation for the Instance, click ‘Next’.

Service Account Detail

On the Database Engine Configuration screen there are three tabs that we need to pay attention to. The Server Configuration Tab is where you set your security mode – Either Windows (recommended) or Mixed Mode. Remember to add the current account you are running the installation as, as well as any other individuals or groups that need to be members of the SysAdmins group.

The Data Directories Tab allows you to specify where you want to have your User Databases, TempDB and backup locations to be stored. Traditionally you would have four separate drive locations depending on your storage for Data files, Log Files, TempDB and Backups.

The FileStream Tab allows you to Enable Filestream if this is a required option that you need in your environment.

Click ‘Next’ until you get to the ‘Ready to Install’ screen. At this point you should review what is going to be installed and, if you are happy, click the Install button.

Remember that these same steps need to be completed on the second node that you are including into your SQL Server 2012 AlwaysOn Cluster.

Configuring SQL Server 2012

Now that we have installed two stand-alone instances of SQL Server 2012 on our two servers in the WSFC we need to undertake some post-installation configuration. This is achieved by using the SQL Server Configuration Manager which is available from Start | All Programs | Microsoft SQL Server 2012 | Configuration Tools.

Because data transfers by SQL Server 2012 AlwaysOn are done via TCP/IP, we need to enable this protocol in the Network Configuration Protocols. By default it will be disabled. Change the value to Enabled and click ‘OK’.

We are now at the main point of configuring our SQL Server 2012 AlwaysOn Cluster. Previously, when creating a clustered SQL Server instance, we had to use the clustered build option. You will have noticed that we have instead installed stand-alone instances of SQL Server on each of the nodes participating in the WSFC. We now need to enable AlwaysOn Availability Groups. In the SQL Server Configuration Manager, select the instance of SQL Server, right-click and select Properties. On the ‘AlwaysOn High Availability’ tab, tick the ‘Enable AlwaysOn Availability Groups’ check box.

Click ‘OK’. The changes will not take effect until the Instance is restarted. You will need to repeat this step on the second instance we installed. (This will need to be done on every instance in your SQL Server 2012 AlwaysOn Cluster)

Enable AlwaysOn Availability Groups
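The same setting can be scripted. The SQL Server 2012 PowerShell module includes an Enable-SqlAlwaysOn cmdlet; the instance names below are hypothetical, and -Force restarts the SQL Server service for you:

```powershell
# Run from a SQL Server 2012 PowerShell (sqlps) session.
# -Force restarts the service so the change takes effect immediately.
Enable-SqlAlwaysOn -ServerInstance "SQLNODE1\INST1" -Force
Enable-SqlAlwaysOn -ServerInstance "SQLNODE2\INST1" -Force
```

Repeat for every instance that will participate in your AlwaysOn cluster.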

 

We are now ready to start configuring our Availability Groups.

Configuring SQL Server 2012 AlwaysOn Availability Groups

Before SQL Server 2012, one of the options available for you to use to build your High Availability (HA) solution was to utilise Database Mirroring. The Database Mirroring technology is very good at what it was created for. However, it has some limitations when it comes to your HA solution. The limitations include:

  • A Single Secondary database
  • Mirrored database is accessible via db snapshot only until failover occurs
  • Lack of support for MSDTC (distributed transactions)
  • Related databases are not able to be grouped together

SQL Server 2012 AAGs resolve most of these issues, giving you more flexibility and more granular control over your environment to meet your ever-growing, complex HA requirements.

SQL Server 2012 AAGs still use the Database Mirroring technology to transfer your data via TCP/IP, either synchronously or asynchronously, to one or more replicas, but they give you the added advantage of being able to access those replicas. They still do not support cross-database transactional consistency for the databases participating in an availability group.

Availability Groups

As its name suggests, an Availability Group is a grouping of related databases. When you set up Database Mirroring before SQL Server 2012, you could set up multiple mirrors, but only for a single database at a time. If you had multiple databases that relied on each other for an application to work, there was no simple way of ensuring that all of the databases failed over together. Availability Groups now allow you to group related databases together. You can set up as many as 10 AAGs per instance, and across those 10 Availability Groups you can have up to 100 replica databases participating.

The key components of an Availability Group are its replicas and availability modes, described below.

Availability Replicas

Availability replicas give you the ability to set up:

  • A primary replica, which allows read and write activity against the databases configured in the AAG
  • Up to four secondary replicas, which allow read-only access to the databases configured in the AAG and can also be used to perform backups

Availability Modes

As mentioned above, when configuring your SQL Server 2012 AlwaysOn Availability Groups, there are some considerations that need to be taken into account when determining what type of availability mode you can use.

If you want to use AAGs for a reporting process, you could locate your secondary replica in the same physical data centre and implement synchronous-commit mode, giving you a near-real-time, read-only group of databases to report against without burdening the primary databases with reporting overhead. You probably would not consider this availability mode where there are large distances between data centres.

If you have a reporting requirement that does not need the data to be near real time, you could consider placing your secondary replica in a separate data centre that may be more than 30-40 kilometres away. In this case, you would look at implementing asynchronous commits for your AAG. An asynchronous-commit method reduces transaction latency at the primary site, but it opens you up to the possibility of data loss.

As you can set up several secondary replicas, you are able to use different availability modes in your environment. Each replica is configured separately; for example, you may have two synchronous implementations and two asynchronous implementations.

In this example you would have your primary databases in AAG1 residing in DC1. You then set up a secondary replica, also located in DC1, in synchronous-commit mode, allowing you to run your reporting requirements without the reporting overhead impacting your primary database. This also covers your HA requirements: you have a secondary environment that is transactionally consistent and can be failed over to in the event of an issue with your primary databases. You could then set up secondary replicas in DC2, DC3 and DC4 in asynchronous-commit mode. These asynchronous secondary replicas meet your DR requirements by giving you multiple copies in geographically dispersed locations, with the ability to fail over in the event of an issue at the primary site.

Failing Over

As with Database Mirroring and Windows Server Failover Clustering, AlwaysOn Availability Groups provide the ability to failover between the primary and secondary replicas that you have setup. There are three forms of failover which can be undertaken with AAG’s:

  • Automatic - Supported by Synchronous-Commit Mode – No Data Loss
  • Manual - Supported by Synchronous-Commit Mode – No Data Loss
  • Forced – Supported by Asynchronous-Commit – Possible Data Loss

The Availability Mode that is in use will depend on whether you are implementing High Availability or Disaster Recovery. This affects the failover setup that you are going to implement in your SQL Server 2012 AlwaysOn environment.

Availability Group Listener

In order to take advantage of the various solutions that we have stepped through in this article, we need to set up and allow for the applications to maintain connectivity to the SQL Server Databases after a failover. This is where the AlwaysOn Availability Group Listeners (AAGL’s) come into use.

An Availability Group Listener is a virtual server name that applications connect to. From the application’s point of view, it does not matter where the availability database is active and available for use. The AAGL consists of:

  • Virtual Network Name (VNN)
  • Listener Port
  • One or more Virtual IP Addresses (VIPs)

For your application to connect, you can either use a connection string that targets your AAGL or connect directly to your SQL Server instance. However, a direct connection does not give you the failover support that this technology was built for.

When a failover occurs for an AAG, the connection from the client is terminated. To regain access, the client needs to reconnect to the AAGL, so the application must be designed and built to poll for the AAGL. Depending on the connection that you require:

  • Primary database
  • Secondary read replica

You will need to configure ‘ApplicationIntent’ in your AAGL connection string appropriately.
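As an illustration, a client connection string targeting a listener might look like the following; the listener, port and database names are hypothetical, and ApplicationIntent/MultiSubnetFailover are the SQL Server 2012 client keywords (supported by SQL Server Native Client 11 and recent .NET clients):

```
Server=tcp:SalesAAGL,1433;Database=SalesDB;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True
```

ApplicationIntent=ReadOnly routes the connection to a readable secondary where read-only routing is configured; omit it (or use ReadWrite) for connections that must reach the primary.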

With these points in mind, we are now able to create our first AAG in one of several ways:

  • The Create Availability Group Wizard
  • T-SQL
  • PowerShell
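If you prefer T-SQL over the wizard, the shape of the statement is sketched below. All server, database, endpoint and address values are illustrative; the databases must already be restored (WITH NORECOVERY) on the secondary, which then runs ALTER AVAILABILITY GROUP ... JOIN:

```sql
-- Run on the primary replica.
CREATE AVAILABILITY GROUP SalesAAG
WITH (AUTOMATED_BACKUP_PREFERENCE = SECONDARY)
FOR DATABASE SalesDB, OrdersDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY))
-- The listener gives applications a single virtual name to connect to.
LISTENER N'SalesAAGL' (WITH IP ((N'192.168.1.60', N'255.255.255.0')), PORT = 1433);
```

The wizard walked through below generates essentially this script for you, and its final screen offers to save it.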

To use the wizard, expand the AlwaysOn High Availability tree, right-click ‘Availability Groups’ and select ‘New Availability Group Wizard’.

New AlwaysOn Availability Group Wizard

Name your AAG, click ‘Next’.

 

Name your AlwaysOn Availability Group

Select the databases that you need to include in the AAG, click ‘Next’.

Availability Databases

Your primary replica will automatically be available for you to configure. Choose the availability mode, failover strategy and readable-secondary requirements. Click ‘Add Replica’ and connect to the appropriate secondary servers. Ensure that you configure your secondary the same as your primary.

Availability Replicas

 

Select the Listener tab, give your AAGL a name, port number and the appropriate IP addresses, then click ‘Next’.

Every replica needs access to a shared location holding the database backups created and used for synchronising the secondary databases. Enter your share path and click ‘Next’.

 

Initial Data Synchronization

 

Ensure that you receive all green ticks for your validation, click ‘Next’.

Review the summary, click ‘Finish’.

This design was built in a lab. If you want to know how to build a similar lab, please visit my friend’s blog post.

Enjoy configuring your new SQL Server 2012 AlwaysOn environment.

Posted in SQL 2012 | Tagged: , | Leave a Comment »

Reasons to use Microsoft SQL Server 2012

Posted by Alin D on September 5, 2012

The last three releases of SQL Server had unofficial labels, each based on the type of user that the most new features were geared toward. For example, SQL Server 2005 was considered a developer release, while SQL Server 2008 was viewed as a release for database administrators. SQL Server 2008 R2, with Microsoft’s PowerPivot data analysis tool and enhancements to both SQL Server Analysis Services and Reporting Services, was widely known as a business intelligence (BI) release.
Microsoft released the first community technology preview (CTP) of the next version of SQL Server, code-named Denali, at November’s Professional Association for SQL Server (PASS) Summit 2010 in Seattle. It appears that this release won’t continue the labeling tradition; instead, SQL Server Denali has something for everybody. And several of the new features make Denali compelling enough to start evaluating and testing it now, so you’ll be ready to upgrade once the commercial release becomes available later this year or next year.

Column-Store Indexes

First, there’s Project Apollo, which brings the same column-store index technology that powers PowerPivot into the database engine. Fueled by the database architecture called VertiPaq, column-store indexes are designed to provide significant performance gains over regular indexes.

In a regular index, indexed data from each row is kept together on a single page, and data in each column is spread across all of the pages. A column-store index packs data from each column together, so each page contains data from a single column. Plus, the index data for each column is compressed, and since columns often contain highly repetitive values, you’re looking at a high compression ratio. All this means fewer pages in the index and fewer pages that need to be scanned if you’re selecting a small number of columns.

Since the nature of the data stored in SQL Server varies from instance to instance, it’s difficult to know how much faster data retrieval will be in real-world applications, but Microsoft says it could run from several times to several hundred times faster.

From a business standpoint, this feature could change the way users work with your data warehouse. Traditionally, data warehouse tables are too big to query in real time, so they aren’t suitable for ad hoc queries: The data in those tables needs to be aggregated, cubes need to be built or reports need to be generated and cached overnight in order to provide users with timely responses.

Let’s say you have $1,000 in your checking account and write $500 in checks. But look online soon afterward, and your balance still shows that $1,000. Banks sometimes take a while to refresh account balances because with large datasets, it’s either impractical or nearly impossible to do real-time updates. Similarly, data warehouses often have to run extract, transform and load (ETL) processes at night to update and aggregate data.

But a column-store index can potentially run a multi-hour query in minutes and a multi-minute query in seconds, so your business could switch to real-time queries. That means no more waiting for days for the data to be collected and aggregated. The bottom line? Your business users and executive decision makers will have the data they need much more readily.

AlwaysOn

Each version of SQL Server has either added new high-availability features or improved existing ones. But while there are several viable options, such as database mirroring, log shipping or SQL Server clustering, each has its own disadvantages; you need to be ready for compromise when choosing one technology over the other.

SQL Server Denali introduces a new technology called AlwaysOn, or High Availability Disaster Recovery (HADR). According to Microsoft, developing it was a significant engineering project, exceeding the effort required to rewrite the SQL Server engine for version 7.0 in the late 1990s, after Microsoft and original development partner Sybase Inc. went their own ways on the database. Look at AlwaysOn as a combination of what worked best in database mirroring and clustering, with new capabilities added based on user feedback.

For example, database mirroring works on a single database, but often a database accesses objects in other databases on the same server. So, let’s say Database A accesses objects in Database B using views or synonyms; those queries work only if both databases are on the same server. But if Database A fails over to the mirror server, the references to the objects in Database B become invalid, and your application starts generating errors.

To enable cross-database dependency in a mirroring environment, AlwaysOn introduces something called an availability group. You can define a group of databases as an availability unit; then they all fail over as a unit, and your cross-database references will remain functional. AlwaysOn also lets you configure several replicas of the primary database. These replicas can be made read-only for reporting purposes. If your primary database fails, one of the replicas can take over in the primary role.

For many businesses, these improved HADR features will be the most attractive reason to upgrade to SQL Server Denali. These days, more e-commerce databases need to be available 24/7, and even a short outage could cost you revenue as well as customers. Therefore, a high-availability option that provides seamless failover with minimal downtime and makes it easier to bring up primary databases on secondary servers can provide significant return on investment.

Data Visualization, aka Project Crescent

This feature is probably the most exciting new BI offering in SQL Server Denali, and more specifically, in the Microsoft self-service BI mix. Crescent continues data visualization enhancements delivered in SQL Server Reporting Services 2008 R2, and later in PowerPivot. It is a Web-based report designer that utilizes Microsoft’s Silverlight application framework and provides ad hoc reporting to end users with an easy-to-use presentation tool and powerful visualization components.

Behind the scenes, Crescent utilizes Microsoft’s Business Intelligence Semantic Model (BISM), a new technology that makes building BI models easier than by using traditional cubes. And that’s made it easier for less-technical users to create BI applications. BISM uses the same VertiPaq technology that’s behind column-store indexes, allowing fast response times when querying large sets of data. When SQL Server Denali was previewed at the PASS Summit, it was Crescent that caused the biggest stir. Audiences were impressed with the demonstrations of the new tool, especially its visual presentation of data, which goes way beyond traditional charts or key performance indicators.

Project Crescent will provide value to companies that want to enable less-technical staff — managers, executives, marketing departments, researchers — to browse data, run ad hoc reporting and present data in a visually striking format. It will also be very useful to companies that specialize in collecting, processing and selling data in report formats.

So not only will Crescent make reports look a whole lot cooler, it will also increase perception of the value of the reports and the work that went into them. Think of it as visiting a fancy French restaurant — the creative presentation of food on your platter not only enhances your enjoyment of the meal, it increases your perception of its value.

Upgrading to SQL Server Denali when it’s released is bound to have a significant impact. Its column-store index technology will give business users quicker, timelier access to the data stored in data warehouses. With Project Crescent, you’ll be able to put together flashier reports. And if all works as promised, AlwaysOn will reduce the downtime of your SQL Server installations.

Some words of caution, though: The initial previews of Denali are impressive, but we still haven’t seen the “small print” on features — things like restrictions, limitations and licensing requirements. In early 2011, column-store indexes and Project Crescent weren’t even available for evaluation, since they weren’t included in the first CTP. AlwaysOn was, but in a limited feature set. Once all of these features are fully available in a public CTP, SQL Server users will get a better picture of how they can best use them.

Posted in SQL 2012 | Tagged: , , , , , , | Leave a Comment »

How to manage SQL Server 2012 with PowerShell

Posted by Alin D on June 24, 2012

The graphical management tools provide just about everything you need to work with SQL Server. Still, there are many times when you might want to work from the command line, such as when you are working on a Windows Server 2008 R2 Core installation. To help with all your command-line needs, SQL Server 2012 includes the SQL Server provider for Windows PowerShell (also known as “SQL Server PowerShell”). To work with SQL Server via Windows PowerShell, you must first open a Command Prompt window or Windows PowerShell prompt and then start SQL Server PowerShell by typing sqlps at the command line.

Windows PowerShell introduces the concept of a cmdlet (pronounced “commandlet”). A cmdlet is the smallest unit of functionality in Windows PowerShell. Cmdlet names are not case-sensitive. SQL Server PowerShell cmdlets include the following:

- Backup-SqlDatabase Performs backup operations on SQL Server databases.

- Convert-UrnToPath Converts a SQL Server Management Object Uniform Resource Name (URN) to a SQL Server provider path. The URN indicates a management object’s location within the SQL Server object hierarchy. If the URN path has characters not supported by Windows PowerShell, the characters are encoded automatically.

- Decode-SqlName Returns an unencoded SQL Server identifier when given an identifier that has been encoded.

- Encode-SqlName Encodes special characters in SQL Server identifiers and name paths to formats that are usable in Windows PowerShell paths. The characters encoded by this cmdlet include :/%<>*?[]|. If you don’t encode these characters, you must escape them by using the single quotation mark (‘) character.

- Invoke-PolicyEvaluation Evaluates management policies applied to SQL Server instances. By default, this cmdlet reports compliance but does not enforce compliance. To enforce compliance, set -AdHocPolicyEvaluationMode to Configure.

- Invoke-Sqlcmd Runs a Transact-SQL (T-SQL) or XQuery script containing commands supported by the SQLCMD utility. By default, this cmdlet doesn’t set any SQLCMD variables or return message output; only a subset of SQLCMD commands can be used.

- Restore-SqlDatabase Performs restore operations on SQL Server databases.
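As a rough sketch of how a few of these cmdlets fit together, consider the following session. The server name, database name, and file paths here are hypothetical placeholders, not values from this article, and the commands assume you are in a sqlps session or have imported the SQLPS module:

```powershell
# Back up a hypothetical OrderSystem database on DbServer23 to a file.
Backup-SqlDatabase -ServerInstance "DbServer23" -Database "OrderSystem" `
    -BackupFile "C:\Backups\OrderSystem.bak"

# Restore the same backup over the existing database.
Restore-SqlDatabase -ServerInstance "DbServer23" -Database "OrderSystem" `
    -BackupFile "C:\Backups\OrderSystem.bak" -ReplaceDatabase

# Encode-SqlName makes identifiers containing special characters safe to
# use in provider paths; Decode-SqlName reverses the encoding.
$encoded = Encode-SqlName "Order:Table"
$decoded = Decode-SqlName $encoded
```

Because Backup-SqlDatabase and Restore-SqlDatabase are ordinary cmdlets, they can also be combined with pipelines and loops — for example, iterating over every database on an instance to back each one up in turn.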

To get detailed information about a cmdlet, type Get-Help cmdletname -Detailed, where cmdletname is the name of the cmdlet you want to examine. To get detailed information about the SQL Server provider, which provides SQL Server functionality for Windows PowerShell, type Get-Help SqlServer | more.

 

You can use the sqlps utility on any computer where you’ve installed SQL Server or the command-line management tools. The sqlps utility starts a Windows PowerShell session with the SQL Server PowerShell provider imported so that you can use its cmdlets and work with instances of SQL Server. When you are working with Windows PowerShell or scripts, you can import the SQLPS module to load the SQL Server provider, which automatically loads the required assemblies and initializes the environment. While you previously needed to use an initialization script, this is no longer required so long as you import the SQLPS module prior to trying to access the SQL Server instance. For best results, import the SQLPS module using the following command:

   Import-Module "sqlps" -DisableNameChecking

You can work with cmdlets by executing commands directly at the shell prompt or by running commands from scripts. You can enter any command or cmdlet that you can run at the Windows PowerShell command prompt into a script by copying the related command text to a file and saving the file with the .ps1 extension. You can then run the script in the same way that you would any other command or cmdlet. However, when you are working with Windows PowerShell, the current directory might not be part of the environment path. For this reason, you might need to use the ./ notation when you run a script in the current directory, such as the following:

./runtasks.ps1

The current execution policy for SQL Server PowerShell controls whether and how you can run scripts. Although the default configuration depends on which operating system and edition you’ve installed, you can quickly determine the execution policy by entering Get-ExecutionPolicy at the Windows PowerShell prompt.

To set the execution policy to require that all scripts have a trusted signature to execute, enter the following command:

Set-ExecutionPolicy AllSigned

To set the execution policy so that scripts downloaded from the web execute only if they are signed by a trusted source, enter:

Set-ExecutionPolicy RemoteSigned

To set the execution policy to run scripts regardless of whether they have a digital signature and work in an unrestricted environment, enter the following command:

Set-ExecutionPolicy Unrestricted

For administration at the Windows PowerShell prompt, you use Invoke-Sqlcmd to run T-SQL or XQuery scripts containing commands supported by the SQLCMD utility. Invoke-Sqlcmd fully supports T-SQL and the XQuery syntax supported
by the Database Engine, but it does not set any scripting variables by default. Invoke-Sqlcmd also accepts the SQLCMD commands listed in Table 1-3, later in this chapter. By default, results are formatted as a table, with the first result set displayed automatically and subsequent result sets displayed only if they have the same column list as the first result set.

The basic syntax you use most often with Invoke-Sqlcmd follows:

Invoke-Sqlcmd [-ServerInstance ServerStringOrObject]
[-Database DatabaseName] [-EncryptConnection]
[-Username UserName] [-Password Password]
[[-Query] QueryString] [-DedicatedAdministratorConnection]
[-InputFile FilePath] [ | Out-File -FilePath FilePath]

The command’s parameters are used as follows:

-Database Specifies the name of the database that you want to work with. If you don’t use this parameter, the database that is used depends on whether the current path specifies both the SQLSERVER:\SQL folder and a database name. If both are specified, Invoke-Sqlcmd connects to the database that is specified in the path. Otherwise, Invoke-Sqlcmd connects to the default database for the current login ID.

NOTE Use -IgnoreProviderContext to force a connection to the database that is defined as the default for the current login ID.

-DedicatedAdministratorConnection Ensures that a dedicated administrator connection (DAC) is used to force a connection when one might not be possible otherwise.

-EncryptConnection Enables Secure Sockets Layer (SSL) encryption for the connection.

-InputFile Provides the full path to a file that should be used as the query input. The file can contain T-SQL statements, XQuery statements, SQLCMD commands, and scripting variables. Spaces are not allowed in the file path or file name.

-Password Sets the password for the SQL Server Authentication login ID that is specified in -Username.

-Query Defines one or more queries to be run. The queries can be T-SQL queries, XQuery statements, or SQLCMD commands. Separate multiple queries with semicolons.

TIP You do not need to use the SQLCMD GO command. Escape any double quotation marks included in the string, and consider using bracketed identifiers such as [EmpTable] instead of quoted identifiers such as “EmpTable”. To ensure that message output is returned, add the -Verbose parameter. -Verbose is a parameter common to all cmdlets.

-ServerInstance Specifies the name of an instance of the Database Engine that you want to work with. For default instances, specify only the computer name, such as DbServer23. For named instances, use the format “ComputerName\InstanceName”, such as “DbServer23\EmployeeDb”.

-Username Sets the login ID for making a SQL Server Authentication connection to an instance of the Database Engine. You also must set the password for the login ID.

NOTE By default, Invoke-Sqlcmd attempts a Windows Authentication connection by using the Windows account running the Windows PowerShell session. Windows Authentication connections are preferred. To use a SQL Server Authentication connection instead, specify the user name and password for the SQL login ID that you want to use.
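The tips above can be illustrated with a short sketch. The server, database, schema, and table names are hypothetical placeholders, and the command assumes a reachable SQL Server instance:

```powershell
# Bracketed identifiers ([Inventory].[Product]) avoid quoting issues inside
# the query string; two statements are separated by a semicolon (no GO
# needed); and -Verbose returns the PRINT message output to the console.
Invoke-Sqlcmd -ServerInstance "DbServer23" -Database "OrderSystem" `
    -Query "PRINT 'Listing products'; SELECT * FROM [Inventory].[Product]" `
    -Verbose
```

Without -Verbose, only the result set is returned and the PRINT message is discarded.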

With this in mind, you could replace the following T-SQL statements:

USE OrderSystem;
GO
SELECT * FROM Inventory.Product
ORDER BY Name ASC
GO

with the following Windows PowerShell command:

Invoke-Sqlcmd -Query "SELECT * FROM Inventory.Product ORDER BY Name ASC" `
-ServerInstance "DbServer23" -Database "OrderSystem"

You also could read the commands from a script, as shown in Sample 1-1.

SAMPLE 1-1 Example SQL Command Script

Contents of SqlCmd.sql Script

 SELECT * FROM Inventory.Product
 ORDER BY Name ASC

Command to Run the Script

 Invoke-Sqlcmd -InputFile "C:\Scripts\SqlCmd.sql"

22 PART I Microsoft SQL Server 2012 Essentials

When you work with Windows PowerShell, don’t overlook the importance of SQL Server support being implemented through a provider. The data that providers expose appears as a drive that you can browse. One way to browse is to get or set the location with respect to the SqlServer: provider drive. The top of the hierarchy exposed is represented by the SQL folder, then there is a folder for the machine name, and finally, there is a folder for the instance name. Following this, you could navigate to the top-level folder for the default instance by entering

Set-Location SQLSERVER:\SQL\DbServer23\Default

You could then determine the available database structures by entering Get-ChildItem (or one of its aliases, such as ls or dir). To navigate logins, triggers, endpoints, databases, and any other structures, you set the location to the name
of the related folder. For example, you could use Set-Location Databases and then enter Get-ChildItem to list available databases for the selected instance. Of course, if you know the full path you want to work with in the first place, you also can access it directly, as shown in the following example:

Set-Location SQLSERVER:\SQL\DbServer23\Default\Databases\OrderSystem

Here, you navigate to the structures for the OrderSystem database on DbServer23’s default instance. If you then want to determine what tables are available for this database, you could enter:

Get-ChildItem Tables

Or you could enter:

Set-Location Tables
Get-ChildItem
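Because the provider exposes tables as SQL Server Management Objects, the output of Get-ChildItem can be sorted and filtered like any other PowerShell objects. A sketch, using the same hypothetical server and database as above:

```powershell
# List the tables in OrderSystem with their schemas and row counts,
# largest first. The path is a placeholder for your own instance.
Set-Location SQLSERVER:\SQL\DbServer23\Default\Databases\OrderSystem\Tables
Get-ChildItem |
    Sort-Object -Property RowCount -Descending |
    Select-Object -Property Schema, Name, RowCount
```

This is often quicker than writing the equivalent T-SQL against the catalog views when you are already browsing the provider hierarchy.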

To manage SQL Server 2012 from a computer that isn’t running SQL Server, you need to install the management tools. In the SQL Server Installation Center, select Installation, and then click the New Installation Or Add Features To An Existing Installation option. When the wizard starts, follow the prompts. On the Feature Selection page, select the Management Tools—Basic option to install Management Studio, SQLCMD, and the SQL Server provider for Windows PowerShell.

For remote management via Windows PowerShell, you need to ensure that Windows Remote Management (WinRM) and Windows PowerShell are both installed and made available by using the Add Features Wizard. You also need to enable remote commands on both your management computer and the server running SQL Server.

You can verify the availability of WinRM and configure Windows PowerShell for remoting by following these steps:

1. Click Start, All Programs, Accessories, and Windows PowerShell. Then start Windows PowerShell as an administrator by right-clicking the Windows PowerShell shortcut and selecting Run As Administrator.

2. The WinRM service is configured for manual startup by default. You must change the startup type to Automatic and start the service on each computer you want to work with. At the PowerShell prompt, you can verify that the WinRM service is running by using the following command:

          Get-Service winrm

   If the service is running, the output looks similar to this:

          Status   Name      DisplayName
          ------   ----      -----------
          Running  WinRM     Windows Remote Management

3. If the service is stopped, enter the following command to start the service and configure it to start automatically in the future:

          Set-Service -Name winrm -StartupType Automatic -Status Running

4. To configure Windows PowerShell for remoting, type the following command:

          Enable-PSRemoting -Force

    You can enable remoting only when your computer is connected to a domain or private network. If your computer is connected to a public network, you need to disconnect from the public network and connect to a domain or private network and then repeat this step. If one or more of your computer’s connections has the Public connection type but you are actually connected to a domain or private network, you need to change the network connection type in the Network And Sharing Center and then repeat this step.

In many cases, you can work with remote computers in other domains. However, if the remote computer is not in a trusted domain, the remote computer might not be able to authenticate your credentials. To enable authentication, you need to add the remote computer to the list of trusted hosts for the local computer in WinRM. To do so, type the following:

winrm s winrm/config/client '@{TrustedHosts="RemoteComputer"}'

where RemoteComputer is the name of the remote computer, such as:

winrm s winrm/config/client '@{TrustedHosts="DbServer23"}'

When you are working with computers in workgroups or homegroups, you must use HTTPS as the transport or add the remote machine to the TrustedHosts configuration settings. If you cannot connect to a remote host, you can verify that the service on the remote host is running and is accepting requests by running the following command on the remote host:

winrm quickconfig


This command analyzes and configures the WinRM service. If the WinRM service is set up correctly, you see output similar to the following:

WinRM already is set up to receive requests on this machine.
WinRM already is set up for remote management on this machine.

If the WinRM service is not set up correctly, you see errors and need to respond affirmatively to several prompts that allow you to configure remote management automatically. When this process is complete, WinRM should be set up correctly. Don’t forget that you need to enable remote management on the database server as well as your management computer.
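Once remote management is enabled on both machines, one way to tie the pieces together is to run the SQL Server cmdlets inside a remote session. A sketch, assuming the SQLPS module is installed on the remote server and DbServer23 is a placeholder for your database server:

```powershell
# Run a query on the remote database server over WinRM. The script block
# executes on DbServer23, so SQLPS must be available there, not locally.
Invoke-Command -ComputerName DbServer23 -ScriptBlock {
    Import-Module "sqlps" -DisableNameChecking
    Invoke-Sqlcmd -Query "SELECT @@VERSION AS SqlVersion"
}
```

This pattern lets a single management workstation script backups, policy checks, or health queries across many database servers without installing SQL Server tools on each client.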


Posted in SQL 2012 | Tagged: , , , , , , | Leave a Comment »

 