Windows Management and Scripting

A wealth of tutorials on Windows operating systems, SQL Server and Azure

Archive for the ‘SQL’ Category

Articles, tutorials and reviews about Microsoft SQL Server

Tips to optimize Transact-SQL queries

Posted by Alin D on November 14, 2013

SQL Server databases are the backbone of many enterprise applications, and good Transact-SQL (T-SQL) code is the best way to maximize SQL Server performance. Therefore, it is important for SQL developers to follow T-SQL best practices and guidelines when writing code. This article highlights some common T-SQL best practices to help ensure reliable, robust and efficient SQL code.

Choose appropriate data type

When you create a table, you must decide on the data type to use for column definitions. A data type defines the kind of data you can store in a column. You also use data types to define variables and stored procedure input and output parameters. You must select a data type for each column or variable appropriate to the data stored in that column or variable. In addition, you must consider storage requirements and choose data types that allow for efficient storage. For example, always use tinyint instead of smallint, int or bigint when you want to store whole positive integers between 0 and 255. This is because tinyint is a fixed 1-byte field, whereas smallint is a 2-byte, int a 4-byte and bigint an 8-byte fixed field.
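For instance, here is a minimal sketch (table and column names are hypothetical) of how a value that can only range from 0 to 255 fits in a tinyint, while a datetime column enforces valid dates:

CREATE TABLE dbo.ProductRating (
    ProductID int NOT NULL,       -- whole numbers beyond 255 need int
    Rating tinyint NOT NULL,      -- 0 to 255 fits in a single byte
    RatedOn datetime NOT NULL     -- a date/time type rejects non-dates
);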

Choosing the right data types also improves data integrity. For example, if you use a datetime data type for a column of dates, then only dates can be stored in this column. However, if you use a character or numeric data type for the column, then someone will eventually store a character or numeric value in that column that does not represent a date.

Lastly, choosing the correct data type improves performance by helping the optimizer produce an appropriate execution plan.

Avoid using DISTINCT or UNION clauses

A DISTINCT clause in your T-SQL queries removes duplicate rows from the results. If you are sure that your query result set will only contain unique rows, one of the T-SQL best practices is to avoid using the DISTINCT clause, as it causes an unnecessary sorting operation.

The UNION clause also adds a sorting operation by eliminating duplicate records from two or more SELECT statements. If you use UNION to combine the results of two or more SELECT statements whose combined results contain no duplicates, it is better to use the UNION ALL clause. UNION ALL does not remove duplicates and as a result requires less SQL Server backend processing to perform the union operation.
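For example, a quick sketch, assuming two hypothetical order tables whose rows never overlap:

-- UNION ALL skips the duplicate-removal sort that UNION would add
SELECT OrderID FROM dbo.OnlineOrders
UNION ALL
SELECT OrderID FROM dbo.StoreOrders;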

Avoid using NOLOCK query hint

The NOLOCK query hint is one of the most commonly used query hints, but it can also be one of the worst. Most developers think the only risk of using a NOLOCK hint is the possibility of reading inconsistent data, since it reads rows without waiting for other statements, such as UPDATEs, to commit. That is true, but there is more to it than just reading uncommitted rows. Transactions do more than just select, update and delete rows. For example, a transaction may update an index or run out of space on a data page, which requires the allocation of a new page and the relocation of existing rows from the existing page to that new page, an operation called a page split. Because of this, a NOLOCK read may miss several rows or read the same rows twice, which does not happen when you run your queries without the NOLOCK query hint.

Provide full column lists for SELECT and INSERT statements

Another T-SQL best practice is to always provide the full column list required for a SELECT or INSERT statement. For example, if you use SELECT * FROM in your code or in a stored procedure, the column list is resolved each time you run the SELECT statement. Moreover, SELECT or INSERT statements may generate an error or return a different set of columns if the underlying table’s schema changes.

Therefore, when performing selects, avoid using SELECT * FROM [TableName]; instead, provide the full column list, as follows:

SELECT [col1],…[coln] FROM [TableName].

Similarly, when performing inserts, use a column list in the INSERT clause, as follows:

INSERT INTO [TableName] ([col1],[col2],…[coln])

VALUES ('Value1', 'Value2',…'ValueN')

Some more T-SQL best practices

Use SET NOCOUNT ON. Specify SET NOCOUNT ON within batches, stored procedures and triggers to increase performance. When it is set, SQL Server does not return the number of rows affected after each statement, which reduces unnecessary network traffic.

Prefer the EXISTS keyword over the IN keyword. When checking for the existence of records, favor EXISTS over IN. The IN keyword operates on lists and returns the complete result set from subqueries before further processing. A subquery using the EXISTS keyword returns either TRUE or FALSE, which is faster because SQL Server quits looking as soon as a match proves the condition true.
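A minimal sketch, assuming hypothetical Customers and Orders tables:

-- EXISTS can stop probing as soon as one matching order is found
SELECT c.CustomerID
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1
              FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID);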

Avoid cursors. Avoid cursors as much as possible. Instead, use a set-based approach to updating or inserting data from one table to another.

Steer clear of dynamic SQL. Avoid using dynamic SQL; try to find alternatives that do not require it. If you must use dynamic SQL, use sp_executesql instead of EXECUTE (EXEC), because sp_executesql is more efficient and versatile. It supports parameter substitution and generates execution plans that are more likely to be reused by SQL Server.
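For example, a hedged sketch of parameterized dynamic SQL (the table and parameter names are assumptions):

DECLARE @sql nvarchar(200) =
    N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @custId;';

-- The parameter keeps the data separate from the command text and
-- lets SQL Server reuse the cached plan for different values.
EXEC sp_executesql @sql, N'@custId int', @custId = 42;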

Use schema-qualified object names. Refer to table names with schema name prefixes. For example, use SELECT * FROM [SchemaName].[TableName] instead of SELECT * FROM [TableName].


Posted in SQL

SQL Server 2012 contained database from A to Z

Posted by Alin D on July 23, 2013

Of the many features introduced in SQL Server 2012, the SQL Server 2012 contained database could prove one of the most valuable. Unlike the typical SQL Server database, a SQL Server 2012 contained database is one that’s isolated from the SQL Server instance on which it resides, as well as from other databases on that instance. Such isolation makes managing databases easier, particularly when they’re being moved to a different instance or implemented in a failover cluster.

Prior to SQL Server 2012, all SQL Server databases were considered non-contained. Metadata specific to the database resided outside that database. The server’s default collation could impact queries against the database. And user authentication within the database was tied to the server-level logins defined on the instance.

In SQL Server 2012, all databases are still, by default, non-contained. However, you can now configure any non-system database as contained. That way, metadata will reside within the database it describes. In addition, because the collation model has been greatly simplified, all user data and temporary data will use the default database collation, and all other objects (metadata, temporary metadata, variables, among others) will use the catalog collation, which is Latin1_General_100_CI_AS_WS_KS_SC for all contained databases.

The most significant change that the SQL Server 2012 contained database brings is the “contained user,” a user account created specifically for the contained database. The account is not tied to a server-level login and provides access to the contained database only, without granting permission to other databases or to the instance as a whole.

SQL Server supports two types of contained users:

  • Database user with password: A local database account created with a username and password that are authenticated by the database.
  • Windows principal: A local database account based on a Windows user or group account but authenticated by the database.

You can add either one or both account types to a contained database. In fact, you can still add login-based accounts as well. That’s because SQL Server 2012 supports what are referred to as “partially contained databases,” rather than fully contained ones.

In a fully contained SQL Server 2012 database, no dependencies, such as a Service Broker route or a login-based user account, can exist outside the database. But a partially contained database can support both contained and non-contained elements. That means, for example, that you can provide access to the database through login-based accounts, contained user accounts or both. However, you can still achieve the equivalent of a fully contained database by eliminating any non-contained elements. (Whether SQL Server will eventually support fully contained databases is yet to be seen.)

The SQL Server 2012 contained database

After you’ve isolated your SQL Server 2012 contained database, you can easily move it from one SQL Server instance to another, without having to move a set of SQL Server logins. The contained database stores all the information it needs within that database. This process also makes it easier to set up your high-availability clusters. Because users connect directly to the database, they can easily connect to a second database if failover occurs.

Even if you’re not moving or clustering your databases, the SQL Server 2012 contained database can make user account management easier because you’re not trying to administer both SQL Server logins and database user accounts. You grant access to specific users to specific databases, without those users being able to access anything outside them.

Yet all this good news doesn’t come without a few downsides. For example, a contained database cannot use replication, change tracking or change data capture. And a contained database can raise security concerns. For instance, users granted the ALTER ANY USER permission can create user accounts and grant access to the database. In addition, the password hashes associated with contained user accounts are stored within the database, making them more susceptible to dictionary attacks. Plus, the contained user accounts cannot use Kerberos authentication, which is available only to the SQL Server login accounts that use Windows Authentication.

Despite their limitations, if contained databases are carefully implemented, you can sidestep some of the security issues and reap the benefits that database isolation provides. The SQL Server contained database offers a level of portability and manageability not seen before in SQL Server. Moving databases is easier. Failover is easier. Managing access is easier. Indeed, the contained database feature in SQL Server 2012 could prove beneficial for any organization looking to streamline its operations.

Configuring and implementing a SQL Server contained database

Before you can configure a SQL Server contained database, you must enable containment on your instance of SQL Server 2012. To do so, run the sp_configure system stored procedure and set the contained database authentication option to 1, as shown in the following T-SQL script:

EXEC sp_configure 'contained database authentication', 1;
GO

RECONFIGURE;
GO

As you can see, you must also run the RECONFIGURE statement for your setting to be implemented. Once you’ve done so, you’re ready to set up a SQL Server 2012 contained database. In your database definition, include the CONTAINMENT clause and set the containment type to PARTIAL, as shown in the following example:

USE master;
GO

CREATE DATABASE ContainedDB
CONTAINMENT = PARTIAL;
GO

You can just as easily include the CONTAINMENT clause in an ALTER DATABASE statement. In either case, once you’ve set up the database to be contained, you’re ready to add a contained user. In the following T-SQL script, the CREATE USER statement defines the user1 account and assigns a password and default schema:

USE ContainedDB;
GO

CREATE USER user1
WITH PASSWORD = N'PaSSw()rd',
DEFAULT_SCHEMA = dbo;
GO

EXEC sp_addrolemember 'db_owner', 'user1'
GO

Notice that the CREATE USER statement is followed by the sp_addrolemember stored procedure, which assigns the user to the db_owner role.

As the examples have demonstrated, you need to take only three steps to set up a SQL Server 2012 contained database: Enable containment, configure the database for containment, and create the contained user. And if you don’t want to script these settings, you can instead use the SQL Server Management Studio (SSMS) interface to configure them.

After you set up your contained environment, you should try to connect to the contained database by using the contained user to test the connection. In Object Explorer in SSMS, click the Connect button and then click Database Engine. When the Connect to Server dialog box appears, enter the instance name and credentials for the contained user.

If you click the Connect button at this point, you’ll receive an error. That’s because you must first specify the target database when connecting with a contained user account. (The database must be part of the connection string.) To add the database, click the Options button and type the name of the target database into the Connect to database text box.

Now when you click Connect, you’ll be connected to the specified instance, with access only to the contained database.  Notice that the user and database names are included with the instance name. Also notice that only the ContainedDB database is included in the list of databases and that user1 is listed as one of the user accounts.

When working with contained databases, you’ll often want to ensure they’re as fully contained as possible. SQL Server 2012 provides two handy tools for working with containment:

  • sys.dm_db_uncontained_entities: A system view that lists any non-contained objects in the database. You can use this view to determine what items to address to ensure your database is as contained as possible.
  • sp_migrate_user_to_contained: A system stored procedure that converts a login-based user to a contained user. The stored procedure removes any dependencies between the database user and the login accounts. (Both are sketched below.)
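As a quick sketch of how these tools might be used (the user name is hypothetical):

USE ContainedDB;
GO

-- List anything that would block full containment
SELECT class_desc, major_id
FROM sys.dm_db_uncontained_entities;

-- Convert an existing login-based user to a contained user
EXEC sp_migrate_user_to_contained
    @username = N'AppUser',
    @rename = N'keep_name',
    @disablelogin = N'disable_login';
GO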

By using these tools, you can achieve a status of full containment, making it easier to manage the database going forward.

Posted in SQL, TUTORIALS

What you need to consider for SQL Security

Posted by Alin D on July 3, 2013

Most of the time SQL Server is a repository of sensitive information for organizations, and that is why it is important to ensure that only authorized users have access to this sensitive information. However, securing SQL Server without breaking applications is not an easy task, and as database administrators (DBAs) we have to perform a series of additional steps to harden the security configuration of our SQL Server implementation. Below I will show what I usually take into consideration when securing a SQL Server.

Authentication

SQL Server supports two modes of authentication: Windows Authentication and Mixed Mode Authentication. In accordance with SQL Server security best practices, always choose Windows Authentication for your SQL Server installation unless legacy applications require Mixed Mode Authentication for backward compatibility and access.

Windows Authentication is more secure than Mixed Mode Authentication. When it is enabled, Windows credentials (that is, Kerberos or Windows NT LAN Manager [NTLM] authentication credentials) are trusted to log on to SQL Server. Windows logins use a series of encrypted messages to authenticate to SQL Server, and passwords are not passed across the network during authentication. Moreover, Active Directory provides an additional level of security with the Kerberos protocol. As a result, authentication is more reliable, and its management overhead can be reduced by leveraging Active Directory groups for role-based access to SQL Server. In comparison, Mixed Mode Authentication supports both Windows accounts and SQL-Server-specific accounts. The logon passwords of SQL logins are passed over the network for authentication, which makes SQL logins less secure than Windows logins.

Secure sysadmin account

The sysadmin (sa) account is vulnerable when it is left unchanged. Potential SQL Server attackers are aware of this, and it makes hacking one step easier if they can take control of this powerful account. To prevent attacks that target the sa account by name, rename it to a different account name. To do that, in Object Explorer expand Logins, then right-click the sa account and choose Rename from the menu. Alternatively, execute the following T-SQL script to rename the sa account:

USE [master]
GO

ALTER LOGIN sa WITH NAME = [<New-name>]
GO

In addition to this, disable the sa account on your SQL Server instance.
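For example, a minimal sketch that disables the account after the rename above:

USE [master]
GO

ALTER LOGIN [<New-name>] DISABLE
GO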

Use complex passwords for sa and SQL-Server-specific logins

When Mixed Mode Authentication is used, ensure that complex passwords are used for sa and all other SQL-Server-specific logins on SQL Server. First, check the “Enforce password expiration” and “Enforce password policy” options for sa and all other SQL logins. These two options ensure that SQL logins abide by the password policies of the underlying operating system. In addition, set the MUST_CHANGE option for any new SQL login; this forces the login to change its password on first logon.
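For example, a sketch of creating a new SQL login with these options (the login name and password are placeholders; MUST_CHANGE requires both policy options to be ON):

CREATE LOGIN AppLogin
WITH PASSWORD = N'C0mpl3x!Passw0rd' MUST_CHANGE,
    CHECK_EXPIRATION = ON,
    CHECK_POLICY = ON;
GO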

Membership of sysadmin fixed-server role and CONTROL SERVER permission

Carefully choose the membership of sysadmin fixed-server roles because members of this role can do whatever they want on SQL Server. Moreover, do not explicitly grant CONTROL SERVER permission to Windows logins, Windows Group logins and SQL logins because logins with this permission get full administrative privileges over a SQL Server installation. By default, the sysadmin fixed-server role has this permission granted explicitly.

SQL Server Administration

Avoid managing SQL Server instances using sa or any other SQL login account that has been granted CONTROL SERVER permission or is a member of sysadmin fixed-server role. Instead, institute dedicated Windows logins for DBAs, and assign these logins sysadmin rights on SQL Server for administration purposes. To grant permissions to users, use built-in fixed server roles and database roles, or create your own custom server roles and database roles that meet your needs of finer control over permissions.

Revoke guest user access

By default, the guest user exists in every user and system database, which is a potential security risk in a locked-down environment because it allows database access to logins that don’t have an associated user in the database. Because of this potential security risk, disable guest user access in all user databases (guest must remain enabled in the master and tempdb system databases, and disabling it in msdb can cause issues). This ensures that members of the public server role are not able to access user databases on the SQL Server instance unless they have been granted explicit access to those databases.
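A minimal sketch, using a hypothetical database name:

USE [UserDatabase];
GO

-- Remove the guest user's ability to connect to this database
REVOKE CONNECT FROM guest;
GO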

Limit permissions assigned to a public role

Due to potential security risks, revoke public role access on potentially dangerous extended stored procedures, particularly the xp_ procedures that read the registry or interact with the operating system.

Furthermore, do not explicitly assign permissions to a public role on user and system stored procedures. To list the stored procedures that are available to a public role, execute the following query:

SELECT o.[name] AS [SPName], u.[name] AS [Role]
FROM [master]..[sysobjects] o
INNER JOIN [master]..[sysprotects] p
    ON o.[id] = p.[id]
INNER JOIN [master]..[sysusers] u
    ON p.[uid] = u.[uid]
WHERE p.[uid] = 0
    AND o.[xtype] IN ('X','P')

Reduce SQL Server Surface Area

Configure your SQL Server installation with only the required features, and disable unwanted features after installation to reduce the system’s surface area. You can also use the Policy-Based Management feature to create system policies for implementing granular configuration settings for one or more SQL Server systems.
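As one hedged example, the following keeps the xp_cmdshell feature disabled via sp_configure:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
GO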

Hardening SQL Server Ports

Another SQL Server security best practice is to change the default ports associated with the SQL Server installation by using SQL Server Configuration Manager. Furthermore, use specific TCP ports instead of dynamic ports. In addition, make sure that the common TCP ports 1433 and 1434 are not used for client requests and communication, because these ports are well known, which makes them a common target for hackers.

Disable SQL Server Browser Service

Make sure that the SQL Server Browser Service runs only on servers hosting multiple instances of SQL Server. The SQL Server Browser Service enumerates SQL Server information on the network, which is a potential security threat in a locked-down environment.

SQL Server service accounts

Create dedicated low-privilege domain accounts to run SQL Server services. In addition to this, review the membership of SQL Server service accounts on a regular basis, and ensure that they are not members of any domain users group or local groups that would grant them unnecessary permissions. For more information on the permission each SQL Server service account requires, see Configure Windows Service Accounts and Permissions.

Secure SQL Server ErrorLogs and registry keys

Secure SQL Server ErrorLogs and registry keys using NTFS permissions because they can reveal a great deal of information about the SQL Server instance and installation.

Conclusion

As time passes, securing data has become one of the most vital parts of our job, and we must treat it accordingly. This is the information that contains our financial, social, business and historical data, and as DBAs it is our prime responsibility to make sure it is properly taken care of and secure. These are the key points that I’ve collected so far.

Posted in Security, SQL

Unexpected features in SQL Server 2012

Posted by Alin D on February 6, 2013

After more than 25 years of working with Microsoft SQL Server, you’d think pretty much everything has been done at least once. I thought it would be a challenge to find anything surprising in a product with roots going back to the mid-1980s. But there have recently been two pretty major changes in SQL Server. Columnstore Indexes and the Hekaton in-memory enhancements offer massive, game-changing improvements in performance great enough to be called a surprise.

Columnstore Indexes

Columnstore Indexes were bundled with Microsoft SQL Server 2012, borrowing techniques originally developed for the PowerPivot in-memory store. Columnstore changes the way that rows are stored; instead of traditional row-by-row storage, data is stored one column at a time, in a new layout that bunches up around a million column values in one large blob structure. This structure allows for incredible data compression.

A new method of processing that Microsoft refers to as batch mode also speeds up query processing in SQL Server 2012. As Dr. David DeWitt explained at the SQL PASS Summit in 2010, the closeness of successive column values works well with the nature of modern CPUs by minimizing data movement between the levels of cache and the CPU.

There is, however, one big limitation to the current implementation of Columnstore Indexes. They are read-only, which means that the tables they index will also be read-only. Any table that has a Columnstore Index will not allow any inserts, updates or deletes. To change the data, the Columnstore Index has to be dropped, the necessary changes made and the Columnstore Index rebuilt. This isn’t the kind of operation that’s friendly to an online transaction processing (OLTP) system, which is what makes it solely a data warehousing, online analytical processing (OLAP) feature. It also puts a premium on partitioning on any table with a Columnstore Index. In the next major release of SQL Server, Microsoft is promising enhancements that lift the updatability restriction and also allow the Columnstore to be the clustered index.
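One workaround in SQL Server 2012, equivalent to the drop-and-rebuild approach described above, is to disable the columnstore index, modify the data and then rebuild it, sketched here with hypothetical names:

-- Disabling the columnstore index makes the table writable again
ALTER INDEX csi_FactSales ON dbo.FactSales DISABLE;

UPDATE dbo.FactSales
SET SalesAmount = 0
WHERE ProductKey = 1;

-- Rebuilding re-enables the index for query use
ALTER INDEX csi_FactSales ON dbo.FactSales REBUILD;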

I’ve had a chance to try out a Columnstore Index on a few tables. What I’ve found is that it works great when the nature of the query dovetails with the Columnstore. As a rule of thumb, the more columns in the table, the better the results. This is because SQL Server can avoid reading a large part of the index. In other situations, such as one narrow entity-attribute-value table that I work with frequently, the results are mixed. Summary queries that aggregate are much faster, to the tune of three seconds instead of 300, but queries that return all the columns of a small set of rows aren’t helped at all. I’ll be using Columnstore Indexes a lot looking for the 100 times speed improvements.

Hekaton

While Columnstore Indexes make data warehouse applications faster, Hekaton is intended for the other end of the application spectrum: high-volume OLTP systems. Websites, exchanges, manufacturing systems and order-entry systems that execute large numbers of usually small transactions are Hekaton’s target. The Hekaton extensions to SQL Server are what is known as an “in-memory” database, but Microsoft has combined several technologies to pump transaction volume up to 50 times above what could previously be achieved. Hekaton will be included in the next release of SQL Server, which is not yet scheduled for shipment.

Hekaton starts with tables that are stored on disk but are pulled completely into system RAM. This means it will be limited to smaller tables, or will require a design that separates high-activity data from historical data. This requirement works well with the obvious server trend toward larger and larger amounts of RAM. It’s not uncommon to work with servers with 500 gigabytes or even two terabytes of RAM. That’s plenty of room for the active data in most applications. And the changes don’t stop with the approach to storage.

Code in a Hekaton system is written in good old T-SQL, just like we’ve used for years. But unlike traditional T-SQL, Hekaton code is compiled to native machine code and there’s no interpreter. T-SQL is great for data manipulation, but zipping through business logic isn’t one of its strengths; native compilation should speed things up significantly.

As servers have gained more cores, contention arises around locks and latches, SQL Server’s mechanisms for synchronizing data access, as the system scales up. Hekaton bypasses these issues by implementing its own concurrency mechanism based on optimistic transactions that are optimized for an in-memory database. This allows many transactions to run simultaneously. However, the ability to mix Hekaton tables and other tables in structures such as a join may be limited. There will be other restrictions as well.

By combining in-memory data handling, compiled code and a new concurrency control mechanism, the preliminary benchmarks for Hekaton look very promising. At SQL PASS 2012, I saw the development team demonstrate a 25-times throughput improvement in transaction volume. That’s 25 times, not just 25%. These are the kinds of surprising changes still in the cards for SQL Server. I’m looking forward to working with SQL Server more in the near future.

Posted in SQL 2012

How to create and configure SQL Server 2012 AlwaysOn

Posted by Alin D on November 27, 2012

One of the better-known features in the release of SQL Server 2012 Enterprise Edition is AlwaysOn. This has been designed to meet the ever-increasing need for ‘High Availability’ (HA). AlwaysOn does not use entirely new technologies but makes more effective use of existing technologies that are tried and tested. It aims to provide more granular control to achieve High Availability. Currently, depending on your environment, you could already be using one or more of the following HA components that existed in previous versions of SQL Server:

  • Single Site Windows Server Failover Clustering
  • Multi-Site Windows Server Failover Clustering
  • SAN-Level Block Replication
  • Transaction Log Shipping
  • Database Mirroring
  • Transactional Replication
  • Peer-to-Peer Replication

Some of these can take time and resources to implement, and may therefore not be meeting your current requirements. This is where SQL Server 2012 AlwaysOn can help, because it provides the benefits of:

  • Using the WSFC APIs to perform failovers. Shared storage is not required
  • Utilizing database mirroring for the data transfer over TCP/IP
  • Providing a combination of synchronous and asynchronous mirroring
  • Providing a logical grouping of similar databases via Availability Groups
  • Creating up to four readable secondary replicas
  • Allowing backups to be undertaken on a secondary replica
  • Performing DBCC statements against a secondary replica
  • Employing Built-in Compression & Encryption

I’ll need to explain some of these components of AlwaysOn.

Windows Server Failover Clustering (WSFC)

Clustering technology has been around for quite some time, starting with Microsoft Cluster Server (MSCS) back in the NT 4.0 days. The technology for WSFC is part of the backbone of AlwaysOn. A WSFC cluster is a group of independent servers that work together to increase the availability of applications and services. It does this by monitoring the health of the active node and failing over to a backup node, with automatic transfer of resource ownership, when problems are detected.

Although a WSFC is able to span multiple subnets, SQL Server has not, until now, been able to support a clustered instance across multiple subnets. It has therefore been quite expensive to set up clustering across multiple data centres, due to the WSFC requiring shared storage in both data centres as well as block-level SAN replication. This has required a lot of work with your storage vendors to get your setup correct.

AlwaysOn Nodes

The nodes that you will use in your SQL Server 2012 AlwaysOn solution have to be part of a WSFC. The first step we need to undertake in preparing our AlwaysOn nodes is to add the Failover Cluster Feature to each node. I’ll go into detail later on in this article.

AlwaysOn Storage

With SQL Server versions prior to SQL Server 2012, a clustered instance on a WSFC required the storage to be presented as shared storage. This requirement made the storage more expensive and a little more complicated to configure and administer. With SQL Server 2012 AlwaysOn, your solution does not have to utilise shared storage, but can use SAN, DAS, NAS or local disk depending on your budget and requirements. I suggest working with your storage providers to come up with the solution you need.

Availability Groups

SQL Server 2012 AlwaysOn allows for more granular control of your environment with the introduction of AlwaysOn Availability Groups (AAGs). AAGs allow you to configure groups of databases that you would like to fail over together when there is a problem with the host server. When configuring your AAGs you:

  • Configure your AAG on the Primary Replica (Your AAG contains the group of DBs that you wish to group together to failover to your secondary replicas)
  • You will need to configure between one and four secondary replicas, with any combination of Synchronous (Maximum of two) and Asynchronous Mirroring (Your primary replica is available for read and write connectivity, while your secondary replicas can be configured for read-only, read intent or no access)

Maintenance Tasks/ Reporting

AlwaysOn allows you to use the secondary replicas that you created when you set up your AAGs to undertake some regular database maintenance tasks, removing some of the performance overhead from your primary production server. Some of the tasks that you could look at undertaking on a secondary replica are:

  • Database Backups
    • Full Backup With Copy_Only
    • Transaction Log Backups
  • DBCC CheckDB
  • Reporting
  • Database Snapshots

Security & Performance

To give you the full benefits of high availability, there will be a lot of movement of data. This brings with it security risks and higher bandwidth demands. To minimise these, Transparent Data Encryption (TDE) and backup compression are both shipped with the Enterprise Edition.

Implementing AlwaysOn

Now that we have covered the basics of what an AlwaysOn solution could look like, we are ready to plan for implementing this solution to meet your ever-increasing high-availability requirements and DR needs.

Building your AlwaysOn Cluster

In this scenario we are going to build a two-node SQL Server 2012 AlwaysOn Cluster. To achieve this, all of the nodes that are going to participate in the SQL Server AlwaysOn Cluster need to have .NET Framework 3.5.1 and the Failover Clustering feature enabled.

Required features for Failover Cluster

Now that we have enabled both of these features we can build our WSFC. From the Control Panel | Administrative Tools | Failover Cluster Manager | Validate a Configuration, we can validate whether our servers are okay to participate in a WSFC.

Validate Failover Cluster

Building your Windows Server Failover Cluster

There is no difference between building your WSFC for use with SQL Server 2012 AlwaysOn and building a WSFC for SQL Server 2008 R2. If you have never built a WSFC before, you can read more in the Failover Cluster Step-by-Step Guide. In this article, I am not going to go through the WSFC build, but I need to mention that your WSFC build needs to pass all of the validation rules in order to give you a supported WSFC.

SQL Server 2012 Setup

Now that we have our two nodes in our WSFC, we are ready to start the build process for our SQL Server 2012 AlwaysOn Cluster. We need to make sure that we have our installation media which is available for download from Microsoft SQL Server 2012 Downloads.

On Node1, we start the setup.exe to begin the installation process. We are greeted with the initial screen. You should navigate to the Installation Tab to start the installation, selecting ‘New SQL Server stand-alone installation or add features to an existing installation’.

Stand-Alone Installation

Enter your product key, click ‘Next’.

Enter Product Key

Accept the Terms and Conditions, click ‘Next’.

Ensure you select ‘SQL Server Feature Installation’, click ‘Next’.

SQL Server Feature Installation

Choose the features you need to install, click ‘Next’.

Recommended SQL Server Features

Your installation rules will be checked and, as long as there are no issues, you can continue with the installation by clicking ‘Next’.

Enter your SQL Server 2012 Instance Name for the Instance that you are building, click ‘Next’.

Type your Instance Name

Normally I would recommend having a different service account for each of the SQL Server services that you are installing. However, in this installation I am just using the default local accounts. You will need to have your domain service accounts created and set the passwords on this Server Configuration screen during the installation. Once you have set the passwords, make sure you click on the Collation tab to configure the collation for the instance, then click ‘Next’.

Service Account Detail

On the Database Engine Configuration screen there are three tabs that we need to pay attention to. The Server Configuration Tab is where you set your security mode – Either Windows (recommended) or Mixed Mode. Remember to add the current account you are running the installation as, as well as any other individuals or groups that need to be members of the SysAdmins group.

The Data Directories Tab allows you to specify where you want to have your User Databases, TempDB and backup locations to be stored. Traditionally you would have four separate drive locations depending on your storage for Data files, Log Files, TempDB and Backups.

The FileStream Tab allows you to Enable Filestream if this is a required option that you need in your environment.

Click ‘Next’ until you get to the ‘Ready to Install’ screen. At this point you should review what is going to be installed and, if you are happy, click the Install button.

Remember that these same steps need to be completed on the second node that you are including into your SQL Server 2012 AlwaysOn Cluster.

Configuring SQL Server 2012

Now that we have installed two stand-alone instances of SQL Server 2012 on our two servers in the WSFC we need to undertake some post-installation configuration. This is achieved by using the SQL Server Configuration Manager which is available from Start | All Programs | Microsoft SQL Server 2012 | Configuration Tools.

Because the data transfers performed by SQL Server 2012 AlwaysOn are done via TCP/IP, we need to enable the TCP/IP protocol under SQL Server Network Configuration. By default it is disabled. Change the value to Enabled and click ‘OK’.

We are now at the main point of configuring our SQL Server 2012 AlwaysOn Cluster. Previously, when creating a clustered SQL Server instance, we had to undertake the clustered build option. You will have noticed that we have instead installed stand-alone instances of SQL Server on each of the nodes participating in the WSFC. We now need to enable AlwaysOn Availability Groups. In SQL Server Configuration Manager, select the instance of SQL Server, right-click and select Properties. On the ‘AlwaysOn High Availability’ tab, tick the ‘Enable AlwaysOn Availability Groups’ check box.

Click ‘OK’. The changes will not take effect until the Instance is restarted. You will need to repeat this step on the second instance we installed. (This will need to be done on every instance in your SQL Server 2012 AlwaysOn Cluster)

Enable AlwaysOn Availability Groups

 

We are now ready to start configuring our Availability Groups.

Configuring SQL Server 2012 AlwaysOn Availability Groups

Before SQL Server 2012, one of the options available to build your high-availability (HA) solution was to utilise Database Mirroring. The Database Mirroring technology is very good at what it was created for. However, it has some limitations when it comes to your HA solution. The limitations include:

  • A Single Secondary database
  • Mirrored database is accessible via db snapshot only until failover occurs
  • Lack of support for MSDTC (distributed transactions)
  • Related databases are not able to be grouped together

SQL Server 2012 AAGs resolve most of these issues, giving you more flexibility and more granular control over your environment to meet your ever-growing, complex HA requirements.

SQL Server 2012 AAGs still utilise the Database Mirroring technology to transfer your data via TCP/IP, either synchronously or asynchronously, to one or more replicas, but they give you the added advantage of being able to access those replicas. Note, however, that transactional consistency across the databases participating in an availability group is still not supported.

Availability Groups

As its name suggests, an Availability Group is a grouping of related databases. When you were setting up Database Mirroring before SQL Server 2012, you could set up multiple mirroring sessions, but each session was only able to mirror a single database at a time. If you had multiple databases that were reliant on each other for the application to work, there was no simple way of ensuring that all of the databases failed over together. Availability Groups now allow you to group appropriate databases together. You can set up to 10 AAGs per instance, and across these 10 Availability Groups you can have up to 100 replica databases participating.

The main benefit given by an Availability Group is that it lets all of these related databases fail over to a secondary replica as a single unit.

Availability Replicas

Availability replicas provide you with the ability to set up:

  • A primary replica which allows you to undertake read and write capabilities against those databases that have been configured in the AAG
  • Up to four secondary replicas, which allow you to have read-only capabilities against those databases that have been configured in the AAG. They also allow you to perform backups on these secondaries.

Availability Modes

As mentioned above, when configuring your SQL Server 2012 AlwaysOn Availability Groups, there are some considerations that need to be taken into account when determining what type of availability mode you can use.

If you want to use AAGs for a reporting process, you could have your secondary replica located in the same physical data centre and implement synchronous-commit mode to give you a read-only, near-real-time group of databases to report against, without impacting the performance of the primary databases with reporting overheads. You probably would not consider this type of availability mode where there are large distances between data centres.

If you have a requirement for a reporting process that does not need the data to be near real time, you could consider implementing your secondary replica in a separate data centre that may be more than 30 to 40 kilometres away. If this is the case, you would look at implementing asynchronous commits for your AAG. By implementing an asynchronous-commit method, you reduce the latency of transactions on the primary site, but you open yourself up to the possibility of data loss.

As you can set up several secondary replicas, you are able to use different availability modes in your environment. Each AAG is configured separately; for example, you may have two synchronous implementations and two asynchronous implementations.

In this example you would have your primary databases in AAG1 residing in DC1. You then set up a secondary replica, also located in DC1, in synchronous-commit mode, allowing you to run your reporting requirements without the reporting overhead impacting your primary database. This also provides for your HA requirements, by having a secondary environment that is transactionally consistent and that you can fail over to in the event of an issue with your primary databases. You could then set up secondary replicas in DC2, DC3 and DC4 in asynchronous-commit mode. These asynchronous secondary replicas allow you to meet your DR requirements by having multiple copies in multiple geographically dispersed locations, with the ability to fail over in the event of an issue on the primary site.

Failing Over

As with Database Mirroring and Windows Server Failover Clustering, AlwaysOn Availability Groups provide the ability to fail over between the primary and secondary replicas that you have set up. There are three forms of failover which can be undertaken with AAGs:

  • Automatic – Supported by synchronous-commit mode – No data loss
  • Manual – Supported by synchronous-commit mode – No data loss
  • Forced – Supported by asynchronous-commit mode – Possible data loss

The availability mode in use will depend on whether you are implementing high availability or disaster recovery, and it affects the failover setup that you are going to implement in your SQL Server 2012 AlwaysOn environment.

Availability Group Listener

In order to take advantage of the various solutions that we have stepped through in this article, we need to allow applications to maintain connectivity to the SQL Server databases after a failover. This is where AlwaysOn Availability Group Listeners (AAGLs) come into use.

An Availability Group Listener is a virtual server name that applications connect to. From the application’s point of view, it does not matter where the availability database is active and available for use. The AAGL consists of:

  • Virtual Network Name (VNN)
  • Listener Port
  • One or more Virtual IP Addresses (VIPs)

For your application to connect, you can either set up a connection string for your AAGL or connect directly to your SQL Server instance. However, a direct connection does not give you the failover support that this technology has been built for.

When a failover occurs for an AAG, the connection from the client is terminated. To gain access again, the client needs to reconnect to the AAGL. To achieve this, the application must be designed and built to poll for the AAGL. Depending on the connection that you are utilising:

  • Primary database
  • Secondary read replica

You will need to configure ‘ApplicationIntent’ in your AAGL connection string appropriately.

With these points in mind, we are now able to create our first AAG in several ways (a T-SQL sketch of the scripted option follows this list):

  • Create Availability Group Wizard
  • T-SQL
  • PowerShell
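For reference, here is a simplified T-SQL sketch of the scripted option (instance names, endpoint URLs and the database name are assumptions; the wizard generates a comparable script). In this article we will use the wizard:

CREATE AVAILABILITY GROUP [AAG1]
FOR DATABASE [SalesDB]
REPLICA ON
    N'NODE1\INST1' WITH (
        ENDPOINT_URL = N'TCP://node1.domain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'NODE2\INST1' WITH (
        ENDPOINT_URL = N'TCP://node2.domain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);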

Expand the AlwaysOn High Availability tree | right-click Availability Groups | New Availability Group Wizard.

New AlwaysOn Availability Group Wizard

Name your AAG, click ‘Next’.

 

Name your AlwaysOn Availability Group

Select the databases that you need to include in the AAG, click ‘Next’.

Availability Databases

Your primary replica will automatically be available for you to configure. Choose the Availability Mode, Failover strategy and Readable secondary requirements. Click ‘Add Replica’, connecting to your appropriate secondary servers. Ensure that you set your secondary the same as your primary.

Availability Databases

 

Selecting the Listener Tab, give your AAGL a name, port number and the appropriate IP Addresses, click ‘Next’.

Every replica needs access to a shared location holding the database backups created and used for synchronising the secondary databases. Enter your share path, click ‘Next’.

 

Initial Data Synchronization

 

Ensure that you receive all green ticks for your validation, click ‘Next’.

Review the summary, click ‘Finish’.

My design was done in a lab. If you want to know how to build a lab, please visit my friend’s blog post.

Enjoy configuring your new SQL Server 2012 AlwaysOn environment.

Posted in SQL 2012

How to protect your SQL server from injection attack

Posted by Alin D on September 21, 2012

A SQL injection attack continues to be one of the biggest threats to SQL Server. A database can be susceptible to such attacks if the database, or the front-end application it supports, contains code that a hacker can exploit by injecting rogue SQL into one of the application’s data input fields. The application then submits the modified code to the database, which can result in the hacker being able to access sensitive information, modify data or database objects, and run administrative operations against the database or, in some cases, against the host operating system.

To protect their databases, SQL Server and application developers should follow a set of best practices that help mitigate the threat of SQL injection. For both SQL Server and application developers, the primary goal is isolating the SQL command language from user input as much as possible.

An introduction to SQL injection

SQL injection attacks occur when application or database code dynamically generates queries that concatenate command language with user input. The user enters information through the application interface that becomes part of the SQL statement executed against the database. Let’s look at an example: Figure 1 shows a simple web-based application with a single user input field.

The application returns a list of movies that the user has rented. The user enters an account ID and clicks Enter. The application or database concatenates that ID with a predefined SELECT statement so that the ID becomes part of the statement’s WHERE clause. If the user provides a legitimate ID, a list of movies is returned to the interface and everyone is happy.

Now suppose a hacker wants to break into the system to retrieve data or do damage. For instance, the hacker might try to delete all the data from the rentals table (assuming the hacker is able to acquire the name of the table through a separate hack). Instead of providing a simple user ID, the hacker might also add a TRUNCATE TABLE statement, as shown in Figure 2.

 

Notice that the value entered into the user interface now includes a semicolon, followed by the TRUNCATE TABLE statement. Because SQL Server supports the semicolon as a way to terminate a statement and supports multiple statements in a batch, the hacker can easily send those statements to the database. As a result, all data will be deleted from the rentals table.

This, of course, is a very simple scenario, but it illustrates the logic behind SQL injection attacks. The trick is to make sure this can’t happen, which is why it can help to have guidelines on what and what not to do.

Best practices for preventing SQL injection attacks

In an ideal world, the database team has full control of how an application is permitted to access the database. For instance, the team might insist that all queries generated by an application be parameterized or that the application access the database only through stored procedures, leaving the application to call those procedures rather than issuing its own SQL statements.

But the reality of application development and maintenance can make such pronouncements difficult to enforce, particularly when dealing with legacy systems, mixed environments and colliding opinions. Even so, the database developer must remain vigilant to protect against SQL injection attacks. For this reason, we’ve come up with a set of best practices that you should consider when creating database components to support various types of applications. Not all of these suggestions will fit every scenario and some might be only partially applicable, but each one is worth a closer look. From there, you can apply what best fits your needs and situation.

Use stored procedures

When possible, permit applications to interact with the database only through stored procedures. That way, the database account used by the application requires only the permissions necessary to execute the stored procedure, without needing permissions to access the underlying tables. Even if the application code is susceptible to SQL injection attacks, those attacks will fail because the application lacks the permissions necessary to access or manipulate those tables. Plus, stored procedures type-check input parameters, which can help mitigate an attack if a hacker tries to inject a value that violates the type.

Despite these advantages, stored procedures alone are not enough to ward off SQL injections. They should be part of an overall security strategy against such attacks. However, some stored procedures are safer to implement than others. For example, static stored procedures don’t take parameters and therefore can’t be injected with rogue code. Stored procedures that contain only parameterized SQL are also resistant to SQL injection because the data language is kept separate from the command language. In other words, avoid dynamic SQL within your stored procedures whenever possible.
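A minimal sketch of such a parameterized procedure, reusing the movie-rentals example from earlier (object names are hypothetical):

-- The @AccountID parameter is type-checked and never concatenated
-- into the command text, so injected SQL is treated as data.
CREATE PROCEDURE dbo.GetRentals
    @AccountID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT MovieTitle, RentalDate
    FROM dbo.Rentals
    WHERE AccountID = @AccountID;
END;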

Use dynamic SQL only when you can’t avoid it

Dynamic SQL can greatly increase your risk of SQL injection attacks when the command language is concatenated with user input. In some cases, however, it’s not possible to avoid dynamic SQL. For example, you might want to define a stored procedure that creates a database login, with the login name passed in as a parameter. The problem is that the CREATE LOGIN statement doesn’t accept a variable value for a login name, so you have to build your statement dynamically in order to pass in the name.

One way you can help mitigate the risks associated with dynamic SQL is by properly escaping all user input. Escaping user-supplied values helps to render special characters harmless that can be passed in with that input, such as brackets or single quotes. These characters, when used with other language elements, can pose a threat to the database when concatenated with the static part of the SQL statement. To escape these characters, use the QUOTENAME or REPLACE function as appropriate to handle identifiers and string values. When using either function, be sure to properly calculate the buffer length to allow for the escaped characters or you’ll be opening yourself up to truncation attacks, a type of SQL injection attack that uses truncation to inject rogue code.

Another method you can use to prevent SQL injection attacks is to use the sp_executesql system stored procedure to execute your dynamic SQL, rather than a simple EXECUTE statement. By using sp_executesql, you have a way to parameterize the dynamic SQL so the data remains separate from the command language. But be sure to use the stored procedure properly; don’t use it simply as a replacement for EXECUTE. You must include the parameters in the SQL statement when you create it and again when you call the sp_executesql stored procedure.
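A hedged sketch of the CREATE LOGIN scenario described above (the procedure and parameter names are assumptions):

CREATE PROCEDURE dbo.CreateAppLogin
    @LoginName sysname,
    @Password nvarchar(128)
AS
BEGIN
    -- QUOTENAME escapes the identifier; the second call wraps the
    -- password in single quotes and doubles any embedded quotes.
    DECLARE @sql nvarchar(max) =
        N'CREATE LOGIN ' + QUOTENAME(@LoginName) +
        N' WITH PASSWORD = ' + QUOTENAME(@Password, '''') + N';';
    EXEC sp_executesql @sql;
END;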

Use the principle of least access when granting database access

Even if you can limit application access to stored procedures and avoid using dynamic SQL, you must still ensure that you’re restricting database access to the fullest degree. One of your best defenses is to follow the strategy of least privilege. Every database account should be assigned the least privileges necessary to access the database. That is why restricting access to stored procedures can be so effective. Ideally, you grant execute access to the procedures and permit no access to anything else. If you do permit an application to execute SQL statements directly, the application should use a database account with the least privileges necessary, with specific controls over exactly what that account can read and modify. Under no circumstance should you assign an administrative account (such as sysadmin) to an application.

Even within a stored procedure, you should follow the strategy of least access. For example, if you use the EXECUTE AS clause to run the SQL statements in the procedure, specify an account with only minimal privileges. If a high-privilege operation must be performed, create a stored procedure to perform that operation and sign the stored procedure with a certificate. The goal is to ensure that even if an attacker were to discover a security hole in the application, there would be little they could do. Applications that access the database should always be limited to a low-privileged account that only has the minimum rights required to execute the statements it’s permitted to submit to the database.

Use testing and monitoring to guard against SQL injection

No matter how careful you’ve been in protecting against SQL injection attacks, you should still run your database code through the necessary checks to make sure it’s safe. That begins with a code review that looks specifically for issues related to SQL injection susceptibilities such as improper dynamic SQL. From there, you can move on to the QA phase. In addition to any normal testing and verification processes that would be performed, the database, along with the application it supports, should be subjected to SQL injection attempts to try to find vulnerabilities. The testing process should include all levels of SQL injection attacks, including those that attempt to truncate data. Finally, once the database has gone live, log files and other tracking devices should be used to monitor the database for any indication of SQL injections.

Protecting against SQL injection attacks

The guidelines suggested above only touch upon the underlying topics, but they should point you in the right direction. Given the proliferation of mobile apps and the growing need to synchronize data among multiple devices, the need for vigilance is greater than ever. It takes only a single vulnerability to put your entire database at risk, and database security is no place to take shortcuts. Database developers and application developers must work together to avoid such vulnerabilities. SQL injection attacks can be prevented, but only if you take the steps necessary to protect your system.

Posted in SQL

Reasons to use Microsoft SQL Server 2012

Posted by Alin D on September 5, 2012

The last three releases of SQL Server had unofficial labels, each based on the type of user that the most new features were geared toward. For example, SQL Server 2005 was considered a developer release, while SQL Server 2008 was viewed as a release for database administrators. SQL Server 2008 R2, with Microsoft’s PowerPivot data analysis tool and enhancements to both SQL Server Analysis Services and Reporting Services, was widely known as a business intelligence (BI) release.
Microsoft released the first community technology preview (CTP) of the next version of SQL Server, code-named Denali, at November’s Professional Association for SQL Server (PASS) Summit 2010 in Seattle. It appears that this release won’t continue the labeling tradition; instead, SQL Server Denali has something for everybody. And several of the new features make Denali compelling enough to start evaluating and testing it now, so you’ll be ready to upgrade once the commercial release becomes available later this year or next year.

Column-Store Indexes

First, there’s Project Apollo, which harnesses the same column-store indexes that power PowerPivot in the database engine. Fueled by the database architecture called VertiPaq, column-store indexes are designed to provide significant performance gains over regular indexes.

In a regular index, indexed data from each row is kept together on a single page, and data in each column is spread across all of the pages. A column-store index packs data from each column together, so each page contains data from a single column. Plus, the index data for each column is compressed, and since columns often contain highly repetitive values, you’re looking at a high compression ratio. All this means fewer pages in the index and fewer pages that need to be scanned if you’re selecting a small number of columns.
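Creating a column-store index looks much like creating an ordinary nonclustered index. The sketch below uses the syntax that eventually shipped in SQL Server 2012; the fact table and column names are hypothetical:

-- A nonclustered column-store index over the columns most often
-- scanned by analytic queries against a hypothetical fact table.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (OrderDateKey, ProductKey, Quantity, SalesAmount);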

Since the nature of the data stored in SQL Server varies from instance to instance, it’s difficult to know how much faster data retrieval will be in real-world applications, but Microsoft says it could run from several times to several hundred times faster.

From a business standpoint, this feature could change the way users work with your data warehouse. Traditionally, data warehouse tables are too big to query in real time, so they aren’t suitable for ad hoc queries: The data in those tables needs to be aggregated, cubes need to be built or reports need to be generated and cached overnight in order to provide users with timely responses.

Let’s say you have $1,000 in your checking account and write $500 in checks. But look online soon afterward, and your balance still shows that $1,000. Banks sometimes take a while to refresh account balances because with large datasets, it’s either impractical or nearly impossible to do real-time updates. Similarly, data warehouses often have to run extract, transform and load (ETL) processes at night to update and aggregate data.

But a column-store index can potentially run a multi-hour query in minutes and a multi-minute query in seconds, so your business could switch to real-time queries. That means no more waiting for days for the data to be collected and aggregated. The bottom line? Your business users and executive decision makers will have the data they need much more readily.

AlwaysOn

Each version of SQL Server has either added new high-availability features or improved existing ones. But while there are several viable options, such as database mirroring, log shipping or SQL Server clustering, each has its own disadvantages, so you need to be ready to compromise when choosing one technology over another.

SQL Server Denali introduces a new technology called AlwaysOn, or High Availability Disaster Recovery (HADR). According to Microsoft, developing it was a significant engineering project, exceeding the effort required to rewrite the SQL Server engine for version 7.0 in the late 1990s, after Microsoft and original development partner Sybase Inc. went their own ways on the database. Look at AlwaysOn as a combination of what worked best in database mirroring and clustering, with new capabilities added based on user feedback.

For example, database mirroring works on a single database, but often a database accesses objects in other databases on the same server. So, let’s say Database A accesses objects in Database B using views or synonyms; those queries work only if both databases are on the same server. But if Database A fails over to the mirror server, the references to the objects in Database B become invalid, and your application starts generating errors.

To enable cross-database dependency in a mirroring environment, AlwaysOn introduces something called an availability group. You can define a group of databases as an availability unit; then they all fail over as a unit, and your cross-database references will remain functional. AlwaysOn also lets you configure several replicas of the primary database. These replicas can be made read-only for reporting purposes. If your primary database fails, one of the replicas can take over in the primary role.
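A rough sketch of what defining an availability group looks like, using the syntax that eventually shipped in SQL Server 2012 (all server, database, and endpoint names here are hypothetical):

CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDb, SalesArchiveDb
REPLICA ON
    'PRIMARYSRV' WITH (
        ENDPOINT_URL = 'TCP://primarysrv.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    'SECONDARYSRV' WITH (
        ENDPOINT_URL = 'TCP://secondarysrv.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

Because SalesDb and SalesArchiveDb fail over as a unit, cross-database views or synonyms between them keep working after a failover, and the secondary replica can serve read-only reporting queries.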

For many businesses, these improved HADR features will be the most attractive reason to upgrade to SQL Server Denali. These days, more e-commerce databases need to be available 24/7, and even a short outage could cost you revenue as well as customers. Therefore, a high-availability option that provides seamless failover with minimal downtime and makes it easier to bring up primary databases on secondary servers can provide significant return on investment.

Data Visualization, aka Project Crescent

This feature is probably the most exciting new BI offering in SQL Server Denali, and more specifically, in the Microsoft self-service BI mix. Crescent continues data visualization enhancements delivered in SQL Server Reporting Services 2008 R2, and later in PowerPivot. It is a Web-based report designer that utilizes Microsoft’s Silverlight application framework and provides ad hoc reporting to end users with an easy-to-use presentation tool and powerful visualization components.

Behind the scenes, Crescent utilizes Microsoft’s Business Intelligence Semantic Model (BISM), a new technology that makes building BI models easier than by using traditional cubes. And that’s made it easier for less-technical users to create BI applications. BISM uses the same VertiPaq technology that’s behind column-store indexes, allowing fast response times when querying large sets of data. When SQL Server Denali was previewed at the PASS Summit, it was Crescent that caused the biggest stir. Audiences were impressed with the demonstrations of the new tool, especially its visual presentation of data, which goes way beyond traditional charts or key performance indicators.

Project Crescent will provide value to companies that want to enable less-technical staff — managers, executives, marketing departments, researchers — to browse data, run ad hoc reporting and present data in a visually striking format. It will also be very useful to companies that specialize in collecting, processing and selling data in report formats.

So not only will Crescent make reports look a whole lot cooler, it will also increase perception of the value of the reports and the work that went into them. Think of it as visiting a fancy French restaurant — the creative presentation of food on your platter not only enhances your enjoyment of the meal, it increases your perception of its value.

Upgrading to SQL Server Denali when it’s released is bound to have a significant impact. Its column-store index technology will give business users quicker, timelier access to the data stored in data warehouses. With Project Crescent, you’ll be able to put together flashier reports. And if all works as promised, AlwaysOn will reduce the downtime of your SQL Server installations.

Some words of caution, though: The initial previews of Denali are impressive, but we still haven’t seen the “small print” on features — things like restrictions, limitations and licensing requirements. In early 2011, column-store indexes and Project Crescent weren’t even available for evaluation, since they weren’t included in the first CTP. AlwaysOn was, but in a limited feature set. Once all of these features are fully available in a public CTP, SQL Server users will get a better picture of how they can best use them.


How to manage SQL 2012 with PowerShell

Posted by Alin D on June 24, 2012

The graphical management tools provide just about everything you need to work with SQL Server. Still, there are many times when you might want to work from the command line, such as when you are working on a Windows Server 2008 R2 Core installation. To help with all your command-line needs, SQL Server 2012 includes the SQL Server provider for Windows PowerShell (also known as “SQL Server PowerShell”). To work with SQL Server via Windows PowerShell, you must first open a Command Prompt window or Windows PowerShell prompt and then start SQL Server PowerShell by typing sqlps at the command line.

Windows PowerShell introduces the concept of a cmdlet (pronounced “commandlet”). A cmdlet is the smallest unit of functionality in Windows PowerShell. Cmdlet names are not case-sensitive. SQL Server PowerShell cmdlets include the following:

- Backup-SQLDatabase: Performs backup operations on SQL Server databases (see the example after this list).

- Convert-UrnToPath: Converts a SQL Server Management Object Uniform Resource Name (URN) to a SQL Server provider path. The URN indicates a management object’s location within the SQL Server object hierarchy. If the URN path has characters not supported by Windows PowerShell, the characters are encoded automatically.

- Decode-SQLName: Returns an unencoded SQL Server identifier when given an identifier that has been encoded.

- Encode-SQLName: Encodes special characters in SQL Server identifiers and name paths to formats that are usable in Windows PowerShell paths. The characters encoded by this cmdlet include :/%<>*?[]|. If you don’t encode these characters, you must escape them by using the single quotation mark (‘) character.

- Invoke-PolicyEvaluation: Evaluates management policies applied to SQL Server instances. By default, this cmdlet reports compliance but does not enforce compliance. To enforce compliance, set -AdHocPolicyEvaluationMode to Configure.

- Invoke-Sqlcmd: Runs a Transact-SQL (T-SQL) or XQuery script containing commands supported by the SQLCMD utility. By default, this cmdlet doesn’t set any SQLCMD variables or return message output; only a subset of SQLCMD commands can be used.

- Restore-SQLDatabase: Performs restore operations on SQL Server databases.
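For instance, a minimal Backup-SqlDatabase call might look like the following; the server name, database, and backup file path are hypothetical:

Backup-SqlDatabase -ServerInstance "DbServer23" -Database "OrderSystem" -BackupFile "C:\Backups\OrderSystem.bak"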

To get detailed information about a cmdlet, type get-help cmdletname -Detailed, where cmdletname is the name of the cmdlet you want to examine. To get detailed information about the SQL Server provider, which provides SQL Server functionality for Windows PowerShell, type get-help sqlserver | more.

 

You can use the sqlps utility on any computer where you’ve installed SQL Server or the command-line management tools. The sqlps utility starts a Windows PowerShell session with the SQL Server PowerShell provider imported so that you can use its cmdlets and work with instances of SQL Server. When you are working with Windows PowerShell or scripts, you can import the SQLPS module to load the SQL Server provider, which automatically loads the required assemblies and initializes the environment. While you previously needed to use an initialization script, this is no longer required so long as you import the SQLPS module prior to trying to access the SQL Server instance. For best results, import the SQLPS module using the following command:

   Import-Module "sqlps" -DisableNameChecking

You can work with cmdlets by executing commands directly at the shell prompt or by running commands from scripts. You can enter any command or cmdlet that you can run at the Windows PowerShell command prompt into a script by copying the related command text to a file and saving the file with the .ps1 extension. You can then run the script in the same way that you would any other command or cmdlet. However, when you are working with Windows PowerShell, the current directory might not be part of the environment path. For this reason, you might need to use the ./ notation when you run a script in the current directory, such as the following:

./runtasks

The current execution policy for SQL Server PowerShell controls whether and how you can run scripts. Although the default configuration depends on which operating system and edition you’ve installed, you can quickly determine the execution policy by entering get-executionpolicy at the Windows PowerShell prompt.

To set the execution policy to require that all scripts have a trusted signature to execute, enter the following command:

set-executionpolicy allsigned

To set the execution policy so that scripts downloaded from the web execute only if they are signed by a trusted source, enter:

set-executionpolicy remotesigned

To set the execution policy to run scripts regardless of whether they have a digital signature and work in an unrestricted environment, you can enter the following command:

set-executionpolicy unrestricted

For administration at the Windows PowerShell prompt, you use Invoke-Sqlcmd to run T-SQL or XQuery scripts containing commands supported by the SQLCMD utility. Invoke-Sqlcmd fully supports T-SQL and the XQuery syntax supported by the Database Engine, but it does not set any scripting variables by default. Invoke-Sqlcmd also accepts a subset of SQLCMD commands. By default, results are formatted as a table, with the first result set displayed automatically and subsequent result sets displayed only if they have the same column list as the first result set.

The basic syntax you use most often with Invoke-Sqlcmd follows:

Invoke-Sqlcmd [-ServerInstance ServerStringOrObject]
    [-Database DatabaseName] [-EncryptConnection]
    [-Username UserName] [-Password Password]
    [[-Query] QueryString] [-DedicatedAdministratorConnection]
    [-InputFile FilePath] [ | Out-File -FilePath FilePath]

The command’s parameters are used as follows:

-Database: Specifies the name of the database that you want to work with. If you don’t use this parameter, the database that is used depends on whether the current path specifies both the SQLSERVER:\SQL folder and a database name. If both are specified, Invoke-Sqlcmd connects to the database that is specified in the path. Otherwise, Invoke-Sqlcmd connects to the default database for the current login ID.

NOTE Use -IgnoreProviderContext to force a connection to the database that is defined as the default for the current login ID.

-DedicatedAdministratorConnection: Ensures that a dedicated administrator connection (DAC) is used to force a connection when one might not be possible otherwise.

-EncryptConnection: Enables Secure Sockets Layer (SSL) encryption for the connection.

-InputFile: Provides the full path to a file that should be used as the query input. The file can contain T-SQL statements, XQuery statements, SQLCMD commands, and scripting variables. Spaces are not allowed in the file path or file name.

-Password: Sets the password for the SQL Server Authentication login ID that is specified in -Username.

-Query: Defines one or more queries to be run. The queries can be T-SQL queries, XQuery statements, or SQLCMD commands. Separate multiple queries with semicolons.

TIP You do not need to use the SQLCMD GO command. Escape any double quotation marks included in the string, and consider using bracketed identifiers such as [EmpTable] instead of quoted identifiers such as "EmpTable". To ensure that the message output is returned, add the -Verbose parameter. -Verbose is a parameter common to all cmdlets.

-ServerInstance: Specifies the name of an instance of the Database Engine that you want to work with. For default instances, specify only the computer name, such as DbServer23. For named instances, use the format "ComputerName\InstanceName", such as DbServer23\EmployeeDb.

-Username: Sets the login ID for making a SQL Server authentication connection to an instance of the Database Engine. You also must set the password for the login ID.

NOTE By default, Invoke-Sqlcmd attempts a Windows authentication connection by using the Windows account running the Windows PowerShell session. Windows authentication connections are preferred. To use a SQL Server authentication connection instead, specify the user name and password for the SQL login ID that you want to use.

With this in mind, you could replace the following T-SQL statements:

USE OrderSystem;
GO
SELECT * FROM Inventory.Product
ORDER BY Name ASC
GO

with the following Windows PowerShell command:

Invoke-Sqlcmd -Query "SELECT * FROM Inventory.Product ORDER BY Name ASC" -ServerInstance "DbServer23\OrderSystem"

You also could read the commands from a script, as shown in Sample 1-1.

Sample 1-1 Example SQL Command Script

Contents of SqlCmd.sql script:

 SELECT * FROM Inventory.Product
 ORDER BY Name ASC

Command to run the script:

 Invoke-Sqlcmd -InputFile "C:\Scripts\SqlCmd.sql"


When you work with Windows PowerShell, don’t overlook the importance of SQL Server support being implemented through a provider. The data that providers expose appears as a drive that you can browse. One way to browse is to get or set the location with respect to the SqlServer: provider drive. The top of the hierarchy exposed is represented by the SQL folder, then there is a folder for the machine name, and finally, there is a folder for the instance name. Following this, you could navigate to the top-level folder for the default instance by entering

Set-Location SQLSERVER:\SQL\DbServer23\Default

You could then determine the available database structures by entering Get-ChildItem (or one of its aliases, such as ls or dir). To navigate logins, triggers, endpoints, databases, and any other structures, you set the location to the name
of the related folder. For example, you could use Set-Location Databases and then enter Get-ChildItem to list available databases for the selected instance. Of course, if you know the full path you want to work with in the first place, you also can access it directly, as shown in the following example:

Set-Location SQLSERVER:\SQL\DbServer23\Default\Databases\OrderSystem

Here, you navigate to the structures for the OrderSystem database on DbServer23’s default instance. If you then want to determine what tables are available for this database, you could enter:

Get-ChildItem Tables

Or you could enter:

Set-Location Tables
Get-ChildItem
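You can also combine provider navigation with ordinary cmdlets. The following sketch (reusing the hypothetical DbServer23 and OrderSystem names from above) lists the tables in the database sorted by schema and name:

Set-Location SQLSERVER:\SQL\DbServer23\Default\Databases\OrderSystem\Tables
Get-ChildItem | Sort-Object -Property Schema, Name |
    Select-Object -Property Schema, Name, RowCount

Here, Schema, Name, and RowCount are properties that the provider exposes from the underlying SQL Server Management Objects (SMO) table objects.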

To manage SQL Server 2012 from a computer that isn’t running SQL Server, you need to install the management tools. In the SQL Server Installation Center, select Installation, and then click the New Installation Or Add Features To An Existing Installation option. When the wizard starts, follow the prompts. On the Feature Selection page, select the Management Tools—Basic option to install Management Studio, SQLCMD, and the SQL Server provider for Windows PowerShell.

For remote management via Windows PowerShell, you need to ensure that Windows Remote Management (WinRM) and Windows PowerShell are both installed and made available by using the Add Features Wizard. You also need to enable remote commands on both your management computer and the server running SQL Server.

You can verify the availability of WinRM and configure Windows PowerShell for remoting by following these steps:

1. Click Start, All Programs, Accessories, and Windows PowerShell. Then start Windows PowerShell as an administrator by right-clicking the Windows PowerShell shortcut and selecting Run As Administrator.

2. The WinRM service is configured for manual startup by default. You must change the startup type to Automatic and start the service on each computer you want to work with. At the PowerShell prompt, you can verify that the WinRM service is running by using the following command:

        get-service winrm

   If the service is running, the output looks similar to the following:

        Status   Name    DisplayName
        ------   ----    -----------
        Running  WinRM   Windows Remote Management

3. If the service is stopped, enter the following command to start the service and configure it to start automatically in the future:

        set-service -name winrm -startuptype automatic -status running

4. To configure Windows PowerShell for remoting, type the following command:

        Enable-PSRemoting -force

   You can enable remoting only when your computer is connected to a domain or private network. If your computer is connected to a public network, you need to disconnect from the public network and connect to a domain or private network and then repeat this step. If one or more of your computer’s connections has the Public connection type but you are actually connected to a domain or private network, you need to change the network connection type in the Network And Sharing Center and then repeat this step.

In many cases, you can work with remote computers in other domains. However, if the remote computer is not in a trusted domain, the remote computer might not be able to authenticate your credentials. To enable authentication, you need to add the remote computer to the list of trusted hosts for the local computer in WinRM. To do so, type the following:

winrm s winrm/config/client '@{TrustedHosts="RemoteComputer"}'

where RemoteComputer is the name of the remote computer, such as:

winrm s winrm/config/client '@{TrustedHosts="DbServer23"}'

When you are working with computers in workgroups or homegroups, you must use HTTPS as the transport or add the remote machine to the TrustedHosts configuration settings. If you cannot connect to a remote host, you can verify that the service on the remote host is running and is accepting requests by running the following command on the remote host:

winrm quickconfig


This command analyzes and configures the WinRM service. If the WinRM service is set up correctly, you see output similar to the following:

WinRM already is set up to receive requests on this machine.
WinRM already is set up for remote management on this machine.

If the WinRM service is not set up correctly, you see errors and need to respond affirmatively to several prompts that allow you to configure remote management automatically. When this process is complete, WinRM should be set up correctly. Don’t forget that you need to enable remote management on the database server as well as your management computer.
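Once remoting is set up on both ends, you can run SQL Server PowerShell commands against the remote server from your management computer. A minimal sketch (the server name is the hypothetical DbServer23 used earlier):

Invoke-Command -ComputerName DbServer23 -ScriptBlock {
    Import-Module "sqlps" -DisableNameChecking
    Invoke-Sqlcmd -Query "SELECT @@VERSION;" -ServerInstance "DbServer23"
}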



New T-SQL commands in SQL Server 2012 – part 2

Posted by Alin D on May 18, 2012

I’d gladly accept a dime for every time I’ve heard an Oracle programmer say to a SQL Server programmer, “You’re just getting that feature now? Why, we Oracle users have had that for years!” Well, here’s another case where Oracle folks can finally stop thumbing their nose at SQL Server developers: the SEQUENCE object.

Sequences work a little like identity columns, except they’re not bound to a particular table but rather to a given schema. When created, the object follows certain rules to define the sequence. Sequences are also not inherently unique by design, unlike identity columns, which means they can be used in a whole variety of contexts at once — for example, they can be applied to tables or used as part of loops in a procedure.

Let’s take a look:

CREATE SEQUENCE mySchema.[Identity]
START WITH 1
INCREMENT BY 1
MINVALUE 1
NO MAXVALUE
NO CYCLE;

This would create a new sequence that starts at 1, increments by 1, has no maximum value and does not restart (CYCLE) from MINVALUE. The lack of a maximum value implies that no cycling takes place, but I wanted to be explicit in this example.

On the other hand, there’s this:

CREATE SEQUENCE mySchema.Identity2
START WITH -10
INCREMENT BY 1
MINVALUE -10
MAXVALUE 10
CYCLE;

This starts a sequence at -10, which increments by 1 until it reaches 10 and then starts over again at the minimum value, -10. (The START WITH value must lie between MINVALUE and MAXVALUE, and an ascending sequence restarts at its MINVALUE when it cycles.) In this way, the numbering and behavior of sequences are a bit more flexible than identity columns. For instance, you could share a specific number sequence between multiple tables without having to manually draft code (such as a user-defined function) to do it.

To use sequence values, you use the NEXT VALUE FOR statement. Here’s one example using a SELECT:

SELECT NEXT VALUE FOR mySchema.[Identity] AS IdentityFromSequence;

This returns the next value for mySchema.[Identity] and increments the sequence according to its rules. Another way to use it would be with an INSERT:

INSERT myTable (myColumn)
VALUES (NEXT VALUE FOR mySchema.[Identity]);

You could also use it in the context of a variable:

DECLARE @myVariable int;
SET @myVariable = NEXT VALUE FOR mySchema.[Identity];

TRY_CONVERT and TRY_PARSE
These are minor but useful new additions that take some of the pain out of converting data types. TRY_CONVERT takes a value passed to it and attempts to return a valid conversion value. If it can’t, it returns null instead of throwing an error. Here’s an example:

SELECT TRY_CONVERT(DATETIME, 'This is not a date')

It returns null. This is useful if you want to use the null value as part of the processing for a procedure, for instance, and don’t want it to choke if it’s passed an invalid value.

TRY_PARSE is the same kind of variant on the PARSE command: it returns null rather than raising an error when parsing fails. Microsoft recommends using PARSE for converting strings to dates or numbers and CONVERT for more general type conversions.
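For example, TRY_PARSE accepts an optional culture for interpreting the string; the literals and culture code in this small sketch are arbitrary:

SELECT TRY_PARSE('15/05/2012' AS datetime2 USING 'fr-FR'); -- returns 2012-05-15
SELECT TRY_PARSE('not a number' AS int);                   -- returns NULL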

DATEFROMPARTS
This returns a date value based on a year, month, and day value.

SELECT DATEFROMPARTS (2012,05,1)

This function returns a date value of May 1, 2012.

IIF
This expression seems to have been ported in from Excel (the favorite data front end of pivot-table lovers everywhere). It’s essentially a shorthand way of creating a CASE statement:

SELECT IIF (1>2,'Impossible','Possible')

The first part is a condition to test. The second is what expression to return if the test returns true; the third is returned if the test proves false. Note that you can only nest up to 10 IIFs, in the same way you can only nest up to 10 CASE statements. (Frankly, if you’re nesting more than 10 of those, you need to rethink the way you’re writing your T-SQL statements.)

CONCAT
This is a quick way to create a string from one or more values, all of which are implicitly converted to strings.

SELECT CONCAT('Once upon','a time',32+32)

This would return this:

Once upona time64

Note the missing spaces. CONCAT doesn’t add those between expressions.

Also note that the return type will depend largely on the arguments supplied, which can range from varchar to nvarchar(max). Generally, CONCAT tries to return a type that is best suited to the total length of the string and the expressions passed to it. For instance, if none of the input arguments is a supported large object type, the return type is never more than 8,000 characters.

There’s a little something for everyone in these new features. Those who want slightly better error handling have the THROW statement; those who are tired of building custom pagination solutions for data have OFFSET/FETCH; those who are looking for more elegant ways to rename result columns from stored procedures have WITH RESULT SETS; and those looking for a more flexible approach to serial numbering have sequences.


New T-SQL commands in SQL Server 2012 – Part 1

Posted by Alin D on May 18, 2012

Each version of SQL Server has brought incremental changes to Transact-SQL (T-SQL), making programmers’ jobs much easier and rolling in features that the SQL Server user community demands. SQL Server 2012 is no different. In this article, I will examine several new additions to T-SQL that ought to make users a little happier.

WITH RESULT SETS (in EXECUTE)
WITH RESULT SETS addresses a problem that plagues most anyone who has coded a stored procedure as part of their business logic: names for columns tend to be set in stone.

Let’s say you created a stored procedure that returns a few columns of data with specific names and data types. Every time you run that stored procedure, though, you can get only the results with those names and data types. If you change the stored procedure to modify the output, you run the risk of becoming incompatible with the other components — both inside SQL Server and outside — that expect the data to return in the same format.

There have been a number of attempts to fix this problem—for instance, creating a parallel stored procedure that returns results in a new format and then gradually migrating all the logic over to the new procedure. Unfortunately, that often means maintaining two stored procedures side by side during the migration period, for however long that lasts.

WITH RESULT SETS is another way to handle the problem, and in my opinion a much more elegant one. It lets you redefine the names and data types of a result set from a stored procedure through the instructions used to call the stored procedure. Here’s an example:

EXEC myStoredProcedure 123

It would normally return an int column named Result_Code. Due to changes in our business logic, we need that column to be named ResultCode (for example, because of a new naming standard that prohibits underscores in column names). Rather than make changes to the stored procedure itself, we just change how it’s invoked, like so:

EXEC myStoredProcedure 123
WITH RESULT SETS
([ResultCode] int NOT NULL)

Another trick you can use with WITH RESULT SETS is to return multiple result sets:

EXEC myStoredProcedure 123
WITH RESULT SETS
(
([ResultCode] int NOT NULL),([Result_Code] int NOT NULL)
)

This returns two result sets to the client — the first in the “new” format and the second in the “legacy” format. This way, you can let the client choose which one is appropriate from the same command.

OFFSET and FETCH (in ORDER BY)

Most SQL Server professionals find themselves going out of their way to provide results in a paginated format. There have been any number of half-hearted solutions to this, from simply caching the entire result set locally on the application side and doing the pagination there, to queries using NOT IN or other performance-sapping behaviors.

The new OFFSET clause builds pagination in elegantly, handling it entirely on the query side. Let’s say you write this:

SELECT [UserID], [UserName]
FROM [UserTable]
ORDER BY [Username] ASC
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY

The results are more or less what you’d expect: SQL Server reads from the table, skips the first 10 rows returned by the query and then returns only the next 10 rows. This is a lot less awkward than using nested queries with TOP x-type restrictions.

Note that when you retrieve results using OFFSET/FETCH, the results are considered a single isolated transaction, not a cursor-like object. Let’s say the following happens:

  1. You fetch the first 10 rows from a table using a command similar to the above.
  2. While you’re processing those rows, an INSERT is run against the table to add rows that would show up in the first page of your query.
  3. You fetch the next 10 rows, but those will be the next 10 rows including what was added to the table, not the next 10 rows based on the results from the first query.

Take heed when writing your application and make sure the way your data paginates under live conditions isn’t counterintuitive (or, worse, may create conditions that cause problems with data integrity).
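In practice, you would usually parameterize the page boundaries rather than hard-code them. A minimal sketch (the variable names and page size are arbitrary):

DECLARE @PageNumber int = 3, @PageSize int = 10;

SELECT [UserID], [UserName]
FROM [UserTable]
ORDER BY [UserName] ASC
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;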

THROW
Error handling in SQL Server is normally handled through the RAISERROR command. RAISERROR has a few restrictions, though: it can only raise an error number that is defined in sys.messages, although you can add custom errors with numbers greater than 50,000. (If you pass an ad hoc message string instead, the error number defaults to 50,000.) That means it’s most useful for raising system-level errors, rather than errors that relate specifically to your database. The new THROW command allows for error-catching operations that are better suited to T-SQL user applications. Let’s compare the ways it differs from RAISERROR so we can see how they stack up against each other and where each is appropriate.

  1. Most important: RAISERROR always generates a new exception whenever it’s called, so any previously generated exceptions during the routine (for instance, something outside of a CATCH block) are lost. THROW can re-throw the original exception that triggered the CATCH block, so it can provide more detailed context about the error.
  2. RAISERROR is used to generate application- and system-level error codes. THROW generates application-level errors (50,000 or greater) only.
  3. RAISERROR can only pass a custom error message if you use error code 50,000 or greater. THROW lets you pass any error text you want.
  4. RAISERROR supports token substitutions; THROW does not.
  5. RAISERROR supports any severity level of error; THROW only supports error severity 16.

In short, THROW is designed more to be used with T-SQL scripts and stored procedures where you need to return custom errors that are specific to the application you’re creating. Microsoft MVP Leonard Lobel also has a very good dissection of THROW vs. RAISERROR on his blog, with some detailed examples of where each is appropriate.
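A minimal sketch of both patterns, with a hypothetical table name and message text:

BEGIN TRY
    INSERT INTO dbo.Orders (OrderId) VALUES (1);
END TRY
BEGIN CATCH
    -- Clean up or log here, then re-throw the original exception
    -- with its original error number, message, and state.
    THROW;
END CATCH;

-- Raising a custom application-level error (number must be 50,000 or greater):
THROW 50001, 'The order could not be processed.', 1;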


 