Wednesday, 14 November 2007

Scripts to Drop and ReCreate Indexes in SQL Server 2005

The SQL Server 2005 "Generate Scripts" wizard does not have an option to generate the index scripts alone from a database. The scripts below can be used to drop and recreate the indexes in a database.

Steps to be followed

1. Create a function to retrieve the list of key columns used in an index definition (the function also handles multiple index columns in the case of a composite index):

CREATE FUNCTION dbo.fGetIndexCols
(
@objname VARCHAR(50),
@indid INT
)
RETURNS NVARCHAR(2078)
AS
BEGIN
DECLARE @i INT, @thiskey NVARCHAR(131), @keys NVARCHAR(2078), @objid BIGINT

SELECT @objid = OBJECT_ID(@objname)
SELECT @keys = INDEX_COL(@objname, @indid, 1), @i = 2
IF (INDEXKEY_PROPERTY(@objid, @indid, 1, 'IsDescending') = 1)
SELECT @keys = @keys + '(-)'

SELECT @thiskey = INDEX_COL(@objname, @indid, @i)
IF ((@thiskey IS NOT NULL) AND (INDEXKEY_PROPERTY(@objid, @indid, @i, 'IsDescending') = 1))
SELECT @thiskey = @thiskey + '(-)'

WHILE (@thiskey IS NOT NULL)
BEGIN
SELECT @keys = @keys + ', ' + @thiskey, @i = @i + 1
SELECT @thiskey = INDEX_COL(@objname, @indid, @i)
IF ((@thiskey IS NOT NULL) AND (INDEXKEY_PROPERTY(@objid, @indid, @i, 'IsDescending') = 1))
SELECT @thiskey = @thiskey + '(-)'
END

RETURN @keys
END
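Once the function exists, it can be called directly to check the key-column list of a single index. A quick sketch (the table name and index ID here are hypothetical):

```sql
-- Returns the comma-separated key list for index ID 2 on a
-- hypothetical dbo.Patient table; descending keys are marked '(-)'
SELECT dbo.fGetIndexCols('dbo.Patient', 2) AS IndexKeyColumns
```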



2. Execute the below script to generate the CREATE INDEX statements from the database. I have written this script to generate the scripts for NONCLUSTERED indexes; for clustered indexes, change the WHERE clause to type_desc = 'CLUSTERED'.

SELECT 'CREATE NONCLUSTERED INDEX [' + NAME + ']' + ' ON ' +
'[' + OBJECT_NAME(OBJECT_ID) + '] (' +
DBO.fGetIndexCols(OBJECT_NAME(OBJECT_ID), index_id) + ')' +
' ON [' + FILEGROUP_NAME(data_space_id) + ']' AS IndexScript
FROM sys.indexes
WHERE type_desc = 'NONCLUSTERED'
AND Is_Primary_Key = 0

3. Execute the below script to generate the DROP INDEX statements.

SELECT 'DROP INDEX [' + NAME + '] ON [' + OBJECT_NAME(OBJECT_ID) + ']' AS DropIndexScript
FROM sys.indexes
WHERE type_desc = 'NONCLUSTERED'
AND Is_Primary_Key = 0

I have continued this article with a modified query that generates indexes with included columns.

Thursday, 7 June 2007

SQL Server 2008 CTP is released

Microsoft has released the much-awaited CTP of SQL Server 2008 (popularly known as "Katmai"), designed to meet Microsoft's Data Platform vision. The SQL Server 2008 capabilities deliver on the four key areas of that vision.

  • Mission-Critical Platform – The SQL Server 2008 Declarative Management Framework (DMF) will allow you to manage your SQL Server configuration across many databases and servers by defining policy rules that are automatically applied, monitored and enforced. SQL Server 2008 also protects valuable information in existing applications and disconnected devices. In addition, SQL Server 2008 delivers predictable query performance with an optimized platform.
  • Dynamic Development – SQL Server 2008 along with the .NET Framework enables developers to build the next generation of applications. Developers are more productive because they work with business entities instead of tables and columns. They can build applications that enable users to take their data with them and synchronize their data with back-end servers.
  • Beyond Relational Data – SQL Server 2008 enables developers to consume any type of data, from XML to documents, and to build applications that incorporate location awareness and can solve existing globalization problems.
  • Pervasive Business Insight – SQL Server 2008 provides a scalable infrastructure that can manage reports and analysis of any size or complexity while at the same time empowering users because of its close integration with the Microsoft Office System. This enables IT to drive business intelligence throughout the organization. SQL Server 2008 makes great strides in data warehousing, enabling users to consolidate data marts in an enterprise data warehouse.

To know more about Microsoft's Data Platform vision and how SQL Server 2008 meets the needs of the next generation of data-driven applications, see Microsoft's white paper on the subject.

Microsoft also offers a free download of the CTP to those registered with the SQL Server 2008 CTP program.

Saturday, 2 June 2007

MARS (Multiple Active Result Sets)

MARS (Multiple Active Result Sets) is a new feature of SQL Server 2005 data access that allows multiple requests to interleave on the server. It allows execution of multiple requests within a single connection: a request can run and, during its execution, allow other requests to execute. However, MARS interleaves execution; it does not perform parallel execution.

The MARS infrastructure allows multiple batches to execute in an interleaved fashion, though execution can only be switched at well-defined points; in fact, most statements must run atomically within a batch. The following statements are supported for MARS:

  • SELECT
  • FETCH
  • READTEXT
  • RECEIVE
  • BULK INSERT (or the BCP interface)
  • Asynchronous cursor population

The behavior of MARS when more than one request runs under the same transaction varies by scenario.


Support for Multiple Active Result Sets (MARS) in Microsoft SQL Server 2005 increases the performance tuning options of an application. It brings the cursor-programming model closer to the performance and power of the relational engine's default processing mode. However, it cannot be considered a replacement for cursor programming.

Monday, 28 May 2007

Online Indexing in SQL Server 2005

Online indexing is a new feature available in SQL Server 2005: DBAs can create, rebuild, or drop indexes online, so index operations can be performed concurrently with update or query operations on the underlying table. When performed offline, these index operations hold exclusive locks on the underlying table and associated indexes; during online index operations, SQL Server 2005 eliminates the need for such exclusive locks.
Online Index Operation

The online index operation can be divided into three phases, Preparation, Build, and Final, which are explained in detail below.

Preparation Phase
During the preparation phase, the following activities take place:
  • A snapshot of the source table is acquired. When taking a snapshot of the table, a shared lock is obtained on the table. This is done for transactional level read consistency.
  • An intent shared (IS) lock is also obtained. This lock is maintained until the final phase.
  • A resource lock, INDEX_BUILD_INTERNAL_RESOURCE, is acquired to prevent concurrent DDL operations.
  • Metadata is created. The logical metadata consists of the index ID, index name, keys, and attributes.

Build Phase

During the build phase:

  • The new index structure created in the preparation phase is populated with the sorted data from existing data source.
  • The shared lock acquired in the preparation phase is released. Concurrent DML operations can be performed during the Build phase.
  • If the clustered index is created or rebuilt online, a temporary nonclustered index, called the mapping index, is created in the build phase.

When a clustered index is created or dropped and nonclustered indexes exist on that table, the preparation and build phases are repeated twice: once for the new clustered index and again for the nonclustered indexes.

Final Phase

The Final phase is used to inform the system that the index is ready. The following activities take place during this phase:

  • The IS lock is upgraded to a Schema Modify (Sch-M) or Shared(S) lock depending on the type of index operation.
  • If a clustered index is created, then all the nonclustered indexes are rebuilt. Due to the locks held on the table, concurrent DML operations are not allowed on the table at this time.
  • The Sch-M or S lock is released once the index is in the ready state.

Online indexing can be performed through the ONLINE option of the index DDL statements, as shown below:

CREATE INDEX IX_Patient ON PDS.Patient (Identifier) WITH (ONLINE = ON)

Best Practices

The following points should be considered before using the online indexing feature:

  • Back up the transaction log and truncate it before running large-scale online index operations.
  • It is recommended to have the SORT_IN_TEMPDB option set to ON. This will separate the index operations and the user transactions.
  • Set the recovery model to SIMPLE or BULK_LOGGED so that index transactions are minimally logged.
  • Do not run the online index operation in an explicit transaction. The log of the user database cannot be truncated until the explicit transaction ends.
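Rebuilds of existing indexes can also run online. This sketch reuses the IX_Patient index shown earlier and combines the ONLINE option with the SORT_IN_TEMPDB recommendation above:

```sql
-- Rebuild the index online, sorting intermediate results in tempdb
-- so the sort I/O is kept away from the user database
ALTER INDEX IX_Patient ON PDS.Patient
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```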

The Online Indexing feature is available only in the Enterprise Edition of SQL Server 2005.

Thursday, 3 May 2007

Performance tuning using Include columns in SQL Server 2005

SQL Server 2005 extends the functionality of nonclustered indexes by allowing nonkey columns to be added to the leaf level of a nonclustered index using the INCLUDE option of the CREATE INDEX statement. This INCLUDE option is a slight variation on the covering index, aimed at improved performance. By including nonkey columns, you can create nonclustered indexes that cover more queries. The benefits of using the INCLUDE option (also called a nonkey nonclustered index) are:


  • All data types are supported except text, ntext, and image, so there are more data type options than in a covering index.
  • The maximum number of columns that can be included is 1023, whereas only 16 key columns are allowed in a covering index.
  • Included columns are not considered by the Database Engine when calculating the number of index key columns or the index key size. The actual index key is narrower, so it can be more efficient and offer better performance, whereas in a covering index all of the columns are part of the key.

Indexes with included columns share the same disadvantages as covering indexes, such as:

  • More space is required to store indexes with non key columns. Non key column data is stored at both the leaf level of the index and in the table itself.
  • Larger indexes mean fewer rows can fit on a page, potentially increasing disk I/O.
  • Index maintenance is increased for data modifications, potentially hurting performance if non key columns are large and the database experiences a high level of data modifications.

Syntax for the INCLUDE option:

CREATE NONCLUSTERED INDEX IX_Employee_KeyColumn1
ON dbo.Employee (KEYCOLUMN1)
INCLUDE (NONKEYCOLUMN1, NONKEYCOLUMN2)

However, care should be taken before converting covering indexes to nonkey (included column) indexes. Compare the execution plans of both index options before deciding which is best.

Monday, 23 April 2007

Schema Management in SQL Server 2005

Schema changes can be managed effectively in SQL Server 2005 by understanding its internal behavior. The following points should be kept in mind before changing the schema of large tables.

The fastest schema changes will occur when

  • Adding a column with NULL property
  • Adding a column with NULL property and DEFAULT
  • Changing NOT NULL to NULL property
  • Adding DEFAULT constraint
  • Dropping CHECK or DEFAULT constraint

The slowest schema changes have accompanying read and write operations such as:

  • CHECK constraints require a read of the entire table to enforce the CHECK.
  • Batch updates occur when adding a NULL column in combination with DEFAULT and WITH VALUES clauses
  • Batch updates occur when adding NOT NULL column (to provide Default value)
  • Batch updates occur when changing NULL to NOT NULL property
  • Batch updates occur when column data types are changed or when lengths change. The only exception is increasing the length of varchar.

The schema change best practice for performance and concurrency on large tables is to add columns with the NULL property and avoid batch updates. More detailed coverage of performance and concurrency issues during schema changes is available from Microsoft.
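The fast/slow contrast above can be sketched on a hypothetical large table:

```sql
-- Fast: nullable column, metadata-only change; no rows are touched
ALTER TABLE dbo.LargeTable ADD Notes VARCHAR(100) NULL;

-- Also fast: nullable column with a DEFAULT (no WITH VALUES),
-- still a metadata-only change
ALTER TABLE dbo.LargeTable ADD Region INT NULL DEFAULT 0;

-- Slow: NOT NULL column with a default; every existing row
-- must be updated to materialise the default value
ALTER TABLE dbo.LargeTable ADD Status INT NOT NULL DEFAULT 0;
```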

Sunday, 22 April 2007

SQL Server 2005 and 2000 on same machine

Installing SQL Server 2005 and SQL Server 2000 on the same machine may cause the SQL Server 2000 server to be invisible to client machines. This problem can be resolved using the SQL Server Browser service, which comes with the SQL Server 2005 installation.

What is Browser Service?

SQL Server Browser runs as a Windows service on the server. SQL Server Browser listens for incoming requests for SQL Server resources and provides information about SQL Server instances that are installed on the computer. SQL Server Browser contributes to three actions:

  • Browsing a list of available servers
  • Connecting to the correct server instance
  • Connecting to Dedicated Administrator Connection (DAC) endpoints

For each instance of the Database Engine, the SQL Server Browser service (sqlbrowser) provides the instance name and the version number. SQL Server Browser is installed with SQL Server 2005 and provides assistance for previous versions of SQL Server that are running on that computer, starting with SQL Server 7.0.
In SQL Server 2000, the identification of the server connection endpoints was performed by the SQL Server service. SQL Server 2005 replaces that function with the SQL Server Browser service. If you install SQL Server on a computer that is also running SQL Server 2000 or MSDE, they must be upgraded to SP3 or later. Versions earlier than SP3 do not properly share port 1434 and might not make your SQL Server instances available to requesting client applications. Although you can change the startup order so that the SQL Server Browser service starts before SQL Server 2000 or MSDE, the recommended resolution is to update all older versions of SQL Server to the latest service pack.

Troubleshooting with the Browser service

When an instance of SQL Server 2000 is installed on the computer, if SQL Server Browser is not running, the SQL Server 2000 listener service will start. If SQL Server Browser starts after the listener service, it waits five seconds for SQL Server 2000 to give up port 1434. If that does not occur, SQL Server Browser will not start.
To resolve this problem with versions of SQL Server 2000 earlier than SP3, stop SQL Server 2000, start SQL Server Browser, and then restart SQL Server 2000. The SQL Server 2000 listener service will continue to try to start on port 1434, so the SQL Server 2000 instance should be upgraded to SP3 as soon as possible.
When your application accesses SQL Server across a network and you stop or disable the SQL Server Browser service, you must assign a specific port number to each instance and write your client application code to always use that port number.

Tuesday, 27 March 2007

Paging with MS SQL Server 2005

Paging can be achieved easily in SQL Server 2005 by combining the ROW_NUMBER function with a CTE (Common Table Expression). The ROW_NUMBER function makes it possible to issue a query and return just a subset of the result set, and the CTE lets us define a temporary result set using the WITH clause. The example below shows how to retrieve the first ten patients from a result set using a CTE and the ROW_NUMBER function.

USE Patient_DB;
WITH PatientDetail AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY Surname) AS RowNumber
FROM Patient
)
SELECT * FROM PatientDetail
WHERE RowNumber BETWEEN 1 AND 10;

The OVER clause determines the partitioning and ordering of the intermediate result set before the ROW_NUMBER function is applied. The SELECT statement can be parameterized to retrieve data for a specified range, as shown below.

USE Patient_DB;
DECLARE @RowNumberFrom INT, @RowNumberTo INT
SET @RowNumberFrom = 11
SET @RowNumberTo = 20;
WITH PatientDetail AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY Surname) AS RowNumber
FROM Patient
)
SELECT * FROM PatientDetail
WHERE RowNumber BETWEEN @RowNumberFrom AND @RowNumberTo;
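The parameterized form is typically wrapped in a stored procedure so the application can request any page. A sketch (the procedure name is an assumption):

```sql
-- Returns one page of patients ordered by surname;
-- callers pass the first and last row numbers of the page
CREATE PROCEDURE dbo.GetPatientPage
    @RowNumberFrom INT,
    @RowNumberTo INT
AS
BEGIN
    WITH PatientDetail AS
    (
        SELECT *, ROW_NUMBER() OVER (ORDER BY Surname) AS RowNumber
        FROM Patient
    )
    SELECT * FROM PatientDetail
    WHERE RowNumber BETWEEN @RowNumberFrom AND @RowNumberTo;
END
```

For example, EXEC dbo.GetPatientPage 11, 20 would return the second page of ten rows.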

Monday, 26 March 2007

World’s Fastest Optical chip

IBM announced on Monday an optical chipset that could allow you to download a complete high-definition movie in one second, rather than the 30 minutes or more that today's fastest chips allow.
Measured another way, the chipset can transmit the equivalent of four million simultaneous telephone conversations. The company said that the optical transceiver can move data up to eight times faster than the fastest existing optical chips -- up to 160 Gbps.
The optical transceiver chipset moves information as light signals, not as electrical signals, and IBM said it could be available by 2010.
The new optical chipset, only one-fifteenth the size of a dime, can be manufactured with high-volume techniques and so could result in low-cost products. The chips could be integrated into printed circuit boards for PCs or set-top boxes.

Thursday, 22 February 2007

SQL Server 2005 Service Pack 2 Features

Microsoft has recently released Service Pack 2 for SQL Server 2005. The service pack includes the following key enhancements:

  • Data Mining Add-ins for the 2007 Microsoft Office system enable data mining functionality from SQL Server Analysis Services (SSAS) to be used directly within Excel® 2007 and Visio® 2007.
  • SQL Server Reporting Services (SSRS) compatibility with Microsoft Office SharePoint Server 2007 provides integration with the Report Center in SharePoint, enabling the seamless consumption and management of SSRS reports within SharePoint.
  • SQL Server Analysis Services improvements for Excel 2007 and Excel Services relate to performance and functionality.
  • Data compression (varDecimal) is an important feature for data warehousing scenarios, requiring less disk storage of decimal data and increasing overall performance.
  • Manageability enhancements, based on customer feedback, provide management capabilities for database administrators such as improvements in database maintenance plans, enhanced management reports and a new copy database wizard.
  • Management reports added to SQL Server Express Edition enable customers to get insights into the performance of their Express Edition and SQL Server Compact Edition databases.
  • Interoperability improvements including Oracle support in the Report Builder feature enable customers to use its functionality on top of Oracle data sources. Customers also have access to SQL Server Reporting Services to build reports on top of Hyperion’s Essbase cubes.

Thursday, 11 January 2007

Changing Database Collation

The collation of an existing database can be changed using the ALTER DATABASE command provided in SQL Server. To change the collation settings:
  • Restrict the Database access to Single User Mode
  • Change the Database collation using the following syntax.

    ALTER DATABASE dbname COLLATE [Replace Actual Collation]
This only changes the collation of the database, not the collation of any existing database objects (tables). We need to explicitly change the collation of the character (VARCHAR / NVARCHAR) columns of each table. To change the collation settings for columns:
  • Script out (back up) all the indexes and constraints of tables with character columns in the database.
  • Drop all the indexes and constraints (primary keys, foreign keys, defaults, etc.).
  • Change the collation setting for the table columns: each character column needs an ALTER TABLE ... ALTER COLUMN ... COLLATE statement with the new collation.
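A query along these lines can generate those ALTER statements for every character column, appending CHAR(13) + 'GO' after each. This is a sketch built on INFORMATION_SCHEMA.COLUMNS; <new_collation_name> is a placeholder for the actual collation:

```sql
-- Sketch: emits one ALTER COLUMN statement per character column.
-- varchar(max)/nvarchar(max) columns (CHARACTER_MAXIMUM_LENGTH = -1)
-- would need 'MAX' spelled out instead of a numeric length.
SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']' +
       ' ALTER COLUMN [' + COLUMN_NAME + '] ' + DATA_TYPE +
       '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(10)) + ')' +
       ' COLLATE <new_collation_name>' +
       CASE IS_NULLABLE WHEN 'YES' THEN ' NULL' ELSE ' NOT NULL' END +
       CHAR(13) + 'GO'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar');
```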

Wednesday, 3 January 2007

SQL Server Locking Modes

SQL Server acquires different locking modes depending on the type of operation performed on the data. The list of locking modes acquired by SQL Server for each type of operation is given below.

Shared Lock

  • Acquired for reading data.
  • Other transactions can acquire shared lock on the same resources.
  • No other transaction can modify data.

Exclusive Lock

  • Acquired to modify data (INSERT, UPDATE, and DELETE).
  • No other transaction can modify or read data.

Update Lock

  • Acquired when a transaction intends to modify data but first needs to search the table for the rows to modify.
  • No other transaction can acquire an update lock or an exclusive lock.

Schema Lock

  • Modification lock - Acquired for DDL queries.
  • Stability lock – Acquired when compiling queries.

Bulk Update Lock

  • Acquired for performing bulk copy of data
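Most of these modes can also be requested explicitly with table hints when the default behavior is not what you want. A sketch against a hypothetical table:

```sql
-- Hold shared locks until the transaction completes
SELECT * FROM dbo.Patient WITH (HOLDLOCK);

-- Take update locks while searching rows that will then be modified,
-- preventing a deadlock-prone shared-to-exclusive conversion
SELECT * FROM dbo.Patient WITH (UPDLOCK) WHERE Surname = 'Smith';

-- Take an exclusive table-level lock up front
SELECT * FROM dbo.Patient WITH (TABLOCKX);
```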

Tuesday, 2 January 2007

SQL Server Locking Tips

SQL Server uses locking to support concurrent access to data in a multi-user environment. Locks can be applied to database resources such as rows, indexes, pages, extents, tables, or the database itself. SQL Server automatically escalates row, key, or page locks to table locks as appropriate to protect system resources and increase efficiency. Locking at a smaller granularity increases concurrency but creates higher overhead on the database; locking at a larger granularity reduces overhead but comes at the expense of concurrency. The best practice is to avoid lock escalation. The following tips can help minimize locking:

  • Keep all Transact-SQL transactions as short as possible.
  • Avoid interleaving reads and database changes in the same transaction.
  • Do all conditional logic and variable assignment outside of a transaction.
  • Encapsulate all transactions within stored procedures.
  • Avoid Transact-SQL statements inside the transaction that affect large numbers of rows at once.
  • Although WHILE loops inside transactions are perfectly legal, avoid using them there.
  • For lookup tables consider altering the default lock level for the table (Use SP_INDEXOPTION).
  • Do not create temporary tables from within a stored procedure that is invoked by the INSERT INTO #temp EXECUTE statement.
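The lookup-table tip above can be sketched with sp_indexoption, which disables row and page locks on a table's indexes so SQL Server locks the small, read-mostly table at table level instead (the table name is hypothetical):

```sql
-- Disallow row and page locks on all indexes of a lookup table,
-- so a single table lock is taken instead of many fine-grained ones
EXEC sp_indexoption 'dbo.CountryLookup', 'disallowrowlocks', TRUE;
EXEC sp_indexoption 'dbo.CountryLookup', 'disallowpagelocks', TRUE;
```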