Microsoft SQL Server 2012 Performance Tuning: Implementing Physical Database Structure

Microsoft SQL Server 2012 Performance Tuning Cookbook

80 recipes to help you tune SQL Server 2012 and achieve optimal performance with this book and ebook

by Bihag Thaker and Ritesh Shah | July 2012

Your database performance depends heavily on how you have physically placed your database objects and how you have configured your disk subsystem. Designing the physical layout of your database correctly is a key factor in improving the performance of your database queries, and thus of your database as a whole. However, the correct decision on the physical design structure of the database depends on the hardware resources available to you. This includes the number of processors, the amount of RAM, and storage resources, such as how many disks or RAID controllers you might have in your database environment. The best practice while designing the physical layout of a database is to have multiple physical disks for your database. If you configure your database in such a way that it spreads across multiple disks, it can benefit from parallel I/O operations.

The following are some of the decisions that influence your database performance:

  • Where do you place data files?
  • Where do you place log files?
  • Where do you place large objects?
  • Where do you place indexes?
  • Where do you place the tempdb database?

You can control the physical location of database objects within the database by using files and filegroups.

In this article by Ritesh Shah and Bihag Thaker, co-authors of Microsoft SQL Server 2012 Performance Tuning Cookbook, we will learn how to best design the physical structure of the database on your disk subsystem when you have enough available hardware resources, such as multiple processors and multiple physical disks.

In this article we will cover:

  • Configuring a data file and log file on multiple physical disks
  • Using files and filegroups
  • Moving an existing large table to a separate physical disk
  • Moving non-clustered indexes to a separate physical disk
  • Configuring the tempdb database on a separate physical disk

Configuring data file and log file on multiple physical disks

If you know the exact difference between the ways in which data files and log files of a database are accessed, you can understand why you should place data files and log files on separate physical disks for better performance.

The data file of a database, normally a file with an .mdf or .ndf extension, is used to store the actual data of the database. The data is stored in pages that are 8 KB in size. When particular data is queried by the user, SQL Server reads the required data pages from the data file on disk into memory. In case SQL Server needs to make any modification to the existing data, it reads the required data pages into the buffer cache, updates those cached data pages in memory, writes the modifications to the log file when the transaction is committed, and then writes the updated data pages back to the disk when the checkpoint operation is performed. SQL Server performs configurable checkpoint operations at regular intervals. Modified in-memory data pages are called dirty pages. When a checkpoint is performed, it permanently writes these dirty pages to disk.
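If you want to observe this behavior yourself, the buffer cache can be inspected through the sys.dm_os_buffer_descriptors dynamic management view, whose is_modified column flags dirty pages. The following is a minimal sketch (it requires the VIEW SERVER STATE permission):

```sql
--Count clean vs. dirty (modified) data pages
--currently in the buffer cache, per database.
SELECT DB_NAME(database_id) AS DatabaseName
      ,is_modified AS IsDirtyPage
      ,COUNT(*) AS PageCount
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id, is_modified
ORDER BY DatabaseName, IsDirtyPage
GO

--A manual CHECKPOINT flushes the dirty pages
--of the current database to disk.
CHECKPOINT
GO
```

Running the SELECT again after the CHECKPOINT should show fewer dirty pages for the current database.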

The log file is used to record any change that is made to the database. It's intended for recovery of the database in case of disaster or failure. Because a log file is intended to record the changes, it is not designed to be read randomly, as compared to a data file. Rather, it is designed to be written and accessed in a sequential manner.

SQL Server is designed to handle and process multiple I/O requests simultaneously, if we have enough hardware resources. But even though SQL Server is capable of handling simultaneous I/O requests in parallel, it may face disk contention when two different requests read a large amount of data from the data files and write a large number of transaction log records to the log files in parallel, if the data files and log files reside on the same physical disk. However, if the data files and log files are located on separate physical disks, SQL Server gracefully handles and processes such requests in parallel.

When simultaneous requests for reading data and writing transaction logs are commonly expected in the OLTP database environment, placing data files and log files on separate physical drives greatly improves the performance of the database.
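When you are creating a database from scratch, this separation can be specified up front in the CREATE DATABASE statement. The following is a minimal sketch; the SalesDB name, sizes, and the E:\ and L:\ paths are hypothetical placeholders:

```sql
--Hypothetical example: create a new database with its data file
--on the E drive and its log file on a separate physical disk (L drive).
CREATE DATABASE SalesDB
ON PRIMARY
(
    Name = SalesDB_Data
   ,FileName = 'E:\SQL_Data\SalesDB_Data.mdf'
   ,SIZE = 256MB
   ,FILEGROWTH = 128MB
)
LOG ON
(
    Name = SalesDB_Log
   ,FileName = 'L:\SQL_Log\SalesDB_Log.ldf'
   ,SIZE = 128MB
   ,FILEGROWTH = 64MB
)
GO
```

The rest of this recipe deals with the harder case: a database that already exists in the wrong location.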

Let's suppose that you are a DBA and, in your organization, you maintain and administer a production database called AdventureWorks2012. The database was created/installed by an inexperienced team and has been residing in the default location for SQL Server. You are required to separate the data files and log files for this database and place them on different physical disks to achieve maximum I/O performance. How would you perform this task?

The goal of this recipe is to teach you how to separate the data files and log files for an existing database to improve the I/O response time and database performance.

Getting ready

This recipe refers to the following physical disk volumes:

  • E drive—to store the data file
  • L drive—to store the log file

In this article, wherever it says "separate disk volume" or "separate drive", consider it a separate physical drive and not a logically partitioned drive.

The following are the prerequisites for completing this recipe:

  • An instance of SQL Server 2012 Developer or Enterprise Evaluation edition.
  • Sample AdventureWorks2012 database on the instance of SQL Server. For more details on how to install the AdventureWorks2012 database, please refer to the Preface of this book.
  • E drive should be available on your machine.
  • L drive should be available on your machine.

How to do it...

The following are the steps you need to perform for this recipe:

  1. Start SQL Server Management Studio and connect to SQL Server.
  2. In the query window, type and execute the following script to verify the existing path for data files and log files for the AdventureWorks2012 database:
  3. --Switch the current database
     --context to AdventureWorks2012
     USE AdventureWorks2012
     GO

     --Examine the current
     --location of the database.
     SELECT physical_name
     FROM sys.database_files
     GO

  4. Assuming that the AdventureWorks2012 database resides in its default location, depending upon your SQL Server installation path, you may see a result in the output of the previous query, similar to the one given here:
  5. Now, execute the following query to bring the database offline:
  6. USE master
     GO

     --Bring database offline
     ALTER DATABASE AdventureWorks2012
     SET OFFLINE WITH ROLLBACK IMMEDIATE
     GO

  7. Once the database is offline, you can detach it without any problem. In Object Explorer, right-click on AdventureWorks2012, select Tasks, and then Detach…, as shown in the following screenshot:
  8. This step brings up the Detach Database dialog box, as shown in the following screenshot. Press the OK button on this dialog box. This will detach the AdventureWorks2012 database from the SQL Server instance and it will no longer appear in Object Explorer:
  9. Create the following two directories to place data files (.mdf files) and log files (.ldf files), respectively, for the AdventureWorks2012 database, on different physical disks:
    • E:\SQL_Data\
    • L:\SQL_Log\
  10. Now, using Windows Explorer, move the AdventureWorks2012_data.mdf and AdventureWorks2012_log.ldf database files manually from their original location to their respective new directories. The following paths should be the respective destinations:
    • E:\SQL_Data\AdventureWorks2012_Data.mdf
    • L:\SQL_Log\AdventureWorks2012_Log.ldf
  11. After the data and log files are copied to their new locations, we will attach them and bring our AdventureWorks2012 database back online. To do this, in Object Explorer, right-click on the Databases node and select Attach….
  12. You will see the following Attach Databases dialog box. In this dialog box, click on the Add… button:
  13. The previous step opens the Locate Database Files dialog box. In this dialog box, locate the .mdf data file E:\SQL_Data\AdventureWorks2012_Data.mdf and click on the OK button, as shown in the following screenshot:
  14. After locating the .mdf data file, the Attach Databases dialog box should look similar to the following screenshot. Note that the log file (.ldf file) could not be located at this stage and there is a Not Found message against AdventureWorks2012_log.ldf, under the AdventureWorks2012 database details section. This happens because we have moved the log file to our new location, L:\SQL_Log\, and SQL Server tries to find it in its default location:
  15. To locate the log file, click on the browse (…) button in the Current File Path column for the AdventureWorks2012_log.ldf log file. This will bring up the Locate Database Files dialog box. Locate the file L:\SQL_Log\AdventureWorks2012_log.ldf and click on the OK button. Refer to the following screenshot:
  16. To verify the new location of the AdventureWorks2012 database, run the following query in SSMS:
  17. --Switch the current database
      --context to AdventureWorks2012
      USE AdventureWorks2012
      GO

      --Verify the new location of
      --the database.
      SELECT physical_name
           ,name
      FROM sys.database_files
      GO

  18. In the query result, examine the new locations of the data files and log files for the AdventureWorks2012 database; see the following screenshot:

How it works...

In this recipe, we first queried the sys.database_files system catalog view to verify the current location of the AdventureWorks2012 database. Because we wanted to move the .mdf and .ldf files to new locations, we had to bring the database offline.

We brought the database offline with the ALTER DATABASE command. Note that, in the ALTER DATABASE command, we included the ROLLBACK IMMEDIATE option. This rolls back any transactions that are not yet completed, and current connections to the AdventureWorks2012 database are closed. After bringing the database offline, we detached the AdventureWorks2012 database from the instance of SQL Server.

You cannot move a database file to a new location if the database is online. If a database is to be moved, it must not be in use by SQL Server. In order to move a database, you can either stop the SQL Server service or bring the database offline. Bringing the database offline is a preferable option because stopping SQL Server service stops the functioning of the whole SQL Server instance. Alternatively, you can also select the checkbox Drop Connections in the Detach Database dialog box, which does not require bringing a database offline.

We then created two new directories—E:\SQL_Data\ and L:\SQL_Log\—to place the data and log files for AdventureWorks2012 and moved AdventureWorks2012_Data.mdf and AdventureWorks2012_Log.ldf over there. We then attached the AdventureWorks2012 database by attaching the .mdf and .ldf files from their new locations. Finally, we verified the new location of the database by querying sys.database_files.

You can script your Attach Database and Detach Database actions by clicking on the Script button in the wizard. This allows you to save and re-use the script for future purposes.
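As a sketch, a scripted equivalent of the detach and attach steps in this recipe could look like the following (using the paths from this recipe; the database must already be offline or have no active connections):

```sql
USE master
GO

--Detach the database (the scripted equivalent
--of the Detach Database dialog box).
EXEC sp_detach_db @dbname = N'AdventureWorks2012'
GO

--After moving the files with Windows Explorer, attach them
--from their new locations (the equivalent of the Attach
--Databases dialog box).
CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'E:\SQL_Data\AdventureWorks2012_Data.mdf')
  ,(FILENAME = 'L:\SQL_Log\AdventureWorks2012_Log.ldf')
FOR ATTACH
GO
```

The file move itself still has to happen outside SQL Server, between the detach and the attach.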


Using files and filegroups

By placing certain database objects on different physical disks, you can improve the performance of your databases. But, how do we control the placement of certain database objects on particular physical disks?

Well, files and filegroups are used in SQL Server to physically organize your database files and database objects. By organizing data files with the help of filegroups, you can place specific database objects, such as tables and indexes, on particular physical disks.

Let's say that you are responsible for creating and designing a new production database, which will be accessed by many applications. You expect one particular table to grow very large with time. You are lucky enough to have hardware resources in the form of multiple physical disks that you can use to distribute your database physically. You decide to place the table that you estimate will become very large, and that you expect will be heavily accessed by many requests, on its own physical disk, so that you can achieve the maximum disk I/O performance for requests made on this table. How would you configure this implementation? Well, to find out, follow this recipe!

Getting ready

In this recipe, you will learn how to configure databases for large objects. To do this, we will create a sample database that we will configure with multiple data files and filegroups. This recipe requires that you have at least three physical drives available, as this example references the following physical disk volumes:

  • E drive—for primary data file (primary filegroup)
  • G drive—for secondary data file (fg_LargeData filegroup)
  • L drive—for log file

The following are the prerequisites to completing this recipe:

  • An instance of SQL Server 2012 Developer or Enterprise Evaluation edition
  • E drive should be available on your machine
  • G drive should be available on your machine
  • L drive should be available on your machine

How to do it...

The following steps will describe how to work with files and filegroups, to distribute your database across multiple physical disk drives:

  1. Start SQL Server Management Studio and connect to SQL Server.
  2. In the query window, type and execute the following query to create a new sample database named SampleDB:
  3. --Creating Sample Database
     CREATE DATABASE SampleDB
     ON PRIMARY
     (
         Name = SampleDB_Data
        ,FileName = 'E:\SQL_Data\SampleDB_Data.mdf'
        ,SIZE = 256MB
        ,FILEGROWTH = 128MB
        ,MAXSIZE = 1024GB
     )
     ,FILEGROUP fg_LargeData
     (
         Name = SampleDB_fg_LargeData_Data
        ,FileName = 'G:\SQL_LargeData\SampleDB_fg_LargeData_Data.ndf'
        ,SIZE = 256MB
        ,FILEGROWTH = 128MB
        ,MAXSIZE = 1024GB
     )
     LOG ON
     (
         Name = SampleDB_Log
        ,FileName = 'L:\SQL_Log\SampleDB_Log.ldf'
        ,SIZE = 128MB
        ,FILEGROWTH = 64MB
        ,MAXSIZE = 128GB
     )
     GO

  4. Run the following script to create a new sample table named tbl_SmallTable, on the default filegroup, which happens to be the primary filegroup in our case:
  5. USE SampleDB
     GO

     --Creating table tbl_SmallTable
     CREATE TABLE tbl_SmallTable
     (
         ID INT IDENTITY(1,1) PRIMARY KEY
        ,ObjectID INT
        ,ColumnID INT
        ,ColumnName sysname
     )
     GO

     --Inserting sample data into tbl_SmallTable
     INSERT INTO tbl_SmallTable
     SELECT object_id
          ,column_id
          ,name
     FROM sys.all_columns AS AC1
     GO

  6. Execute the following script to create another sample table named tbl_LargeTable, on the filegroup fg_LargeData, which we anticipate will become very large:
  7. USE SampleDB
     GO

     --Creating table tbl_LargeTable
     CREATE TABLE tbl_LargeTable
     (
         ID INT IDENTITY(1,1) PRIMARY KEY
        ,ObjectID INT
        ,ColumnID INT
        ,ColumnName sysname
     ) ON [fg_LargeData]
     GO

     --Inserting sample data into tbl_LargeTable
     INSERT INTO tbl_LargeTable
     SELECT AC1.object_id
          ,AC1.column_id
          ,AC1.name
     FROM sys.all_columns AS AC1
     CROSS JOIN sys.all_columns AS AC2
     GO

  8. Now, to verify the location of both the tables created with the previous script, execute the following query:
  9. USE SampleDB
     GO

     --Verifying the location of the tables.
     SELECT OBJECT_NAME(I.object_id) AS TableName
          ,FG.name AS FileGroupName
          ,DF.physical_name AS DataFilePath
     FROM sys.indexes AS I
     INNER JOIN sys.tables AS T
         ON I.object_id = T.object_id
     INNER JOIN sys.filegroups AS FG
         ON I.data_space_id = FG.data_space_id
     INNER JOIN sys.database_files AS DF
         ON I.data_space_id = DF.data_space_id
     WHERE I.index_id <= 1
     GO

  10. You will observe that, as specified in our database definition, the two tables are created on different disks. Note the filegroup and the physical path of the database file where the table data for both the tables will be stored, in the following output of the previous query:

How it works...

We created a SampleDB database with a CREATE DATABASE statement. We created the database definition such that its primary data file is stored in E:\SQL_Data\ and its log file is stored in L:\SQL_Log\. In the database definition, we also specified a new filegroup called fg_LargeData and added a secondary data file named SampleDB_fg_LargeData_Data.ndf to this filegroup.

Remember that a log file is never associated with a filegroup. In other words, you cannot specify a filegroup name in a log file definition.

In the provided script, we created two tables. Table tbl_SmallTable will be created on the primary filegroup, because we did not specify any filegroup in its table definition and the primary filegroup is the default filegroup in our case. We specified the fg_LargeData filegroup with the ON clause in the table definition of tbl_LargeTable, so that it gets stored on its separate physical drive at G:\SQL_LargeData.

If we do not specify the target filegroup name while creating an object, it is always created in the default filegroup. By default, the primary filegroup is the default filegroup. A user-defined filegroup can be set as the default filegroup with the ALTER DATABASE … MODIFY FILEGROUP statement.
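For example, making the fg_LargeData filegroup from this recipe the default could be sketched as follows; after this statement, new objects created without an ON clause would land on the G drive:

```sql
--Make the user-defined filegroup fg_LargeData the default
--filegroup for SampleDB, so that new objects are created
--there unless an ON clause says otherwise.
ALTER DATABASE SampleDB
MODIFY FILEGROUP fg_LargeData DEFAULT
GO
```

Only one filegroup can be the default at any time; running this does not move objects that already exist on the primary filegroup.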

We then verify the location of our tables by executing a query that makes use of several joins. We fetch details from sys.indexes, sys.tables, sys.filegroups, and sys.database_files. We join sys.indexes and sys.tables on the object_id column. data_space_id is the ID of the filegroup in sys.indexes, sys.filegroups, and sys.database_files; we then join these system views based on data_space_id. We retrieve the table name by using the OBJECT_NAME() function, the name of the filegroup from the sys.filegroups system view, and the path of the data files from the sys.database_files system view. Remember that, for a heap table, the value of index_id is always 0, and the index_id of a clustered index is always 1. A clustered index means that the data of the table itself is attached to it. This is the reason why we have specified the condition I.index_id <= 1, so that we get details only for a clustered table or a heap table; any non-clustered index entries are filtered out. Note that, as we have not partitioned the table, we have not included the sys.partitions system catalog view in our query. If the table were partitioned and you wanted to return a list of locations for each partition of the table, you would also need to join sys.partitions in the query, to retrieve the partition-specific location information.

Moving an existing large table to a separate physical disk

In the previous recipe, Using files and filegroups, we saw that we can create a filegroup and, by using that filegroup, place a table that is expected to become large on a different physical disk.

But what if there is already an existing large table in an existing database that is extensively used by queries? Let's say that you are responsible for the AdventureWorks2012 database in your production environment, and there is one large table named Sales.SalesOrderDetail, which is located on the primary filegroup. You observe that the table is very large, that I/O operations involving a large volume of data on this table take longer to complete and cause blocking issues, and that other transactions have to wait for I/O operations on the same resources, resulting in bad I/O response times. You realize that this large table (Sales.SalesOrderDetail), containing billions of rows, needs to be moved onto a dedicated physical disk to improve the I/O response time. How would you achieve this task of moving a large table to another disk?

In this recipe, you will learn how to move an existing large table to a different physical disk.

Placing two large tables that are frequently used together in join queries on two different physical disks can also help improve performance, by allowing SQL Server to perform parallel read operations on the two joined tables.

Getting ready

This recipe refers to the F drive to place the Sales.SalesOrderDetail data.

The following are the prerequisites to completing this recipe:

  • An instance of SQL Server 2012 Developer or Enterprise Evaluation edition.
  • A sample AdventureWorks2012 database on the instance of SQL server. For more details on how to install AdventureWorks2012 database, please refer to the Preface of this book.
  • F drive should be available on your machine.

How to do it...

The following are the steps that will describe how to move a large table to a different physical disk:

  1. Start SQL Server Management Studio and connect to SQL Server.
  2. In the query window, type and execute the following T-SQL commands to add a new filegroup and data file to the AdventureWorks2012 database:
  3. --Adding new filegroup
     --named fg_SalesOrderDetails
     ALTER DATABASE AdventureWorks2012
     ADD FILEGROUP fg_SalesOrderDetails
     GO

     --Adding new data file to new
     --filegroup fg_SalesOrderDetails
     ALTER DATABASE AdventureWorks2012
     ADD FILE
     (
         Name = fg_SalesOrderDetails_Data
        ,FileName = 'F:\SalesOrderDetails_Data\fg_SalesOrderDetails_Data.ndf'
        ,SIZE = 512MB
        ,FILEGROWTH = 128MB
        ,MAXSIZE = 512GB
     )
     TO FILEGROUP [fg_SalesOrderDetails]
     GO

  4. Verify the current location of the table Sales.SalesOrderDetail, by executing the following query:

     USE AdventureWorks2012
     GO

     --Verifying the current physical location
     --of table Sales.SalesOrderDetail
     SELECT OBJECT_NAME(I.object_id) AS TableName
          ,FG.name AS FileGroupName
          ,DF.physical_name AS DataFilePath
     FROM sys.indexes AS I
     INNER JOIN sys.tables AS T
         ON I.object_id = T.object_id
     INNER JOIN sys.filegroups AS FG
         ON I.data_space_id = FG.data_space_id
     INNER JOIN sys.database_files AS DF
         ON I.data_space_id = DF.data_space_id
     WHERE I.index_id <= 1
         AND I.object_id = OBJECT_ID('Sales.SalesOrderDetail')
     GO

     If AdventureWorks2012 was created at its default location, then depending upon your SQL Server installation path, you should get output similar to that shown in the following screenshot:

  5. Run the following script, which will drop the existing clustered index (clustered primary key) and create it on a new filegroup, fg_SalesOrderDetails:
  6. USE AdventureWorks2012
     GO

     --Dropping existing clustered primary key
     --constraint (Clustered Index) from the table.
     ALTER TABLE [Sales].[SalesOrderDetail]
     DROP CONSTRAINT [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]
     GO

     --Adding clustered primary key constraint
     --(Clustered Index) on filegroup fg_SalesOrderDetails
     ALTER TABLE [Sales].[SalesOrderDetail]
     ADD CONSTRAINT [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]
     PRIMARY KEY CLUSTERED
     (
         [SalesOrderID] ASC,
         [SalesOrderDetailID] ASC
     )
     ON [fg_SalesOrderDetails]
     GO

  7. Now, verify the new location of the table Sales.SalesOrderDetail, by running the following query:
  8. USE AdventureWorks2012
     GO

     --Verifying new physical location
     --of table Sales.SalesOrderDetail
     SELECT OBJECT_NAME(I.object_id) AS TableName
          ,FG.name AS FileGroupName
          ,DF.physical_name AS DataFilePath
     FROM sys.indexes AS I
     INNER JOIN sys.tables AS T
         ON I.object_id = T.object_id
     INNER JOIN sys.filegroups AS FG
         ON I.data_space_id = FG.data_space_id
     INNER JOIN sys.database_files AS DF
         ON I.data_space_id = DF.data_space_id
     WHERE I.index_id <= 1
         AND I.object_id = OBJECT_ID('Sales.SalesOrderDetail')
     GO

The following screenshot is the output of the previous query after moving the Sales.SalesOrderDetail table to its new location:

How it works...

In order to place Sales.SalesOrderDetail on a separate physical disk, we created a new filegroup, fg_SalesOrderDetails, in the AdventureWorks2012 database and added a new data file, fg_SalesOrderDetails_Data.ndf, to this filegroup. After creating the new filegroup and data file, we observed the current location of the Sales.SalesOrderDetail table with a query similar to what we had used in the previous recipe.

We then executed a script that drops the existing clustered index from the Sales.SalesOrderDetail table with an ALTER TABLE statement. Because it is a clustered primary key, we needed to drop the clustered primary key constraint instead of dropping the index directly, as dropping the clustered primary key constraint automatically drops its associated clustered index. After dropping the clustered primary key constraint, we recreated it with the ON [fg_SalesOrderDetails] filegroup option, by using ALTER TABLE. Creating the clustered index on the fg_SalesOrderDetails filegroup moves all data pages of the Sales.SalesOrderDetail table to the F drive.
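As an alternative worth knowing (not the method used in this recipe), CREATE INDEX with the DROP_EXISTING = ON option can rebuild the index that enforces a primary key constraint directly on the new filegroup, in a single statement and without dropping the constraint. A sketch for the same table:

```sql
--Alternative: rebuild the clustered index that backs the primary
--key constraint directly on the new filegroup. The constraint is
--preserved because the index definition itself is unchanged.
CREATE UNIQUE CLUSTERED INDEX
    [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]
ON [Sales].[SalesOrderDetail]
(
    [SalesOrderID] ASC,
    [SalesOrderDetailID] ASC
)
WITH (DROP_EXISTING = ON)
ON [fg_SalesOrderDetails]
GO
```

This avoids the window in which the table briefly exists as a heap without its primary key.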

Finally, we executed the query to verify the new location of Sales.SalesOrderDetail.


Moving non-clustered indexes to a separate physical disk

If you have a few large tables, and some non-clustered indexes on these tables that are frequently used in queries, you can consider placing those non-clustered indexes on a separate physical drive. With non-clustered indexes on a separate physical disk, SQL Server can perform bookmark lookups in parallel and can read data pages and index pages simultaneously. This parallelism improves the performance of queries.

In this recipe, we will move all non-clustered indexes of the table Sales.SalesOrderDetail (the table that we moved to the F drive in the previous recipe, Moving an existing large table to a separate physical disk) to a separate physical disk.

Getting ready

This example refers to the I drive to place non-clustered indexes.

The following are the prerequisites to completing this recipe:

  • An instance of SQL Server 2012 Developer or Enterprise Evaluation edition.
  • Sample AdventureWorks2012 database on the instance of SQL server. For more details on how to install AdventureWorks2012, please refer to the Preface of this book.
  • I drive should be available on your machine.

How to do it...

The following are the steps that will describe how to move non-clustered indexes to a separate physical disk:

  1. Start SQL Server Management Studio and connect to SQL Server.
  2. In the query window, type and execute the following T-SQL commands to add a new filegroup fg_Indexes, and data file fg_Indexes_Data.ndf, to the AdventureWorks2012 database:
  3. --Adding new filegroup
     --named fg_Indexes
     ALTER DATABASE AdventureWorks2012
     ADD FILEGROUP fg_Indexes
     GO

     --Adding new data file to new
     --filegroup fg_Indexes
     ALTER DATABASE AdventureWorks2012
     ADD FILE
     (
         Name = fg_Indexes_Data
        ,FileName = 'I:\SQLIndex_Data\fg_Indexes_Data.ndf'
        ,SIZE = 512MB
        ,FILEGROWTH = 128MB
        ,MAXSIZE = 256GB
     )
     TO FILEGROUP [fg_Indexes]
     GO

  4. Execute the following query to examine the current location of non-clustered indexes:
  5. USE AdventureWorks2012
     GO

     --Verifying the current physical location of
     --nonclustered indexes on table Sales.SalesOrderDetail
     SELECT OBJECT_NAME(I.object_id) AS TableName
          ,I.name AS IndexName
          ,FG.name AS FileGroupName
          ,DF.physical_name AS DataFilePath
     FROM sys.indexes AS I
     INNER JOIN sys.tables AS T
         ON I.object_id = T.object_id
     INNER JOIN sys.filegroups AS FG
         ON I.data_space_id = FG.data_space_id
     INNER JOIN sys.database_files AS DF
         ON I.data_space_id = DF.data_space_id
     WHERE I.object_id = OBJECT_ID('Sales.SalesOrderDetail')
     GO

  6. If the AdventureWorks2012 database is installed at its default location, then depending upon your SQL Server installation path, you will see a result set similar to the one shown in the following screenshot:
  7. Now, the following query will drop and recreate the non-clustered indexes on a new filegroup, fg_indexes:
  8. USE AdventureWorks2012
     GO

     --Dropping and re-creating nonclustered
     --index on filegroup fg_Indexes
     CREATE NONCLUSTERED INDEX [IX_SalesOrderDetail_ProductID]
     ON [Sales].[SalesOrderDetail]
     (
         [ProductID] ASC
     )
     WITH (DROP_EXISTING = ON)
     ON [fg_Indexes]
     GO

     --Dropping and re-creating nonclustered
     --index on filegroup fg_Indexes
     CREATE UNIQUE NONCLUSTERED INDEX [AK_SalesOrderDetail_rowguid]
     ON [Sales].[SalesOrderDetail]
     (
         [rowguid] ASC
     )
     WITH (DROP_EXISTING = ON)
     ON [fg_Indexes]
     GO

  9. To verify the new location of indexes, run the following query:
  10. USE AdventureWorks2012
      GO

      --Verifying the new physical location of
      --nonclustered indexes on table Sales.SalesOrderDetail
      SELECT OBJECT_NAME(I.object_id) AS TableName
           ,I.name AS IndexName
           ,FG.name AS FileGroupName
           ,DF.physical_name AS DataFilePath
      FROM sys.indexes AS I
      INNER JOIN sys.tables AS T
          ON I.object_id = T.object_id
      INNER JOIN sys.filegroups AS FG
          ON I.data_space_id = FG.data_space_id
      INNER JOIN sys.database_files AS DF
          ON I.data_space_id = DF.data_space_id
      WHERE I.object_id = OBJECT_ID('Sales.SalesOrderDetail')
      GO

The following is the screenshot of the result after moving the non-clustered indexes of the table Sales.SalesOrderDetail:

How it works...

In order to place the non-clustered indexes on a separate physical disk, we created a new filegroup, fg_Indexes, in the AdventureWorks2012 database and added a new data file, fg_Indexes_Data.ndf, to this filegroup. After creating the new filegroup and data file, we observed the current location of all indexes defined on the Sales.SalesOrderDetail table with a query similar to what we had used in the previous recipe. However, we did not put any filter on index_id this time, as we wanted to return rows for every index.

We then executed a script that drops the existing non-clustered indexes, IX_SalesOrderDetail_ProductID and AK_SalesOrderDetail_rowguid, and recreates them on the Sales.SalesOrderDetail table with the CREATE INDEX statement. Note the inclusion of the index option DROP_EXISTING = ON. This drops the existing index with the same name before creating the new one. Also note that we created the two non-clustered indexes, IX_SalesOrderDetail_ProductID and AK_SalesOrderDetail_rowguid, with the ON [fg_Indexes] filegroup option. This places the index data in our new location.

Finally, we executed the query to verify the new location of all indexes defined on the Sales.SalesOrderDetail table.

Configuring the tempdb database on separate physical disk

The tempdb database is one of the system databases of SQL Server that is essential for its normal functioning. SQL Server relies on the tempdb database to perform many of its operations and stores internal objects in this database. The following are some of the operations for which the tempdb database is used by SQL Server:

  • Performing grouping or sorting operations in queries
  • Cursor operations
  • Version store operation
  • Online index creation
  • Storing intermediate results in worktables
  • Storing user objects, such as local or global temporary tables and table variable data

The tempdb database is shared by all the databases and applications on a SQL Server instance. Therefore, if many database applications use tempdb extensively, the performance of the tempdb database becomes crucial for the overall performance of the SQL Server instance. If tempdb resides on the same disk that is also used by the other application databases, poor I/O response times are possible, depending upon the number of read/write operations being performed on the other databases and the usage of the tempdb database by various applications.

This is the reason why, on production database servers where the tempdb database is used extensively, the tempdb database is configured on a separate physical disk, to get the best I/O performance. It is also advisable to add more data files to the tempdb database.
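As a sketch of that last piece of advice, the following adds a second, equally sized data file to tempdb. The file name and the M:\ path are hypothetical; a commonly cited guideline is one tempdb data file per CPU core, up to about eight, all of the same size:

```sql
--Hypothetical example: add a second data file to tempdb
--to reduce allocation contention.
ALTER DATABASE tempdb
ADD FILE
(
    Name = tempdev2
   ,FileName = 'M:\TempDB_Data\tempdb2.ndf'
   ,SIZE = 256MB
   ,FILEGROWTH = 128MB
)
GO
```

Keeping all tempdb data files the same size lets SQL Server's proportional-fill algorithm spread allocations evenly across them.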

The goal of this recipe is to teach you how you can configure and move your tempdb database, so that its database files reside on their own dedicated physical drives. Configuring the tempdb database on its own separate physical drives reduces the I/O load on the disk where the application database resides. Placing the tempdb database on different disks also increases the chances of SQL Server performing parallel read/write operations. This improves the performance of the database server.

Getting ready

This example refers to the following two physical disk volumes as the new locations for the tempdb database files:

  • M drive—for data files (.mdf files) of the tempdb database
  • N drive—for log files (.ldf files) of the tempdb database

The following are the prerequisites to completing this recipe:

  • An instance of SQL Server 2012 Developer or Enterprise Evaluation edition.
  • Sample AdventureWorks2012 database on the instance of SQL Server. For more details on how to install the AdventureWorks2012 database, please refer to the Preface of this book.
  • M drive should be available on your machine.
  • N drive should be available on your machine.

How to do it...

The following steps describe how to move the tempdb database to a new location.

  1. Start SQL Server Management Studio and connect to SQL Server.
  2. In the query window, type and execute the following query to verify the current physical location of the tempdb database:

     --Switching database context to tempdb
     USE tempdb
     GO
     --Examining current physical location
     --of tempdb database
     SELECT name AS LogicalFileName
         ,physical_name AS PhysicalFilePath
     FROM sys.database_files
     GO

  3. If the tempdb database has been created at its default location and you have not changed it yet, you will see the physical location of the database files of the tempdb database; depending on your SQL Server installation path, it will be similar to the one shown in the following screenshot:
  4. Create the following directories on the specified disks to place the data files and log files of tempdb on separate physical drives:
    • M:\TempDB_Data\
    • N:\TempDB_Log\
  5. Once these directories are created, execute the following script to modify the location of the tempdb database for your SQL Server instance:

     USE master
     GO
     --Changing the location of data file
     --(.mdf file) of tempdb database.
     ALTER DATABASE tempdb
     MODIFY FILE
     (
         Name = tempdev
         ,FileName = 'M:\TempDB_Data\tempdb.mdf'
     )
     GO
     --Changing the location of log file
     --(.ldf file) of tempdb database.
     ALTER DATABASE tempdb
     MODIFY FILE
     (
         Name = templog
         ,FileName = 'N:\TempDB_Log\templog.ldf'
     )
     GO

  6. After executing the previous commands, you will need to restart the SQL Server service. For this, select SQL Server Configuration Manager from the Configuration Tools option in the Microsoft SQL Server 2012 program group in the Start menu.
  7. In SQL Server Configuration Manager, select SQL Server Services from the left pane. Right-click on the name of the SQL Server service for your SQL Server instance, and then select Restart to restart the service. Refer to the following screenshot for more details:

  Always use SQL Server Configuration Manager to restart the SQL Server service. Do not restart the SQL Server service directly from the Windows Services MMC.

  8. Once the SQL Server service is restarted, the tempdb database resides at its new specified location. To verify that the tempdb database is now at its new location, run the following query:

     --Switching database context to tempdb
     USE tempdb
     GO
     --Examining new physical location
     --of tempdb database
     SELECT name AS LogicalFileName
         ,physical_name AS PhysicalFilePath
     FROM sys.database_files
     GO

  9. After executing the previous query, you should see a result similar to the one shown in the following screenshot:

How it works...

We first retrieved the current location of the tempdb database by querying the sys.database_files system catalog view. We then altered the locations of the data file and log file of the tempdb database using the ALTER DATABASE command, so that after SQL Server restarts, the data file and log file are stored on separate physical drives, as M:\TempDB_Data\tempdb.mdf and N:\TempDB_Log\templog.ldf, respectively. We then restarted the SQL Server service through SQL Server Configuration Manager. Finally, we queried the sys.database_files system catalog view again to verify the change in location of the tempdb database.
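
As a small aside, the same verification can also be performed without switching the database context, by querying the server-wide sys.master_files catalog view; a minimal sketch:

```sql
--Verifying tempdb file locations from any database context
--by querying the server-wide sys.master_files catalog view.
SELECT name AS LogicalFileName
    ,physical_name AS PhysicalFilePath
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
GO
```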



About the Authors
Bihag Thaker

Bihag Thaker is an SQL Server enthusiast, an MCTS (SQL Server 2005), and an MCITP (SQL Server 2008) who has been working with SQL Server technology for the last few years. Initially he worked with .NET technology, but his keen interest in SQL Server led him to become a database specialist.

He is currently working as a database administrator. He has worked on numerous performance tuning assignments and executed large-scale database migrations. He likes to share his knowledge and enjoys helping the SQL Server community. You will find him talking about SQL Server on his blog, MsSQLBlog.com.

Ritesh Shah

Ritesh Shah is a data professional with more than 10 years of experience in Microsoft technologies, from SQL Server 2000 to the latest version, and from Visual Basic 6.0 to .NET Framework 4.0. He has deployed many medium-scale as well as large-scale projects using Microsoft technology.

He shares his knowledge on his blog, SQLHub.com, and also helps the community on portals such as BeyondRelational.com, Experts-Exchange.com, and the ASP.NET forums.
