Best Practices for Microsoft SQL Server 2008 R2 Administration


Microsoft SQL Server 2008 R2 Administration Cookbook

Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook


The reader would benefit by referring to the previous article, Managing the Core Database Engine, since the following recipes build on it.

Implementing Utility & Non-utility collection sets

The Utility information data collection set is installed and automatically started on each instance of SQL Server 2008 R2 when you set up a Utility Control Point (UCP), as we saw in the previous article. The data is stored in the UMDW database, which is created during UCP creation. The SQL Server utility collection set is supported side by side with non-utility collection sets, such as those managed by the Management Data Warehouse. In this recipe, we will go through the implementation tasks to set up the UCP data collection sets for the utility and non-utility categories.

SQL Server 2008 R2 introduces the Utility Control Point (UCP) with a set of pre-defined utility collection sets whose data is stored in the UMDW. Similarly, SQL Server 2008 monitors the CPU, disk, and memory resources of an instance using a Data Collector whose data is stored in the Management Data Warehouse (MDW). For this recipe, it is necessary to introduce the MDW feature, which stands as a non-utility collection set. The Management Data Warehouse is a relational database that stores the collected data; it can be on the same system as the data collector, or on another computer. An MDW collection set runs in one of the following collection and upload modes:

  • Non-cached mode: Data collection and upload are on the same schedule. The packages start, collect, and upload data at their configured frequency, and run until they are finished. After the packages finish, they are unloaded from memory.
  • Cached mode: Data collection and upload are on different schedules. The packages collect and cache data until they receive a signal to exit from a loop control-flow task. This ensures that the data flow can be executed repeatedly, which enables continuous data collection.
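As an illustrative check (a sketch, not part of the original recipe), the configured mode and state of each collection set can be inspected from the data collector catalog views in msdb:

```sql
-- List each data collection set with its collection mode and state.
-- collection_mode: 0 = cached, 1 = non-cached.
USE msdb;
SELECT name,
       collection_mode,
       is_running
FROM dbo.syscollector_collection_sets;
```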

Getting ready

The new feature of SQL Server 2008 R2, Utility Control Point (UCP), allows DBAs to set up and collect the utility collection sets. Once the instances are enrolled, the default capacity policies for utilization across the instances or applications are set. It is essential to check that you are using a SQL Server 2008 R2 instance to register the UCP, as the multi-server management feature requires it.

How to do it...

Using SQL Server Management Studio, these are the steps to implement the utility and non-utility data collection sets:

  1. To implement the utility data collection sets, connect to the Utility Explorer where the UCP is registered.
  2. Right-click on Managed Instances and choose Enroll instance (refer to the next screenshot).
  3. Specify the instance name of SQL Server to enroll.
  4. Specify the service account to run the utility collection set.

    To specify the account that collects data, you can choose the SQL Server Agent service account; however, as a security precaution, it is recommended to use a new or existing domain user account with only the required privileges.

  5. Review prerequisite validation results and selections.
  6. Enroll the instance.

  7. After completing the Enroll Instance wizard, click on the Managed Instances node in the Utility Explorer navigation pane.
  8. On the right-hand side of the Utility Explorer content pane, the enrolled SQL Server instances are displayed.
  9. Next, to implement the non-utility collection sets, from the SSMS tool, use the Configure Management Data Warehouse wizard to configure storage for collected data.
  10. Create the management data warehouse. You can install the management data warehouse on the same instance of SQL Server that runs the data collector for the utility collection set.
  11. Select the configuration task to install the predefined System Data collection sets.
  12. Configure the MDW storage by selecting the SQL Server instance to host and collect the non-utility collection sets.
  13. Map logins to management data warehouse roles.
  14. Once you have completed the MDW wizard, the data collection information for utility and non-utility collection sets are displayed under the Management folder, as shown in the next screenshot:


  15. Before we proceed to enable the data collection, it is essential to restart the non-utility collection sets so that their data is uploaded to the data collection warehouse.
  16. To upload and pass a validation of non-utility collection sets, execute the following TSQL from Query Editor:

    EXEC msdb.dbo.sp_syscollector_set_warehouse_database_name NULL;

    EXEC msdb.dbo.sp_syscollector_set_warehouse_instance_name NULL;

  17. Under the Management folder, right-click on Data Collection and choose Enable the data collector from SSMS, which is shown in the following screenshot:


  18. Once we have completed the MDW wizard, the data collection information will be stored in the data warehouse databases.

To ensure that both the utility and non-utility collection sets exist, review the Data Collection option in SSMS, as shown in the preceding screenshot. This completes a successful implementation of utility and non-utility collection sets on the same instance.
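If you prefer TSQL over SSMS for enabling the data collector in step 17, a minimal equivalent sketch uses the stored procedure provided in msdb:

```sql
-- Enable the data collector so that the collection sets can start.
USE msdb;
EXEC dbo.sp_syscollector_enable_collector;
```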

How it works...

The utility data collection sets are installed and automatically started on each instance of SQL Server 2008 R2 when it is configured using the Utility Control Point. The UMDW database is created on the instance where the UCP is configured, and the following collection set items are stored:

  • Utility Information—DAC Information
  • Utility Information—SMO Information
  • Utility Information—Utility Allocated CPU Info
  • Utility Information—Utility CPU-Memory Related Info
  • Utility Information—Utility Database Files Info
  • Utility Information—Utility Performance Counters Items
  • Utility Information—Utility Performance Counters Items1
  • Utility Information—Utility Volumes Information

The non-utility data collection sets are installed when the MDW wizard is completed, but they are not started until they are enabled. The required schemas and their objects for the pre-defined system collection sets are created when MDW is configured. The various UCP and MDW jobs are created under the SQL Server Agent | Jobs folder, as follows:

  • collection_set_1_noncached_collect_and_upload
  • collection_set_2_collection
  • collection_set_2_upload
  • collection_set_3_collection
  • collection_set_3_upload
  • collection_set_4_noncached_collect_and_upload
  • mdw_purge_data_[MDW]
  • sysutility_get_cache_tables_data_into_aggregate_tables_daily
  • sysutility_get_views_data_into_cache_tables
  • sysutility_mi_collect_performance
  • sysutility_get_cache_tables_data_into_aggregate_tables_hourly
  • syspolicy_purge_history
  • sysutility_mi_collect_and_upload
  • mdw_purge_data_[sysutility_mdw]

The core schema is prefixed with 'core'; it describes the tables, stored procedures, and views that are used to manage and identify the collected data. These objects are locked and can only be modified by the owner of the MDW database.

Managing the SQL Server utility and non-utility collection sets in parallel requires preparation on the instance where the UCP information is stored, and the best practice is to customize the data-collection frequency to avoid any overlap with the MDW data collection schedule. The data collection store contains server activity for all configured instances, covering operating system and SQL Server metrics such as percent CPU, memory usage, disk I/O usage, network usage, SQL Server waits, and SQL Server activity.
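One way to customize the data-collection frequency, as suggested above, is to move a non-utility collection set onto a different shared collector schedule. The following is a sketch; the collection set and schedule names shown are examples of the pre-defined ones:

```sql
-- Reassign the 'Server Activity' system collection set to a less
-- frequent shared schedule to avoid overlap with UCP uploads.
USE msdb;
EXEC dbo.sp_syscollector_update_collection_set
    @name = N'Server Activity',
    @schedule_name = N'CollectorSchedule_Every_30min';
```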

Designing and refreshing a Scalable Shared database

The Scalable Shared Database (SSD) feature in SQL Server 2008 R2 allows DBAs to scale out a read-only database (a reporting database), which is a copy of a production database built exclusively for reporting purposes. The SSD feature has been part of SQL Server since SQL Server 2005 Enterprise Edition, has been enhanced since SQL Server 2008, and is supported in the Enterprise and Datacenter editions only. To host this reporting database, the disk volumes must be dedicated and read-only, and the scalable shared database feature permits a smooth update process from the production database to the reporting database.

The internals behind building or refreshing a reporting database are known as the build phase or the refresh phase, depending on whether a new reporting database is being built or a stale one is being refreshed. The life cycle of a scalable shared database begins with building a reporting database on a set of reporting volumes; that reporting data eventually becomes too outdated to be useful, which means the stale database requires a data refresh as part of each update cycle. Refreshing a stale reporting database involves either updating its data or building a completely new, fresh version of the database. This recipe will cover how to design and refresh a reporting database that is intended for use as a scalable shared database.

Getting ready

Keeping the reporting database refreshed is a prerequisite as part of each update cycle. The key aspect of having an updated reporting database can be achieved by using the data-copy method, which requires the following:

  • Create or copy a database by designing an SSIS package that uses the Execute SQL Task method or the Transfer Database Task method.
  • From SSMS, use SQL Server Import and Export wizard to copy required objects for the reporting purpose.
  • Restore a backup of the production database into the reporting volume, which will involve a full database backup file to be used.
  • Essential components, such as the SAN storage hardware, processing environment, and data access environment, must be in place. The reporting database must have the same layout as the production database, so we need to use the same drive letter for the reporting volume and the same directory path for the database files.

Additionally, verify that the reporting servers and the associated reporting database are running on identical platforms.
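The restore-based data-copy method listed above can be sketched as follows; the file paths, logical file names, and reporting volume letter (R:) are assumptions for illustration only:

```sql
-- On the production server: take a full backup of the source database.
BACKUP DATABASE AdventureWorks2008R2
TO DISK = N'R:\Backup\AdventureWorks2008R2.bak'
WITH INIT;

-- Restore it onto the reporting volume, keeping the same directory
-- layout as the production database requires.
RESTORE DATABASE AdventureWorks2008R2
FROM DISK = N'R:\Backup\AdventureWorks2008R2.bak'
WITH MOVE N'AdventureWorks2008R2_Data'
         TO N'R:\Data\AdventureWorks2008R2.mdf',
     MOVE N'AdventureWorks2008R2_Log'
         TO N'R:\Data\AdventureWorks2008R2_log.ldf';
```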

How to do it...

To design and refresh a reporting database, you will need to complete the following steps on the production SQL Server instance:

  1. Unmask the Logical Unit Number (LUN) on the disks where the Production database is stored. (Refer to the hardware vendor's manual).
  2. Mount each reporting volume and mark it as read-write.
  3. Obtain the disk volume information. Log on remotely to the server and open a command prompt window to start the DiskPart utility:

    DiskPart

  4. At the DISKPART> prompt, list the available volumes to identify the reporting volumes:

    DISKPART> list volume

  5. Then mount each reporting volume and clear its read-only attribute by entering the following commands:

    DISKPART> select volume=<drive-number>

    DISKPART> assign letter=<drive-letter>

    DISKPART> attribute clear readonly

    DISKPART> exit

    The <drive-number> is the volume number assigned by the Windows operating system.
    The <drive-letter> is the letter assigned to the reporting volume.

  6. To ensure that data files are accessible and disks are correctly mounted, list the contents of the directory using the following command from the command prompt:

    DIR <drive-letter>:\<database directory>

  7. As we are refreshing an existing reporting database, attach the database to that server instance using SSMS and mark it read-write. In Query Editor, enter the following TSQL statement:

    ALTER DATABASE AdventureWorks2008R2 SET READ_WRITE;

  8. Refresh the data using one of the data-copy methods described earlier, and then detach the database from that server instance using the sp_detach_db stored procedure from Query Editor.
  9. Now, we have to mark each reporting volume as read-only and dismount from the server. Go to the command prompt window and enter the following commands:

    DISKPART> select volume=<drive-number>
    DISKPART> attribute set readonly
    DISKPART> remove
    DISKPART> exit

  10. To ensure that the reporting volume is read-only, you should attempt to create a file on the volume. This attempt must return an error.
  11. Next, go to the command prompt window and enter the following commands:

    DISKPART> select volume=<drive-number>
    DISKPART> assign letter = <drive letter>
    DISKPART> exit

    The <drive-letter> is the letter assigned to the reporting volume.

  12. Attach the database to one or more server instances on each of the reporting servers using the sp_attach_db stored procedure or the SSMS tool.
  13. Now, the reporting database is made available as a scalable shared database to process the queries from the application.

How it works...

Using the available hardware vendor-specific servers and disk volumes, the scalable shared database feature allows the application to scale out a read-only database built exclusively for reporting purposes.

The 'build' phase is the process of mounting the reporting volume on the production server and building the reporting database. After the reporting database is built on the volume, the data is updated using the defined data-copy methods. Once that is completed, each reporting volume is set to read-only and dismounted.

The 'attach' phase is the process of making the reporting database available as a scalable shared database. After the reporting database is built on a set of reporting volumes, the volumes are marked as read-only and mounted across multiple reporting servers. Each reporting server's instance then uses the attached reporting database.

There's more...

Best practice recommendations for the Scalable Shared Database feature:

  • There is no hardware-imposed limit on the number of server instances per database; however, for a shared database configuration, ensure that a maximum of eight servers per database are hosted.
  • The SQL Server instance collation and sort order must be similar across all the instances.
  • If the relational or reporting database is spread across the shared servers, ensure that you test and deploy either a synchronized update or a rolling update of the scalable shared database.
  • Also, scaling out this solution is possible in SQL Server 2008 Analysis Services with the Read-Only Database capability.



Managing SQL Server Replication processes

Whenever we talk about managing SQL Server replication processes, the best practices are to configure replication using scripts for better control, to use replication alongside other high-availability features, and to use Dynamic Management Views to obtain a glimpse of system resource usage. In this recipe, we will look at three processes that are important for managing the SQL Server replication environment effectively.

Scripting the replication process helps you retain the steps used to install replication in case you need to rebuild it during disaster recovery. Replication scripting can be achieved by using the replication wizard dialog boxes in SQL Server Management Studio, by using Replication Management Objects (RMO), or by generating the information with TSQL statements.

In this recipe, we will use TSQL methods to manage the SQL Server replication process. Using TSQL methods gives you more control over the underlying objects generated by the replication services.

How to do it...

To begin the process of managing the SQL Server replication processes, follow these steps:

  1. Assuming that you have set up the Distributor and Publisher servers, a script can be executed using Query Editor in SSMS.
  2. To capture relevant information about the publisher name, which is used in other scripts below, ensure that you use a variable:

    DECLARE @publisher_name sysname
    SET @publisher_name = (SELECT name FROM msdb..MSdistpublishers)
    SELECT @publisher_name;

  3. Drop the publisher, local distributor, and distribution database, by entering the following code:

    -- Are there other Publishers using this Distributor?
    EXEC sp_dropdistpublisher @publisher = @publisher_name, @no_checks = 0;
    DECLARE @distributiondb AS sysname;
    SET @distributiondb = (SELECT name FROM sys.sysdatabases
                           WHERE name = N'distribution');
    EXEC sp_dropdistributiondb @distributiondb;


  4. Now configure the distributor server and publisher server configuration using the following TSQL script:

    USE master
    /* Obtain the default instance name as the Distributor */
    DECLARE @distributor AS sysname = CONVERT(sysname, SERVERPROPERTY('ServerName'));
    /* Set the @distributorserver_msg variable */
    DECLARE @distributorserver_msg AS varchar(50) = 'The name of the Distributor server:';
    SELECT @distributorserver_msg + ' ' + @distributor;
    /* Add the Distributor */
    EXEC sp_adddistributor @distributor = @distributor;
    /* Install the distribution database in the default directory,
       using Windows authentication */
    DECLARE @distributiondb AS sysname = N'distribution';
    EXEC sp_adddistributiondb @database = @distributiondb, @security_mode = 1;

  5. Let us change the history retention period and maximum retention period:

    -- Change the history retention period to 24 hours and the
    -- maximum retention period to 48 hours.
    USE distribution
    EXEC sp_changedistributiondb @distributiondb, N'history_retention', 24
    EXEC sp_changedistributiondb @distributiondb, N'max_distretention', 48

  6. All the steps are using the script method process, which is similar to how we use a Replication wizard from SSMS. The representation of step 4 is shown in the following screenshot:

    Microsoft SQL Server 2008 R2 Administration tutorial

  7. Next, we must set the security_mode to configure the publisher server:

    DECLARE @distributor AS sysname = CONVERT(sysname, SERVERPROPERTY('ServerName'));
    DECLARE @publisher AS sysname;
    DECLARE @publisherserver_msg AS varchar(50) = 'Publisher server:';
    /* Set the Publisher server. Note that both the publisher and
       distributor will reside on the same server */
    SET @publisher = @distributor;
    SELECT @publisherserver_msg + ' ' + @publisher;

    USE distribution

    DECLARE @distributiondb AS sysname = N'distribution';
    EXEC sp_adddistpublisher @publisher, @security_mode = 1,
        @distribution_db = @distributiondb, @publisher_type = N'MSSQLSERVER';

  8. Let us change the history retention period and maximum retention period:

    -- Change the history retention period to 24 hours and the
    -- maximum retention period to 48 hours.
    USE distribution
    EXEC sp_changedistributiondb @distributiondb, N'history_retention', 24
    EXEC sp_changedistributiondb @distributiondb, N'max_distretention', 48

  9. From SSMS, navigate to the Replication folder and right-click to choose Launch replication monitor.
  10. Replication Monitor displays publisher groups, publishers, and publications. In order to view such data, we should add a publisher, as shown in the following screenshot:

  11. The ability to monitor the replication topology is restricted to members of the sysadmin fixed server role, in addition to members of the db_owner fixed database role.
  12. Let us add a user to the replmonitor fixed database role on the distributor, from Query Editor by executing the following TSQL statements:

    USE [distribution]
    CREATE USER [user_name] FOR LOGIN [user_name]
    EXEC sp_addrolemember N'db_datareader', N'user_name'
    EXEC sp_addrolemember N'db_denydatawriter', N'user_name'
    EXEC sp_addrolemember N'replmonitor', N'user_name'
    ALTER AUTHORIZATION ON SCHEMA::[db_owner] TO [user_name]
    EXEC sp_addrolemember N'db_owner', N'user_name'

  13. The publisher added by the previous TSQL statements is now visible in the left pane of Replication Monitor (see the next screenshot):

  14. The Replication Monitor displays information about the publisher on three tabs: Publications, Subscription Watch List, and Agents.
  15. The Publications tab provides summary information for all publications at the publisher. The Subscription Watch List displays subscriptions from all publications available at the selected publisher.
  16. In addition to Replication Monitor, the TSQL method also provides DMVs for obtaining information on replication process latency. The useful DMVs for managing replication process information are as follows:
    • sys.dm_repl_articles
    • sys.dm_repl_schemas
    • sys.dm_repl_traninfo
    • sys.dm_repl_tranhash
  17. Finally, the Agents tab delivers the status of each replication agent and job used by all types of replication, and additionally allows you to start and stop each agent and job.

This completes the set of required steps to manage the SQL Server replication process using the scripting feature in SSMS.
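The replication DMVs listed in step 16 can be queried directly on the instance where replication is configured; for example, as a simple inspection sketch:

```sql
-- Articles currently cached for replication.
SELECT * FROM sys.dm_repl_articles;

-- Information about replicated transactions, useful when
-- investigating replication latency.
SELECT * FROM sys.dm_repl_traninfo;
```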

How it works...

SQL Server Replication Monitor is designed to monitor replication processes; the queries that Replication Monitor uses to perform calculations and gather data are cached and refreshed on a periodic basis. The tool can also be invoked from the command line: \Program Files\Microsoft SQL Server\100\Tools\Binn\sqlmonitor.exe.

The main Replication Monitor window (including all tabs) automatically refreshes every five seconds by default. Automatic refreshes do not force a refresh of the cache; the user interface displays the most recent version of the data from the cache. You can customize the refresh rate for all windows associated with a Publisher by editing the Publisher settings, and you can also disable automatic refreshes for a Publisher.

Cache refresh is handled by a SQL Server Agent job, Replication monitoring refresher for distribution. The job runs continuously, but the cache refresh schedule is based on waiting a certain amount of time after the previous refresh. The Replication Monitor detail windows are not automatically refreshed by default, with the exception of windows related to subscriptions that are synchronizing. All the windows in Replication Monitor can be refreshed by pressing the F5 function key, or by right-clicking a node in the Replication Monitor tree and clicking Refresh. Manual refreshes force a refresh of the cache.


Implementing security for SQL Server Agent jobs management

The integrity of your data is only as good as your ability to secure the data platform. Typically, security doesn't stop at authentication and authorization for access to SQL Server; the additional processes of encrypting data and providing a secured backup system are essential. Scheduling such data management activities is carried out using the SQL Server Agent service. A SQL Agent job is a complex database object with many details involved in its definition. There are even more details to consider while a job is actually running, but it's not convenient to examine the details of SQL Agent jobs using SSMS, and the filtering is less than ideal.

In this recipe, we will be implementing security for SQL Server Agent Jobs management by using the credentials and proxy methods. A credential is a record that contains the authentication information required to connect to a resource outside of SQL Server. A proxy is really just the mapping of an SQL Server credential to a specific type of SQL Server Agent operation. The SQL Server Agent service manages many different objects on the server, each one responsible for its own function in the larger automation infrastructure. Mastering these objects is critical to effectively implement an automation plan.

The key aspects of securing the SQL Server Agent service are proxies and the login assigned to start the agent service. Although most of the processing performed by an SQL Server Agent automation structure will be directed at SQL Server objects, a situation might arise in which actions require access to resources outside of SQL Server, at the operating system level. In this case, authentication and permissions can be a problem. As the agent is an important service that performs the necessary tasks for automated administration, you must also restrict its permissions to the minimum required to perform those actions.

Getting ready

SQL Server Agent offers a security context for a job through a proxy, without requiring elevated privileges for the users; however, not all SQL Server Agent jobs require a proxy. Each proxy corresponds to a security credential and can be associated with a set of subsystems and a set of logins. The proxy can be used only for job steps that use a subsystem associated with the proxy. To create a job step that uses a specific proxy, the job owner must either use a login associated with that proxy or be a member of a role with unrestricted access to proxies.

This process occurs in two steps:

  • Firstly, you must create the credential at the server level
  • Secondly, you must map that credential to a proxy, which specifies the contexts and operation types with which you are allowed to use the credential

This will provide a trusted authentication identity for the tasks within SQL Server as well as a Windows identity for tasks outside SQL Server. Since this is a security object, you will create the credential from the security section in the Object Explorer of the SSMS.

How to do it...

To begin implementing security for SQL Server Agent Jobs, let us go through the required steps to create a credential as follows:

  1. Open the SQL Server Management Studio and connect to the SQL Server instance.
  2. Locate the Security node in the Object Explorer. Expand the node and locate the Credentials node inside.
  3. Right-click the Credentials node and select New Credential from the menu. Provide a name for your credential in the Credential name text box.
  4. Enter the name of the Windows account that you wish to map in the Identity text box.
  5. Enter and confirm the password of the Windows account (as shown in the next screenshot). The system will use this password when it attempts to authenticate under this identity.

  6. Click OK to accept the settings.
  7. We should see the new credential in the list in the Object Explorer.
  8. The steps we followed using the GUI tool can also be accomplished with the following TSQL statement:

    -- Implementing SQL Server Agent security
    -- Creating a SQL Agent credential
    CREATE CREDENTIAL [<credential name>]
        WITH IDENTITY = N'<domain>\<windows account>',
        SECRET = N'<password>';

  9. Locate the SQL Server Agent node in the Object Explorer. Expand the node and locate the Proxies node.
  10. If you expand the Proxies node, you will see a list of SQL Server Agent task types. Any proxy that you have configured for a specific task type will appear in the node list associated with that task.
  11. Right-click the Proxies node and select New Proxy from the menu to open the New Proxy Account dialog.
  12. Enter a name for the proxy in the Proxy Name text box.
  13. Now enter the name of the previously created credential in the Credential Name text box.
  14. Select the tasks (refer to the following screenshot) for which you want this proxy to authenticate. If you click the Subsystem check box, all tasks will get selected.

  15. Click the OK button to accept the settings.
  16. For this recipe, we have chosen only three task types: Replication Distributor, SQL Server Integration Package, and PowerShell.

We now have successfully created the credential and proxy account to implement security for SQL Server Agent Jobs management using the SSMS tool.
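The proxy steps above can also be scripted. The following sketch assumes a credential named AgentJobCredential was created as in the earlier steps; the proxy and login names are examples, and only two of the chosen subsystems are shown:

```sql
USE msdb;
-- Create the proxy and map it to the credential.
EXEC dbo.sp_add_proxy
    @proxy_name = N'AgentJobProxy',
    @credential_name = N'AgentJobCredential',
    @enabled = 1;

-- Grant the proxy to subsystems selected in step 14.
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'AgentJobProxy', @subsystem_name = N'Distribution';
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'AgentJobProxy', @subsystem_name = N'PowerShell';

-- Allow a specific login to reference the proxy in job steps.
EXEC dbo.sp_grant_login_to_proxy
    @login_name = N'DOMAIN\JobAuthor', @proxy_name = N'AgentJobProxy';
```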

How it works...

A proxy provides the SQL Server Agent service with access to the security credentials of a Windows user, without the need to assign elevated privileges to users. A job step that uses a proxy can access the specified job subsystems (as we assigned previously) by using that security context.

SQL Server Agent checks subsystem access for a proxy each time the job step runs; if the proxy no longer has access to the subsystem, the job step fails. Otherwise, SQL Server Agent impersonates the user specified in the proxy and runs the job step. The end result of a job can be obtained from the corresponding job details: right-click the job and select View History. Similarly, we can query the sysjobhistory table in msdb, which contains information about the execution of scheduled jobs by SQL Server Agent. However, a job step that executes TSQL does not use SQL Server Agent proxies; TSQL job steps run in the security context of the owner of the job. To avoid errors or interruption of such a job step due to the job owner's permissions, you can set the security context for a TSQL job step using the sp_add_jobstep stored procedure, passing a value to the @database_user_name parameter.
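Setting that security context for a TSQL job step can be sketched as follows; the job, user, database, and procedure names here are hypothetical:

```sql
USE msdb;
EXEC dbo.sp_add_jobstep
    @job_name = N'NightlyMaintenance',
    @step_name = N'Rebuild indexes',
    @subsystem = N'TSQL',
    @database_name = N'AdventureWorks2008R2',
    -- Run the TSQL under this database user instead of the job owner.
    @database_user_name = N'maintenance_user',
    @command = N'EXEC dbo.usp_RebuildIndexes;';
```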

The user specified in the credential and proxy creation must have permission to connect to that instance of SQL Server, and a user must be granted access to a proxy to use it in a job step. Access can be granted to three types of security principals: SQL Server logins, server roles, and roles within the msdb database. Additionally, the Windows account specified in the credential must have the Log on as a batch job security permission on the computer on which SQL Server is running.

There's more...

If your SQL Server instance is upgraded from version 2000 to 2008 or 2008 R2, during the upgrade process all the user proxy accounts that existed on SQL Server 2000 are changed to the temporary global proxy account UpgradedProxyAccount. After upgrading, this temporary global proxy account has access only to those subsystems that were explicitly used, not to all subsystems.

In order to use the SQL Server Agent service, users must be a member of one or more of the msdb fixed database roles: SQLAgentUserRole, SQLAgentReaderRole, or SQLAgentOperatorRole. These roles are stored in the msdb system database, and membership in them must be granted explicitly. Users who are members of the sysadmin fixed server role have full access to SQL Server Agent and do not need to be a member of these fixed database roles. If a user is not a member of one of these database roles or of the sysadmin role, the SQL Server Agent node is not available to them when they connect to SQL Server using SQL Server Management Studio.
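Granting one of these msdb roles can be sketched as follows; the login name is an example:

```sql
-- Map the login into msdb and grant the least-privileged Agent role.
USE msdb;
CREATE USER [DOMAIN\JobAuthor] FOR LOGIN [DOMAIN\JobAuthor];
EXEC sp_addrolemember N'SQLAgentUserRole', N'DOMAIN\JobAuthor';
```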


This article covered the SSMS features that are new and enhanced in SQL Server 2008 R2 and that enable operational efficiency through joint best practices and integrated solutions. In the next article, we will cover an important topic: Multiserver Management Using Utility Explorer.

You've been reading an excerpt of:

Microsoft SQL Server 2008 R2 Administration Cookbook
