IBM DB2 9.7 Advanced Database Administration Cookbook

by Adrian Neagu and Robert Pelletier | March 2012

In this article by Adrian Neagu and Robert Pelletier, we will cover:

  • Setting up HADR by using the command line
  • Setting up HADR by using Control Center

Introduction

IBM DB2 integrates a multitude of high availability solutions that can be employed to increase the availability of databases. There are software-based high availability solutions, such as SQL Replication, Q Replication, HADR, DB2 ACS, and IBM TSA; hardware-based solutions, such as IBM PPRC, HACMP, and FlashCopy; and hybrid solutions, such as the new clustering solution provided by DB2 pureScale technology, covered in Chapter 14, IBM pureScale Technology and DB2. Obviously, we can choose to implement one solution or use a virtually unlimited number of combinations to ensure that we are highly protected from any type of disaster. In the following recipes, we will cover how to set up HADR and the DB2 fault monitor as high availability solutions.

HADR is a high availability software solution provided by IBM for DB2 database protection in the case of disaster or critical database failure. HADR is an abbreviation for High Availability Disaster Recovery. The technology itself can be classified as a replication solution. Basically, this technology replicates data by sending and replaying logs from a source database to a destination database. The source database, by convention, is called the primary database; the destination database is called the standby database.

Some important benefits of HADR:

  • Transparent takeover (switchover) or takeover by force (failover) for connected clients
  • Automatic client reroute capabilities
  • It is a very fast method in terms of recoverability
  • It has a negligible impact on transaction performance
  • The cost is low compared with a hardware replication solution

Some restrictions with using HADR:

  • Backup operations are not supported on standby databases
  • HADR cannot be implemented with multipartitioned databases
  • The primary and standby databases must run on the same operating system (same bit size) and the same version of the DB2 database system
  • The system clock must be synchronized on both primary and standby servers

Operations replicated using HADR:

  • Data definition language (DDL)
  • Data manipulation language (DML)
  • Buffer pool operations
  • Table space operations
  • Online reorganization
  • Offline reorganization
  • Metadata for stored procedures and user-defined functions

Operations that do not replicate using HADR:

  • Tables created with the NOT LOGGED INITIALLY option
  • Non-logged LOB columns
  • Database configuration and database manager configuration parameters
  • Objects external to the database, such as related objects and library files
  • The recovery history file (db2rhist.asc) and changes made to it

Setting up HADR by using the command line

Setting up HADR is straightforward. You can use a variety of methods: the command line, the Control Center, or the IBM Optim Database Administrator HADR setup wizard. In the following recipe, we will set up HADR using the command line.

Getting ready

In this recipe, nodedb21 will be used for the initial primary database, and nodedb22 for the initial standby database. We use the term initial because, in the following recipes, we will initiate takeover and takeover by force operations, and the databases will exchange their roles. All operations will be conducted on the non-partitioned NAV database, under instance db2inst1 on nodedb21, and db2inst1 on nodedb22.

How to do it...

To set up a HADR configuration, we will use the following steps:

  • Install IBM DB2 9.7 ESE in location /opt/ibm/db2/V9.7_01, on nodedb22
  • Create additional directories for log archiving, backup, and mirror log locations, on both nodes
  • Set proper permissions on the new directories
  • Configure log archiving and log mirroring
  • Configure the LOGINDEXBUILD and INDEXREC parameters
  • Back up the primary database
  • Copy the primary database backup to nodedb22
  • Restore the database on nodedb22
  • Set up HADR communication ports
  • Configure HADR parameters on both databases
  • Initiate HADR on the standby database
  • Initiate HADR on the primary database

Install IBM DB2 ESE on nodedb22

Install IBM DB2 9.7 Enterprise Server Edition in location /opt/ibm/db2/V9.7_01, on nodedb22; create users db2inst1 and db2fenc1, and instance db2inst1, during installation.

Creating additional directories for table space containers, archive logs, backup, and mirror logs

  1. Create one directory for table space containers of the NAV application on nodedb22:

    [root@nodedb22 ~]# mkdir -p /data/db2/db2inst1/nav
     [root@nodedb22 ~]#

  2. Create directories for the archive logs location on both servers:

    [root@nodedb22 ~]# mkdir -p /data/db2/db2inst1/logarchives
     [root@nodedb22 ~]#
     [root@nodedb21 ~]# mkdir -p /data/db2/db2inst1/logarchives
     [root@nodedb21 ~]#

  3. Create directories for the database backup location on both servers:

    [root@nodedb21 ~]# mkdir -p /data/db2/db2inst1/backup
     [root@nodedb21 ~]#
     [root@nodedb22 ~]# mkdir -p /data/db2/db2inst1/backup
     [root@nodedb22 ~]#

  4. Create directories for the mirror log location on both servers:

    [root@nodedb21 ~]# mkdir -p /data/db2/db2inst1/mirrorlogs
     [root@nodedb21 ~]#
     [root@nodedb22 ~]# mkdir -p /data/db2/db2inst1/mirrorlogs
     [root@nodedb22 ~]#

  5. This is just an example; in practice, the mirror logs should be stored in a safe location. If possible, use an NFS mount exported from another server.
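The directory and permission steps above can be scripted in one pass. The following sketch only prints the commands it would run (over ssh as root, which is an assumption about your access model), so you can review them before executing anything:

```shell
#!/bin/sh
# Sketch: emit the mkdir/chown commands for one node. Hostnames, the BASE
# path, and the db2iadm1 group match this recipe; adapt them as needed.
# Note: the nav container directory is only strictly needed on the standby.
BASE=/data/db2/db2inst1

hadr_dir_cmds() {   # $1 = hostname
  for d in nav logarchives backup mirrorlogs; do
    echo "ssh root@$1 mkdir -p $BASE/$d"
  done
  echo "ssh root@$1 chown -R db2inst1:db2iadm1 $BASE"
}

hadr_dir_cmds nodedb21
hadr_dir_cmds nodedb22
```

Piping the output through `sh` would execute it; printing first keeps the sketch safe to run as-is.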

Setting permissions on the new directories

  1. Set db2inst1 as the owner of the directories where we will configure the archive logs and log mirrors, and where we will restore the NAV application's table space containers, on both servers:

    [root@nodedb21 ~]# chown -R db2inst1:db2iadm1 /data/db2/db2inst1
     [root@nodedb21 ~]#
     [root@nodedb22 ~]# chown -R db2inst1:db2iadm1 /data/db2/db2inst1
     [root@nodedb22 ~]#

Configuring archive log and mirror log locations

  1. Connect to the NAV database:

    [db2inst1@nodedb21 ~]$ db2 "CONNECT TO NAV"
     
        Database Connection Information
     
      Database server        = DB2/LINUXX8664 9.7.4
      SQL authorization ID   = DB2INST1
      Local database alias   = NAV

  2. Quiesce the NAV database:

    [db2inst1@nodedb21 ~]$ db2 "QUIESCE DATABASE IMMEDIATE"
     DB20000I  The QUIESCE DATABASE command completed successfully.
     [db2inst1@nodedb21 ~]$

  3. Set the log archive location:

    [db2inst1@nodedb21 ~]$ db2 "UPDATE DB CFG FOR NAV USING
       logarchmeth1 'DISK:/data/db2/db2inst1/logarchives'"
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$

  4. Set the number of primary logs; usually, in a HADR configuration, it should be set to a greater value than in a normal database:

    [db2inst1@nodedb21 ~]$ db2 "UPDATE DB CFG FOR NAV USING
       LOGPRIMARY 20"
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$

  5. Set the number of secondary logs:

    [db2inst1@nodedb21 ~]$ db2 "UPDATE DB CFG FOR NAV USING
       LOGSECOND 5"
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$

  6. Set the log file size; it is also recommended to set this larger than in a normal database:

    [db2inst1@nodedb21 ~]$  db2 "UPDATE DB CFG FOR NAV USING
       LOGFILSIZ 2048 "
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$

  7. Set a mirror log location, to be used in case the primary log location's host fails (these logs will be needed for replay on the standby database):

    [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
       USING MIRRORLOGPATH /data/db2/db2inst1/mirrorlogs"
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$

  8. Set the log buffer size. Use a larger log buffer on both the primary and standby databases than in a normal database, to avoid log-buffer-full events:

    [db2inst1@nodedb21 ~]$ db2 "UPDATE DB CFG FOR NAV USING
       LOGBUFSZ 1024 "
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$
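As a sanity check on the values chosen above, the active log space they reserve is easy to compute: DB2 log pages are 4 KB, so (LOGPRIMARY + LOGSECOND) × LOGFILSIZ × 4 KB gives the ceiling. A quick sketch, using the values set in this recipe:

```shell
# Sketch: estimate the maximum active log space reserved by the settings
# used in this recipe (LOGFILSIZ is expressed in 4 KB pages).
LOGPRIMARY=20
LOGSECOND=5
LOGFILSIZ=2048

log_space_mb=$(( (LOGPRIMARY + LOGSECOND) * LOGFILSIZ * 4 / 1024 ))
echo "Maximum active log space: ${log_space_mb} MB"
```

Make sure the file systems holding the active and mirror log paths can absorb at least this much.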

  9. Unquiesce the NAV database:

    [db2inst1@nodedb21 ~]$ db2 "UNQUIESCE DATABASE"
     DB20000I  The UNQUIESCE DATABASE command completed successfully.
     [db2inst1@nodedb21 ~]$

    LOGBUFSZ should be correlated with network tuning; try to set the TCP tunables (receive and send buffers) to appropriate values.

Configuring LOGINDEXBUILD and INDEXREC parameters

  1. The LOGINDEXBUILD parameter specifies whether operations such as index creation, index rebuild, and table reorganization are fully logged (ON) or not (OFF). In a HADR configuration, this parameter should usually be set to ON; if you plan to use the standby database for reporting, setting it to ON is mandatory. If it is set to OFF, there is not enough log information to build the indexes on the standby database, so indexes created or rebuilt on the primary are marked as invalid on the standby. If your network bandwidth is low, you can leave it set to OFF, but the time needed to activate the standby database will increase considerably, because the invalid indexes have to be rebuilt. You can also control index logging at the table level, by setting the table option LOG INDEX BUILD to ON or OFF. Set LOGINDEXBUILD to ON:

    [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
       USING LOGINDEXBUILD ON"
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$

  2. The INDEXREC parameter controls the rebuild of invalid indexes at database startup. In HADR configurations, it should be set to RESTART on both databases:

    [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
       USING INDEXREC RESTART"
     DB20000I  The UPDATE DATABASE CONFIGURATION command completed
       successfully.
     [db2inst1@nodedb21 ~]$

Backing up the primary database

  1. Back up the database with the COMPRESS option, to save space; compressing the backup piece is especially useful when you have a very large database:

    [db2inst1@nodedb21 ~]$ db2 terminate
     DB20000I  The TERMINATE command completed successfully.
     [db2inst1@nodedb21 ~]$ db2 "BACKUP DATABASE NAV TO
       /data/db2/db2inst1/backup COMPRESS"
     Backup successful. The timestamp for this backup image is :
       20110707150659
     [db2inst1@nodedb21 ~]$

Copying the database backup to nodedb22

  1. Copy the database backup to location /data/db2/db2inst1/backup on nodedb22:

    [db2inst1@nodedb21 ~]$ scp /data/db2/db2inst1/backup/
     NAV.0.db2inst1.NODE0000.CATN0000.
       20110707150659.001 nodedb22:/data/db2/db2inst1/backup
     db2inst1@nodedb22's password:
     NAV.0.db2inst1.NODE0000.CATN0000.20110707150659.001
     [db2inst1@nodedb21 ~]$

Restoring the database NAV on nodedb22

  1. Restore the database on the standby location:

    [db2inst1@nodedb22 ~]$ db2 "RESTORE DATABASE NAV FROM /data/db2/
     db2inst1/backup TAKEN AT 20110707150659 REPLACE
       HISTORY FILE"
     DB20000I  The RESTORE DATABASE command completed successfully.
     [db2inst1@nodedb22 ~]$

Setting up HADR communication ports

  1. Add the following two entries to /etc/services, on both servers:

    DB2_HADR_NAV1      55006/tcp
     DB2_HADR_NAV2      55007/tcp
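Because these entries must exist on both servers and /etc/services is shared by every service on the host, a small guard against duplicate lines is worthwhile. A sketch, writing to a local copy (./services.test is a placeholder path) rather than the real /etc/services:

```shell
# Sketch: add the two HADR service entries only if they are not already
# present. SERVICES points at a scratch file here so the sketch is safe to
# run; on a real system you would point it at /etc/services (as root).
SERVICES=./services.test

add_service() {
  # $1 = service name, $2 = port/protocol
  grep -q "^$1[[:space:]]" "$SERVICES" 2>/dev/null || \
    printf '%s\t%s\n' "$1" "$2" >> "$SERVICES"
}

add_service DB2_HADR_NAV1 55006/tcp
add_service DB2_HADR_NAV2 55007/tcp
# Running the same calls again must not duplicate the entries:
add_service DB2_HADR_NAV1 55006/tcp
```

The `grep || append` pattern keeps the script idempotent, so it can be re-run safely on both nodes.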


Setting up HADR parameters on the primary database

  1. To specify the participating hosts, we have to configure the following parameters:
    • HADR_LOCAL_HOST – Specifies the local host; this parameter accepts either an IP address or a hostname as its value.

      [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
        USING HADR_LOCAL_HOST nodedb21"
      DB20000I  The UPDATE DATABASE CONFIGURATION command completed
        successfully.
      [db2inst1@nodedb21 ~]$

    • HADR_REMOTE_HOST – Specifies the remote host; this parameter also accepts either an IP address or a hostname. On the primary database, it points to the standby database host, and on the standby database, it points to the primary database host.

      [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
        USING HADR_REMOTE_HOST nodedb22"
      DB20000I  The UPDATE DATABASE CONFIGURATION command completed
        successfully.
      [db2inst1@nodedb21 ~]$

  2. To specify the communication services, we have to configure the following parameters:
    • HADR_LOCAL_SVC – Specifies the local HADR service name or port.

      [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
        USING HADR_LOCAL_SVC DB2_HADR_NAV1"
      DB20000I  The UPDATE DATABASE CONFIGURATION command completed
        successfully.
      [db2inst1@nodedb21 ~]$

    • HADR_REMOTE_SVC – Specifies the remote HADR service name or port.

      [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
        USING HADR_REMOTE_SVC DB2_HADR_NAV2"
      DB20000I  The UPDATE DATABASE CONFIGURATION command completed
        successfully.
      [db2inst1@nodedb21 ~]$

  3. Specify the remote instance; on the primary database, its value is the instance of the standby database:
    • HADR_REMOTE_INST – Specifies the remote instance.

      [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
        USING HADR_REMOTE_INST db2inst1"
      DB20000I  The UPDATE DATABASE CONFIGURATION command completed
        successfully.
      [db2inst1@nodedb21 ~]$

  4. To specify the synchronization mode, we have to configure the following parameter:
    • HADR_SYNCMODE – Determines how the primary log writes are synchronized with the standby database. It can have the following values: SYNC, NEARSYNC, and ASYNC.

      [db2inst1@nodedb21 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
        USING HADR_SYNCMODE ASYNC"
      DB20000I  The UPDATE DATABASE CONFIGURATION command completed
        successfully.
      [db2inst1@nodedb21 ~]$

    • Restart the database to activate the HADR parameters:

      [db2inst1@nodedb21 ~]$ db2 "DEACTIVATE DATABASE NAV"
      SQL1496W  Deactivate database is successful, but the database was
        not activated.
      [db2inst1@nodedb21 ~]$ db2 "ACTIVATE DATABASE NAV"
      DB20000I  The ACTIVATE DATABASE command completed successfully.
      [db2inst1@nodedb21 ~]$

Setting up HADR parameters on the standby database

  1. Set the local and remote hosts:

    [db2inst1@nodedb22 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
      USING HADR_LOCAL_HOST nodedb22 "
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed
      successfully.
    [db2inst1@nodedb22 ~]$
    [db2inst1@nodedb22 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
      USING HADR_REMOTE_HOST nodedb21 "
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed
      successfully.
    [db2inst1@nodedb22 ~]$

  2. Set the local and remote communication services:

    [db2inst1@nodedb22 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
      USING HADR_LOCAL_SVC DB2_HADR_NAV2 "
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed
      successfully.
    [db2inst1@nodedb22 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
      USING HADR_REMOTE_SVC DB2_HADR_NAV1 "
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed
      successfully.

  3. Set the synchronization mode; it must be identical on both databases:

    [db2inst1@nodedb22 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
      USING HADR_SYNCMODE ASYNC "
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed
      successfully.
    [db2inst1@nodedb22 ~]$

  4. Set the remote instance, in this case, the instance of the primary database:

    [db2inst1@nodedb22 ~]$ db2 "UPDATE DATABASE CONFIGURATION FOR NAV
      USING HADR_REMOTE_INST db2inst1 "
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed
      successfully.
    [db2inst1@nodedb22 ~]$
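Notice that the primary and standby configurations are mirror images of each other: only which host, service, and instance count as "local" versus "remote" changes. The helper below (a sketch; it only prints the db2 commands, so review before running them) emits both command sets from one function:

```shell
#!/bin/sh
# Sketch: generate the HADR UPDATE commands for either role. The database
# name, hosts, services, and instance match this recipe.
hadr_cfg() {
  # $1=local host  $2=remote host  $3=local svc  $4=remote svc  $5=remote inst
  for p in "HADR_LOCAL_HOST $1"  "HADR_REMOTE_HOST $2" \
           "HADR_LOCAL_SVC $3"   "HADR_REMOTE_SVC $4"  \
           "HADR_REMOTE_INST $5" "HADR_SYNCMODE ASYNC"; do
    echo "db2 \"UPDATE DATABASE CONFIGURATION FOR NAV USING $p\""
  done
}

hadr_cfg nodedb21 nodedb22 DB2_HADR_NAV1 DB2_HADR_NAV2 db2inst1   # primary
hadr_cfg nodedb22 nodedb21 DB2_HADR_NAV2 DB2_HADR_NAV1 db2inst1   # standby
```

Generating both sides from one function makes it harder to mismatch a host/service pair, which is the most common HADR setup mistake.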

Starting HADR on standby database

  1. The first step to activate HADR is to start HADR on the standby database:

    [db2inst1@nodedb22 ~]$ db2 "START HADR ON DATABASE NAV AS STANDBY"
    DB20000I  The START HADR ON DATABASE command completed
      successfully.
    [db2inst1@nodedb22 ~]$

  2. Activate the database, if necessary:

    [db2inst1@nodedb22 ~]$ db2 "ACTIVATE DATABASE NAV"
    DB20000I  The ACTIVATE DATABASE command completed successfully.
    [db2inst1@nodedb22 ~]$

  3. The parameter hadr_role should now change its value to STANDBY:

    [db2inst1@nodedb22 ~]$ db2 "GET DB CFG FOR NAV" | grep "HADR
      database role"
     HADR database role                               = STANDBY
    [db2inst1@nodedb22 ~]$

Starting HADR on primary database

  1. Activate HADR on the primary database:

    [db2inst1@nodedb21 ~]$ db2 "START HADR ON DATABASE NAV AS PRIMARY"
    DB20000I  The START HADR ON DATABASE command completed
      successfully.
    [db2inst1@nodedb21 ~]$

  2. The parameter hadr_role will have the value changed to PRIMARY:

    [db2inst1@nodedb21 ~]$ db2 "GET DB CFG FOR NAV" | grep "HADR
      database role"
     HADR database role                                = PRIMARY
    [db2inst1@nodedb21 ~]$

Monitoring HADR

  1. To monitor the status of HADR, you can use the db2pd command:

    [db2inst1@nodedb21 ~]$ db2pd -d NAV -hadr
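In monitoring scripts, it can be handy to pull just the role and state out of that output. The sample capture below is hypothetical and trimmed to the columns that matter (your fix pack's exact layout may differ); on a live system you would pipe `db2pd -d NAV -hadr` into the same awk:

```shell
# Sketch: extract the HADR role and state from db2pd-style output.
# "sample" is a hypothetical, trimmed capture, not a guaranteed format.
sample='HADR Information:
Role    State                SyncMode HeartBeatsMissed   LogGapRunAvg (bytes)
Primary Peer                 Async    0                  0'

printf '%s\n' "$sample" | awk '$1 == "Primary" || $1 == "Standby" {print $1, $2}'
```

A cron job wrapping this could alert whenever the state is anything other than Peer.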

How it works…

The mechanism used by HADR to protect against data loss consists of transmitting data changes continually from the primary database to the standby database. Actually, the primary database sends the contents of its log buffer to be replayed on the standby. The transmission is realized between two special EDU (engine dispatchable unit) processes: db2hadrp on the primary, and db2hadrs on the standby.

The health and integrity of HADR are monitored continuously by a mechanism named heartbeats, in which the primary and standby databases ping each other from time to time; to be more precise, a heartbeat message is sent every quarter of the hadr_timeout value, which is specified in seconds.
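For example, taking 120 seconds for hadr_timeout (the DB2 9.7 default), the heartbeat cadence works out as:

```shell
# Sketch: the heartbeat interval is one quarter of hadr_timeout.
HADR_TIMEOUT=120
HEARTBEAT=$(( HADR_TIMEOUT / 4 ))
echo "A heartbeat is exchanged roughly every ${HEARTBEAT} seconds"
```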

HADR tries at all times to keep the standby database synchronized with the primary database. HADR implements three levels of synchronization, explained in more detail in the recipes that follow: ASYNC, NEARSYNC, and SYNC.

There could be situations when the databases are not synchronized; for example, we could experience a network failure or another unpredicted event that interrupts the log record transmission. To resolve log desynchronization, HADR uses a mechanism named log catchup. It is based on reading the log records not yet applied on the standby database from the archive log files generated on the primary database. This is the main reason why the primary database has to be in archive log mode. The log replay process on the standby database is very similar to a database recovery (the standby database is in a continuous rollforward pending state).

If the databases are synchronized, then they are in peer state.

The database roles are interchangeable: the primary may become the standby, and vice versa, when a takeover operation is initiated. If the primary database is no longer available, we should initiate a takeover by force (failover) operation. In this case, we can later restore the database from the standby site or reinitialize the former primary, if possible.

There's more…

Before you plan to set up a HADR configuration, you have to meet some requirements regarding table spaces:

  • Table space type must be the same on both servers
  • Table space container paths must be the same on both servers
  • Table space container sizes must be the same
  • Table space container types (raw or file) must be the same

The hadr_timeout and hadr_peer_window database configuration parameters

There are two more important parameters which can influence the behavior and response of HADR connection failure:

  • HADR_TIMEOUT – Specifies, in seconds, the amount of time HADR waits before considering communication between the database pair lost. When HADR detects a network disconnection, all transactions running on the primary will hang for the amount of time specified by this parameter.
  • HADR_PEER_WINDOW – Specifies, in seconds, the amount of time during which the primary database continues to block transactions after the database pair has entered the disconnected state.

Set these parameters in accordance with your internal MTTR (mean time to recovery) targets.
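To see how the two parameters interact, consider illustrative values (not recommendations): after a network failure, commits on the primary can stall for up to the detection time plus the peer window.

```shell
# Sketch: worst-case commit stall after a network failure, under the
# simplifying assumption that the peer window runs after the timeout expires.
# The values below are illustrative only.
HADR_TIMEOUT=120       # seconds to declare the connection lost
HADR_PEER_WINDOW=300   # seconds the primary still honors peer-state guarantees

STALL=$(( HADR_TIMEOUT + HADR_PEER_WINDOW ))
echo "Commits may block for up to ${STALL} seconds"
```

If that total exceeds your MTTR target, lower one of the two values or plan for automated takeover within the peer window.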

See also

See Chapter 9, Problem Determination, Event Sources, and Files, for monitoring HADR; you can also use the db2top or GET SNAPSHOT ON DATABASE NAV commands.

Setting up HADR by using Control Center

In this recipe, we will cover how to set up HADR using the wizard provided by Control Center.

Getting ready

In this recipe, we will set up the same HADR configuration that we built using the command line. However, there are some differences; we will skip the configuration of archive logging and mirror logging, and focus on the HADR part itself.

  • Install IBM DB2 ESE according to the Install IBM DB2 ESE on nodedb22 subsection under the Setting up HADR by using the command line recipe.
  • Create directories according to the Creating additional directories for table space containers, archive logs, backup, and mirror logs subsection under the same recipe.
  • Set permissions on the directories according to the Setting permissions on the new directories subsection under the same recipe.
  • Configure archive logging and log mirroring according to the Configuring archive log and mirror log locations subsection under the same recipe.
  • Catalog the admin nodedb22 node (you can catalog the admin nodes using the setup provided by the wizard at step 3):

    [db2inst1@nodedb21 ~]$ db2 "CATALOG ADMIN TCPIP NODE NODEDB22
      REMOTE nodedb22 SYSTEM nodedb22 OSTYPE LINUXX8664"
    DB20000I  The CATALOG ADMIN TCPIP NODE command completed
      successfully.
    DB21056W  Directory changes may not be effective until the
      directory cache is refreshed.
    [db2inst1@nodedb21 ~]$

  • Catalog instance db2inst1 under the admin node:

    [db2inst1@nodedb21 ~]$ db2 "CATALOG TCPIP NODE node22 REMOTE
      NODEDB22 SERVER 50001 REMOTE_INSTANCE db2inst1 SYSTEM NODEDB22
      OSTYPE LINUXX8664"
    DB20000I  The CATALOG TCPIP NODE command completed successfully.
    DB21056W  Directory changes may not be effective until the
      directory cache is refreshed.
    [db2inst1@nodedb21 ~]$

How to do it...

  1. Right-click on the database NAV and choose High Availability Disaster Recovery | Set Up….

  2. Skip the introduction; next, we can see that our database is in archive log mode and enabled for log shipping to a standby location:

  3. In the next step, we are asked to identify the standby database:

  4. Next, we'll do a full database backup, which will be used for restoring NAV database on nodedb22:

  5. Choose /data/db2/db2inst1/backup as the backup location:

  6. Next, select the backup made previously. It is recommended to choose the most recent backup; in this way, fewer log archives need to be replayed on the standby database.

  7. Select the same location for nodedb22, for the database backup location:

  8. Set the hostname, HADR service name, and HADR port number. You can choose new values or use existing ones. The service name and port number will be added to /etc/services, on both hosts:

  9. Set the automatic client reroute; for NAV on nodedb21, the alternative will be NAVSTDBY from host nodedb22, and vice versa:

  10. Set the synchronization mode to Asynchronous. We'll delve into more detail about synchronization modes and related parameters, later.

  11. Next, on the summary screen, you can review the steps and the command lines used to implement HADR:

  12. Click on Finish, to start the implementation of HADR; next, we'll see a progress box showing the steps as follows:

  13. If the setup was successful, we should now be able to manage and monitor our HADR configuration by right-clicking on the NAV database.

How it works...

The main difference between using the Control Center and the command line is that you need to additionally catalog admin nodes on both sides.

You can also use the HADR wizard provided by IBM Optim Database Administrator, if you are familiar with that tool.

Summary

This article mainly covered High Availability Disaster Recovery (HADR) as a high availability solution, alongside the DB2 fault monitor, which is used for monitoring and ensuring the availability of instances that might be brought down by unexpected events, such as bugs or other types of malfunction.

