The individual stages for the different layers are shown in the following diagram:
The DB2 database layer
The first layer is the DB2 database layer, which involves the following tasks:
- For unidirectional replication, and for all replication scenarios that use unidirectional replication as the base, we need to enable the source database for archive logging (the target database does not need it). For multi-directional replication, all the source and target databases need to be enabled for archive logging.
- We need to identify which tables we want to replicate. One of the steps is to set the DATA CAPTURE CHANGES flag for each source table, which is done automatically when the Q subscription is created. Setting this flag affects the minimum point-in-time recovery value for the table space containing the table, which should be noted carefully if table space recoveries are performed.
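The two database-layer tasks above can be sketched from the DB2 command line. This is a minimal sketch: the database name SOURCEDB, the table DB2ADMIN.DEPT, and the file system paths are illustrative assumptions, not names from this setup.

```shell
# Enable archive logging on the source database (paths are illustrative).
db2 "UPDATE DB CFG FOR SOURCEDB USING LOGARCHMETH1 DISK:/db2/archlogs"
# Switching to archive logging puts the database into backup pending state,
# so a backup is required before it can be used again.
db2 "BACKUP DATABASE SOURCEDB TO /db2/backups"

# The DATA CAPTURE CHANGES flag is set automatically when the Q subscription
# is created, but it can also be set manually on a source table:
db2 "CONNECT TO SOURCEDB"
db2 "ALTER TABLE DB2ADMIN.DEPT DATA CAPTURE CHANGES"
```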
Before moving on to the WebSphere MQ layer, let’s quickly look at the compatibility requirements for the database name, the table name, and the column names. We will also discuss whether or not we need unique indexes on the source and target tables.
Database/table/column name compatibility
In Q replication, the source and target database names and table names do not have to match on all systems. The database name is specified when the control tables are created. The source and target table names are specified in the Q subscription definition.
Now let’s look at whether we need unique indexes on the source and target tables. We do not need to be able to identify unique rows on the source table, but we do need to do so on the target table. Therefore, the target table should have one of:
- Primary key
- Unique constraint
- Unique index
If none of these exist, then Q Apply will apply the updates using all columns.
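As a sketch, any one of the three options could be created on the target; the table and column names (DB2ADMIN.DEPT, DEPTNO) and the database name TARGETDB are illustrative assumptions.

```shell
db2 "CONNECT TO TARGETDB"
# Option 1: a primary key
db2 "ALTER TABLE DB2ADMIN.DEPT ADD PRIMARY KEY (DEPTNO)"
# Option 2: a unique constraint (the column must be defined NOT NULL)
db2 "ALTER TABLE DB2ADMIN.DEPT ADD CONSTRAINT DEPT_UQ UNIQUE (DEPTNO)"
# Option 3: a unique index
db2 "CREATE UNIQUE INDEX DB2ADMIN.DEPT_UX ON DB2ADMIN.DEPT (DEPTNO)"
```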
However, the source table must have the same constraints as the target table, so any constraints that exist at the target must also exist at the source, which is shown in the following diagram:
The WebSphere MQ layer
This is the second layer we should install and test—if this layer does not work then Q replication will not work!
We can either install the WebSphere MQ Server code or the WebSphere MQ Client code. Throughout this book, we will be working with the WebSphere MQ Server code.
If we are replicating between two servers, then we need to install WebSphere MQ Server on both servers. If we are installing WebSphere MQ Server on UNIX, then during the installation process a user ID and group called mqm are created. If we as a DBA want to issue MQ commands, then we need to get our user ID added to the mqm group.
Assuming that WebSphere MQ Server has been successfully installed, we now need to create the Queue Managers and the queues that are needed for Q replication. This section also includes tests that we can perform to check that the MQ installation and setup is correct. The following diagram shows the MQ objects that need to be created for unidirectional replication:
The following figure shows the MQ objects that need to be created for bidirectional replication:
There is a mixture of Local Queues (QLOCAL/QL) and Remote Queues (QREMOTE/QR), in addition to Transmission Queues (XMITQ) and channels.
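As a sketch of the source-side objects for unidirectional replication, the MQSC definitions below use the queue names from the figures; the Queue Manager names (QMA on the source, QMB on the target), the channel names, and the host and port are illustrative assumptions.

```shell
# Create and start the source Queue Manager.
crtmqm QMA
strmqm QMA

# Define the source-side queues and channels.
runmqsc QMA <<'EOF'
* Local administration queue that Q Capture reads
DEFINE QLOCAL('CAPA.ADMINQ')
* Local restart queue for Q Capture
DEFINE QLOCAL('CAPA.RESTARTQ')
* Transmission queue and remote definition of the Send Queue
DEFINE QLOCAL('QMB.XMITQ') USAGE(XMITQ)
DEFINE QREMOTE('CAPA.TO.APPB.SENDQ.REMOTE') RNAME('CAPA.TO.APPB.RECVQ') +
       RQMNAME('QMB') XMITQ('QMB.XMITQ')
* Sender and receiver channels to and from the target Queue Manager
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('targethost(1414)') XMITQ('QMB.XMITQ')
DEFINE CHANNEL('QMB.TO.QMA') CHLTYPE(RCVR) TRPTYPE(TCP)
EOF
```

The target Queue Manager needs the mirror-image definitions: a local CAPA.TO.APPB.RECVQ, a remote definition CAPA.ADMINQ.REMOTE pointing back at CAPA.ADMINQ, and the corresponding channels.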
Once we have successfully completed the installation and testing of WebSphere MQ, we can move on to the next layer—the Q replication layer.
The Q replication layer
This is the third and final layer, which comprises the following steps:
- Create the replication control tables on the source and target servers.
- Create the transport definitions. By this we mean that we need to tell Q replication what the source and target table names are, which rows/columns we want to replicate, and which Queue Managers and queues to use.
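The control tables can be created with the ASNCLP command-line program. A minimal sketch, assuming databases named SOURCEDB and TARGETDB and the default ASN schema:

```shell
# Write an ASNCLP script and feed it to the asnclp command processor.
cat > crtctl.asnclp <<'EOF'
ASNCLP SESSION SET TO Q REPLICATION;
SET SERVER CAPTURE TO DB SOURCEDB;
SET SERVER TARGET TO DB TARGETDB;
SET CAPTURE SCHEMA SOURCE ASN;
SET APPLY SCHEMA ASN;
CREATE CONTROL TABLES FOR CAPTURE SERVER;
CREATE CONTROL TABLES FOR APPLY SERVER;
EOF
asnclp -f crtctl.asnclp
```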
Some of the terms that are covered in this section are:
- Logical table
- Replication Queue Map
- Q subscription
- Subscription group (SUBGROUP)
What is a logical table?
In Q replication, we have the concept of a logical table, which is the term used to refer to both the source and target tables in one statement. An example in a peer-to-peer three-way scenario is shown in the following diagram, where the logical table is made up of tables TABA, TABB, and TABC:
What is a Replication/Publication Queue Map?
The first part of the transport definitions mentioned earlier is a definition called a Queue Map, which identifies the WebSphere MQ queues on both servers that are used to communicate between the servers. In Q replication, the Queue Map is called a Replication Queue Map, and in Event Publishing the Queue Map is called a Publication Queue Map.
Let’s first look at Replication Queue Maps (RQMs). RQMs are used by Q Capture and Q Apply to communicate: Q Capture sends Q Apply rows to apply, and Q Apply sends administration messages back to Q Capture. Each RQM is made up of three queues: a queue on the local server called the Send Queue (SENDQ), and two queues on the remote server, a Receive Queue (RECVQ) and an Administration Queue (ADMINQ), as shown in the preceding figures. An RQM can contain only one each of SENDQ, RECVQ, and ADMINQ.
The SENDQ is the queue that Q Capture uses to send source data and informational messages.
The RECVQ is the queue that Q Apply reads for transactions to apply to the target table(s).
The ADMINQ is the queue that Q Apply uses to send control messages back to Q Capture.
So using the queues in the first “Queues” figure, the Replication Queue Map definition would be:
- Send Queue (SENDQ): CAPA.TO.APPB.SENDQ.REMOTE on Source
- Receive Queue (RECVQ): CAPA.TO.APPB.RECVQ on Target
- Administration Queue (ADMINQ): CAPA.ADMINQ.REMOTE on Target
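Using those queue names, the RQM could be created with an ASNCLP script along the following lines; the map name CAPA_TO_APPB and the database names are illustrative assumptions.

```shell
cat > crtrqm.asnclp <<'EOF'
ASNCLP SESSION SET TO Q REPLICATION;
SET SERVER CAPTURE TO DB SOURCEDB;
SET SERVER TARGET TO DB TARGETDB;
SET CAPTURE SCHEMA SOURCE ASN;
SET APPLY SCHEMA ASN;
CREATE REPLQMAP CAPA_TO_APPB
  USING ADMINQ "CAPA.ADMINQ.REMOTE"
        RECVQ  "CAPA.TO.APPB.RECVQ"
        SENDQ  "CAPA.TO.APPB.SENDQ.REMOTE";
EOF
asnclp -f crtrqm.asnclp
```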
Now let’s look at Publication Queue Maps (PQMs). PQMs are used in Event Publishing and are similar to RQMs, in that they define the WebSphere MQ queues needed to transmit messages between two servers. The big difference is that, because Event Publishing has no Q Apply component, a PQM is made up of only a Send Queue.
What is a Q subscription?
The second part of the transport definitions is a definition called a Q subscription, which defines a single source/target combination and which Replication Queue Map to use for this combination. We set up one Q subscription for each source/target combination.
Each Q subscription needs a Replication Queue Map, so we need to make sure we have one defined before trying to create a Q subscription. Note that if we are using the Replication Center, then we can choose to create a Q subscription even though an RQM does not exist; the wizard will walk us through creating the RQM at the point at which it is needed.
The structure of a Q subscription is made up of a source and target section, and we have to specify:
- The Replication Queue Map
- The source and target table
- The type of target table
- The type of conflict detection and action to be used
- The type of initial load, if any, that should be performed
If we define a Q subscription for unidirectional replication, then we can choose the name of the Q subscription—for any other type of replication we cannot.
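A unidirectional Q subscription could be created with ASNCLP roughly as follows; the subscription name, table names, and queue map name are illustrative assumptions, and the exact clause order can vary between versions.

```shell
cat > crtqsub.asnclp <<'EOF'
ASNCLP SESSION SET TO Q REPLICATION;
SET SERVER CAPTURE TO DB SOURCEDB;
SET SERVER TARGET TO DB TARGETDB;
SET CAPTURE SCHEMA SOURCE ASN;
SET APPLY SCHEMA ASN;
CREATE QSUB USING REPLQMAP CAPA_TO_APPB
  (SUBNAME "DEPT0001" DB2ADMIN.DEPT
   OPTIONS HAS LOAD PHASE I
   TARGET NAME DB2ADMIN.DEPT);
EOF
asnclp -f crtqsub.asnclp
```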
Q replication does not have the concept of a subscription set as there is in SQL Replication, where the subscription set holds all the tables which are related using referential integrity.
In Q replication, we have to ensure that all the tables that are related through referential integrity use the same Replication Queue Map, which will enable Q Apply to apply the changes to the target tables in the correct sequence.
In the following diagram, Q subscription 1 uses RQM1, Q subscription 2 also uses RQM1, and Q subscription 3 uses RQM3:
What is a subscription group?
A subscription group is the name for a collection of Q subscriptions that are involved in multi-directional replication, and is set using the SET SUBGROUP command.
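As a sketch for bidirectional replication, the subscription group is named in the ASNCLP script before the Q subscriptions are created; all names below (subgroup, databases, schemas, queue maps, and table) are illustrative assumptions.

```shell
cat > bidi.asnclp <<'EOF'
ASNCLP SESSION SET TO Q REPLICATION;
SET SUBGROUP "SG001";
SET SERVER MULTIDIR TO DB DBA;
SET SERVER MULTIDIR TO DB DBB;
SET MULTIDIR SCHEMA DBA.ASN;
SET MULTIDIR SCHEMA DBB.ASN;
SET CONNECTION SOURCE DBA.ASN TARGET DBB.ASN REPLQMAP RQMA2B;
SET CONNECTION SOURCE DBB.ASN TARGET DBA.ASN REPLQMAP RQMB2A;
SET TABLES (DBA.ASN.DB2ADMIN.DEPT);
CREATE QSUB SUBTYPE B;
EOF
asnclp -f bidi.asnclp
```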
Q subscription activation
In unidirectional, bidirectional, and peer-to-peer two-way replication, when Q Capture and Q Apply start, then the Q subscription can be automatically activated (if that option was specified). For peer-to-peer three-way replication and higher, when Q Capture and Q Apply are started, only a subset of the Q subscriptions of the subscription group starts automatically, so we need to manually start the remaining Q subscriptions.
The relationship between the components
The following diagram shows the relationship between source/target tables, Replication Queue Maps (RQMs), Publication Queue Maps (PQMs), and Q subscriptions:
Here are some questions and answers about the Q replication components:
Can two separate Q Captures write to the same Send Queue? No.
Can two Q Subscriptions share a RQM? Yes.
Can we use the same Send Queue for XML publications and replication? No.
Can two RQMs share the same Receive Queue and Send Queue? No.
Can two RQMs share the same Administration Queue? Yes.
The Q Capture and Q Apply programs
The Q Capture and Q Apply programs form the heart of Q replication, as it is these two programs that read transactions from the source system and apply them to the target tables.
In this section we will examine at a deeper level how these programs work and communicate with each other.
Q Capture internals
Let’s review what Q Capture does. Essentially, Q Capture reads transactions for tables that it is interested in from the DB2 log by its transaction thread calling the DB2 log interface API db2ReadLog. It builds complete transactions in memory until it detects a commit or rollback statement in the log. If it detects a rollback statement, then the transaction is flushed from memory. If it detects a commit statement, then Q Capture places the transaction in compressed XML format onto a WebSphere MQ queue called a Send Queue. If the transaction is large, then Q Capture will break the transaction up into smaller chunks before putting them onto the Send Queue.
Once Q Capture puts a transaction onto the Send Queue, it records the fact in its Restart Queue, so that if Q Capture is stopped (meaning that any in-flight transactions in memory will be lost) and then restarted, it knows the log sequence number (LSN) of the last record it put onto the Send Queue, and will request log information from that point onwards.
The work that we have just described that Q Capture does is performed by Q Capture threads. Q Capture consists of the following threads:
- Administration: This thread handles control messages that are put by Q Apply or a user application on the Administration Queue, and is also used for error logging and monitoring.
- Hold1: This thread prevents two Q Captures with the same schema from running on a server, and handles signals sent to Q Capture.
- Prune: This thread deletes old data from some of the Q Capture control tables.
- Transaction: This thread reads the DB2 recovery log, captures changes for subscribed tables, and rebuilds log records into transactions in memory before passing them to the worker thread. For Oracle sources, the transaction thread starts the Oracle LogMiner utility, reads from the V$LOGMNR_CONTENTS view to find changes for subscribed tables, and stops LogMiner.
- Worker: This thread receives completed transactions from the Transaction thread, turns transactions into WebSphere MQ messages, and puts the messages onto Send Queues.
Administration, Prune, and Worker threads are typically in running or resting states. Hold1 threads are typically in a waiting state. If the Worker thread is in a running state but data is not being captured, check the IBMQREP_CAPTRACE table for messages and possible reasons.
The state of these threads can be any of the following:
- The thread exists but cannot start.
- The thread is initialized but cannot work.
- The thread is sleeping and will wake up when there is work to do.
- The thread is actively processing.
- The thread started but cannot initialize. Investigate potential system resource problems, such as too many threads or not enough processing power.
- The thread is not running. Check for messages in the IBMQREP_CAPTRACE, IBMQREP_APPLYTRACE, or IBMSNAP_MONTRACE control tables.
The reading of the DB2 log and the gathering of committed transactions is asynchronous to the rest of Q Capture processing and is performed by the transaction thread.
Let’s quickly look at Q Capture memory usage. There are three main areas of Q Capture memory usage:
- To store information about what source data we want to replicate or publish.
- To store information about Q subscriptions or Q publications. Each of these consumes a maximum of 1000 bytes of memory.
- To build the transactions from the DB2 logs until a commit or rollback is encountered. Clearly, if we have large transactions with few commit/rollback points, then Q Capture will require a large amount of memory to handle this. If Q Capture runs out of assigned memory, it will spill part of the transaction data to a file.
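The memory available for building transactions is controlled by the memory_limit parameter (in MB) when Q Capture is started; the server name, schema, and value below are illustrative.

```shell
# Start Q Capture with a 64 MB limit for building transactions in memory;
# beyond this limit, transaction data spills to a file.
asnqcap capture_server=SOURCEDB capture_schema=ASN memory_limit=64
```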
Once Q Capture puts a transaction onto the Send Queue, its processing is complete. We now turn our attention to Q Apply processing.
Q Apply internals
For each Send Queue that Q Capture puts messages onto, there is a corresponding and connected Receive Queue that Q Apply reads from, and Q Apply starts a browser thread for each Receive Queue. This browser thread launches one or more agents to process the transactions on the Receive Queue. These agents try to work in parallel, to maximize throughput. Note that transactions affecting the same rows in the same table are always handled in order by a single agent. In addition, transactions affecting referential integrity between tables are also processed by a single agent.
The number of agents available to Q Apply is not determined by a Q Apply parameter, but is set for each Replication Queue Map using the num_apply_agents parameter. A value higher than 1 for num_apply_agents allows Q Apply to process transactions in parallel.
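Because num_apply_agents is stored per Receive Queue, it can also be changed after the RQM is created by updating the IBMQREP_RECVQUEUES control table; the schema, queue name, and value below are illustrative assumptions.

```shell
db2 "CONNECT TO TARGETDB"
# Allow up to 8 agents to apply transactions from this Receive Queue in parallel.
db2 "UPDATE ASN.IBMQREP_RECVQUEUES SET NUM_APPLY_AGENTS = 8
     WHERE RECVQ = 'CAPA.TO.APPB.RECVQ'"
# Q Apply must be restarted (or reinitialized) to pick up the new value.
```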
The various Q Apply threads are as follows:
- Agent: This thread rebuilds transactions in memory and applies them to targets. We set the number of agent threads that will be used for parallel processing of transactions when we create a Replication Queue Map.
- Browser: This thread reads transaction messages from a Receive Queue, maintains dependencies between transactions, and launches one or more agent threads. Q Apply launches one browser thread for each Receive Queue.
- Housekeeping: This thread maintains the Q Apply control tables by saving and deleting data.
- Monitor: This thread logs information about Q Apply’s performance into the IBMQREP_APPLYMON control table.
- Spill agent: This thread rebuilds transactions that were held in a Spill Queue and applies them to targets. Spill agents terminate after the Spill Queue is emptied and the Q subscription becomes active.
Agent, Browser, and Housekeeping threads are typically in a running state. Check the IBMQREP_APPLYTRACE table if agent threads are in a running state but data is not being applied.
The state of these threads is similar to the description for Q Capture in the previous section.
Finally, let’s look at the memory requirements for Q Apply. As with Q Capture, Q Apply consumes a maximum of 1000 bytes of memory for each active Q subscription. The other major area of memory usage is when Q Apply rebuilds transactions from the Receive Queues before applying them to the target tables.
How do Q Capture and Q Apply communicate?
In the previous sections, we talked about how Q Capture and Q Apply put messages onto and read from various WebSphere MQ queues. Q Capture and Q Apply need to be able to communicate with each other, for example to exchange information on which records have been processed. This process is shown in the following diagram:
Q Capture communicates with Q Apply by putting messages onto its Send Queue (which Q Apply sees as a Receive Queue). Q Apply communicates back to Q Capture using its Administration Queue.
There are two types of message sent: Data messages and Informational messages. Let’s discuss these in more detail.
- Data messages: There are three types of data messages, and these messages contain the data/operation:
- Large object (LOB): This contains some or all of the data from a LOB value in the source table. LOB messages are sent separately from the transaction messages and row operation messages that the LOB values belong to if they are not inlined.
- Row operation: This contains a single insert, delete, or update operation to a source table. It also contains commit information about the database transaction that this row is part of.
- Transaction: This contains one or more insert, delete, or update operations to a source table. These operations belong to the same database transaction. It also contains commit information for the transaction.
- Informational messages: There are six informational messages and they describe the action being transmitted:
- Add column: This contains information about a column that was added to an existing subscription.
- Error report: This tells the user application that Q Capture encountered a publication error.
- Heartbeat: This tells the user application that Q Capture is still running when it has no data messages to send.
- Load done received: This acknowledges that Q Capture received the message that the target table is loaded.
- Subscription deactivated: This tells the user application that Q Capture deactivated a subscription.
- Subscription schema: This contains information about the source table and its columns. It also contains data-sending options, Send Queue name, and information about Q Capture and the source database.
- XML control messages: There are four control messages which provide information to Q Capture:
- Activate subscription: This requests that Q Capture activates a subscription.
- Deactivate subscription: This requests that Q Capture deactivates a subscription.
- Invalidate Send Queue: This requests that Q Capture invalidates a Send Queue by performing the queue error action that was specified.
- Load done: This tells Q Capture that the target table for a subscription is loaded.
In this article, we looked at the DB2 database layer, the WebSphere MQ layer, and the Q replication layer that make up a Q replication solution. We introduced the terms Replication/Publication Queue Map, Q subscription, and subscription group and showed their relationship with each other. We then moved on to look at the internals of the Q Capture and Q Apply programs and finished with how they communicate.