The following recipes will be covered in this chapter:
Installing Oracle GoldenGate in an x86_64 Linux-based environment
Installing Oracle GoldenGate in a Windows environment
Enabling supplemental logging in the source database
Supported datatypes in Oracle GoldenGate
Preparing the source database for GoldenGate setup
Preparing the target database for GoldenGate setup
Setting up a Manager process
Setting up a Classic Capture Extract process
Setting up an Integrated Capture Extract process
Setting up a Datapump process
Setting up a Replicat process
Database replication is always an interesting challenge. It requires a complex setup and strong knowledge of the underlying infrastructure, databases, and the data held in them to replicate the data efficiently without much impact on the enterprise system. Oracle GoldenGate gains a lot of its popularity from the simplicity in its setup. In this chapter we will cover the basic steps to install GoldenGate and set up various processes.
This recipe will show you how to install Oracle GoldenGate in an x86_64 Linux-based environment.
In order to install Oracle GoldenGate, you must first download the binaries from the Oracle Technology Network website for your Linux platform. We have downloaded Oracle GoldenGate Version 11.2.0.1.0.1 in this recipe. Ensure that you check the checksum of the file once you have downloaded it.
Tip
You can find the Oracle GoldenGate binaries for x86_64 Linux at http://www.oracle.com/technetwork/middleware/GoldenGate/downloads/index.html?ssSourceSiteId=ocomen.
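Verifying the download amounts to comparing a locally computed checksum against the one published on the download page. The following is a minimal sketch; the file name here is a stand-in created for demonstration, and in practice `EXPECTED` would be the checksum string copied from the download page rather than computed locally:

```shell
# Demo stand-in for the media pack; in practice this is the zip you downloaded
# and EXPECTED is the checksum published alongside it on the download page.
printf 'media pack contents' > ogg_media_pack.zip
EXPECTED=$(md5sum ogg_media_pack.zip | awk '{print $1}')

# Compute the checksum of the local file and compare.
ACTUAL=$(md5sum ogg_media_pack.zip | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - do not install from this file" >&2
fi
```

If the published value is a SHA checksum rather than MD5, substitute `sha1sum` or `sha256sum` accordingly.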
Oracle GoldenGate binaries are installed in a directory called GoldenGate Home. This directory should be owned by the OS user (ggate) which will own the GoldenGate binaries. This user must be a member of the dba group. After you have downloaded the binaries, you need to uncompress the media pack file by using the unzip utility as given in the following steps:
1. Log in to the server using the ggate account.
2. Create a directory with this user as shown in the following command:
   mkdir installation_directory
3. Change the directory to the location where you have copied the media pack file and unzip it. The media pack contains the readme files and the GoldenGate binaries file. The GoldenGate binaries file for the 64-bit x86 Linux platform is called fbs_ggs_Linux_x64_ora11g_64bit.tar.
4. Extract the contents of this file into the GoldenGate Home directory as shown in the following command:
   tar -xvf fbs_ggs_Linux_x64_ora11g_64bit.tar -C installation_directory
5. Create GoldenGate directories as follows:
   cd installation_directory
   ./ggsci
   GGSCI> CREATE SUBDIRS
   GGSCI> EXIT
Oracle provides GoldenGate binaries in a compressed format. In order to install the binaries you unzip the compressed file, and then expand the archive file into a required directory. This unpacks all the binaries. However, GoldenGate also requires some important subdirectories under GoldenGate Home
which are not created by default. These directories are created using the CREATE SUBDIRS
command. The following is the list of the subdirectories that get created with this command:
Subdirectory | Contents
---|---
dirprm | It contains parameter files
dirrpt | It contains report files
dirchk | It contains checkpoint files
dirpcs | It contains process status files
dirsql | It contains SQL scripts
dirdef | It contains database definitions
dirdat | It contains trail files
dirtmp | It contains temporary files
dirout | It contains output files
In this recipe we will go through the steps that should be followed to install the GoldenGate binaries in the Windows environment.
In order to install Oracle GoldenGate, you must first download the binaries from the Oracle Technology Network website for your Windows platform. We have downloaded GoldenGate Version 11.2.0.1.0.1 in this recipe. Ensure that you check the checksum of the file once you have downloaded it.
Tip
You can find the Oracle GoldenGate binaries for x86_64 Windows at http://www.oracle.com/technetwork/middleware/GoldenGate/downloads/index.html?ssSourceSiteId=ocomen.
Oracle GoldenGate binaries are installed in a directory called GoldenGate Home. After you have downloaded the binaries, you need to uncompress the media pack file by using the unzip utility:
1. Log in to the server as the Administrator user.
2. Create a directory for GoldenGate Home.
3. Unzip the contents of the media pack file to the GoldenGate Home directory.
4. Create GoldenGate directories as shown in the following commands:
   cd installation_directory
   ggsci
   GGSCI> CREATE SUBDIRS
   GGSCI> EXIT
Oracle provides GoldenGate binaries in a compressed format. The installation involves unzipping the file into a required directory. This unpacks all the binaries. However, GoldenGate also requires some important subdirectories under GoldenGate Home
which are not created by default. These directories are created using the CREATE SUBDIRS
command. The following is the list of the subdirectories that get created with this command:
Subdirectory | Contents
---|---
dirprm | It contains parameter files
dirrpt | It contains report files
dirchk | It contains checkpoint files
dirpcs | It contains process status files
dirsql | It contains SQL scripts
dirdef | It contains database definitions
dirdat | It contains trail files
dirtmp | It contains temporary files
dirout | It contains output files
Oracle GoldenGate replication can be used to continuously replicate the changes from the source database to the target database. GoldenGate mines the redo information generated in the source database to extract the changes. In order to update the correct rows in the target database, Oracle needs sufficient information to be able to identify them uniquely. Since it relies on the information extracted from the redo buffers, it requires extra information columns to be logged into the redo records generated in the source database. This is done by enabling supplemental logging in the source database. This recipe explains how to enable supplemental logging in the source database.
Oracle GoldenGate requires supplemental logging to be enabled at the database level and table level. Use the following steps to enable the required supplemental logging:
1. Enable database supplemental logging through sqlplus as follows:
   ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
2. Switch a database LOGFILE to bring the changes into effect:
   ALTER DATABASE SWITCH LOGFILE;
3. From the GoldenGate Home, log in to GGSCI:
   ./ggsci
4. Log in to the source database from GGSCI using a user which has privileges to alter the source schema tables as shown in the following command:
   GGSCI> DBLOGIN USERID <USER> PASSWORD <PW>
5. Enable supplemental logging at the table level as follows:
   GGSCI> ADD TRANDATA <SCHEMA>.<TABLE_NAME>
Repeat step 5 for all the tables that you want to replicate using GoldenGate.
Supplemental logging enables the database to add extra columns in the redo data that is required by GoldenGate to correctly identify the rows in the target database. We must enable database-level minimum supplemental logging before we can enable it at the table level. When we enable it at the table level, a supplemental log group is created for the table that consists of the columns on which supplemental logging is enabled. The columns which form a part of this group are decided based on the key constraints present on the table. These columns are decided based on the following priority order:
Primary key
First unique key alphanumerically with no nullable columns
First unique key alphanumerically with nullable columns
All columns
GoldenGate only considers unique keys which don't have any virtual columns, any user-defined types, or any function-based columns. We can also manually specify which columns we want to be a part of the supplemental log group.
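To specify the columns manually, the ADD TRANDATA command accepts a COLS clause. The following is a sketch using the example SCOTT schema; verify the exact clause syntax against the GGSCI reference for your GoldenGate version:

```
GGSCI> DBLOGIN USERID <USER> PASSWORD <PW>
GGSCI> ADD TRANDATA SCOTT.EMP, COLS (EMPNO)
```

This creates the supplemental log group for SCOTT.EMP using only the EMPNO column instead of the key columns GoldenGate would otherwise choose.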
Tip
You can enable supplemental logging on all tables of a schema using the following single command:
GGSCI> ADD TRANDATA <SCHEMA>.*
If possible, do create a primary key in each source and target table that is part of the replication. The pseudo key consisting of all columns, created by GoldenGate, can be quite inefficient.
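To find replicated tables that lack a primary key before GoldenGate falls back to an all-columns pseudo key, a dictionary query along these lines can help (SCOTT is the example schema; run it as a suitably privileged user):

```sql
-- Tables owned by SCOTT that have no primary key constraint
SELECT t.owner, t.table_name
FROM   dba_tables t
WHERE  t.owner = 'SCOTT'
AND    NOT EXISTS (SELECT 1
                   FROM   dba_constraints c
                   WHERE  c.owner = t.owner
                   AND    c.table_name = t.table_name
                   AND    c.constraint_type = 'P');
```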
There are two ways to enable supplemental logging. The first method is to enable it using GGSCI with the ADD TRANDATA command. The second method is to use sqlplus and run the ALTER TABLE ADD SUPPLEMENTAL LOG DATA command. The latter method is more flexible and allows you to specify the name of the supplemental log group. However, when you use Oracle GoldenGate to add supplemental logging, it creates supplemental log group names using the format GGS_<TABLE_NAME>_<OBJECT_NUMBER>. If the overall supplemental log group name is longer than 30 characters, GoldenGate truncates the table name as required. Oracle support recommends the first method for enabling supplemental logging on objects to be replicated using Oracle GoldenGate. The GGS_* supplemental log group format enables GoldenGate to quickly identify its supplemental log groups in the database.
If you are planning to use GoldenGate to capture all transactions in the source database and convert them into INSERTs in the target database, for example, for reporting/auditing purposes, you'll need to enable supplemental logging on all columns of the source database tables.
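From sqlplus, one way to do this per table is Oracle's ALL COLUMNS clause. The table below is from the example SCOTT schema:

```sql
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

With this in place, every column value of each changed row is written to the redo records, at the cost of additional redo volume.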
For information about how to replicate changes to a target database and maintain an audit record, refer to the recipe Mapping the changes to a target table and storing the transaction history in a history table in Chapter 4, Mapping and Manipulating Data.
Oracle GoldenGate has some restrictions in terms of what it can replicate. With every new release, Oracle is adding new datatypes to the list of what is supported. The list of the datatypes of the objects that you are planning to replicate should be checked against the list of supported datatypes for the GoldenGate version that you are planning to install.
You should have identified the various datatypes of the objects that you plan to replicate.
The following is a high-level list of the datatypes that are supported by Oracle GoldenGate v11.2.1.0.1:
NUMBER
BINARY FLOAT
BINARY DOUBLE
CHAR
VARCHAR2
LONG
NCHAR
NVARCHAR2
RAW
LONG RAW
DATE
TIMESTAMP
CLOB
NCLOB
BLOB
SECUREFILE and BASICFILE LOBs
XML datatypes
User-defined/Abstract datatypes
SDO_GEOMETRY, SDO_TOPO_GEOMETRY, and SDO_GEORASTER
There are some additional details that one needs to consider while evaluating the supported datatypes for a GoldenGate version. For example, the user-defined datatypes are only supported if the source and target tables have the same structures. Both Classic and Integrated Capture modes support XML types stored as XML, CLOB, and binary XML. However, XML type tables stored as Object Relational are only supported in Integrated Capture mode.
The support restrictions apply to a few other factors apart from the datatypes. Some of these are as follows:
INSERTs, UPDATEs, and DELETEs are supported on regular tables, IOTs, clustered tables, and materialized views
Tables created as EXTERNAL are not supported
Extraction from compressed tables is supported only in Integrated Capture mode
Materialized views created with ROWID are not supported
Oracle GoldenGate supports replication of sequences only in uni-directional mode
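A quick inventory of potentially problematic objects can be pulled from the data dictionary before deciding on the capture mode. The following is a sketch against the example SCOTT schema; extend the datatype list to match the restrictions of your GoldenGate version:

```sql
-- Columns with datatypes that commonly need a closer look
SELECT table_name, column_name, data_type
FROM   dba_tab_columns
WHERE  owner = 'SCOTT'
AND    data_type IN ('LONG', 'LONG RAW', 'BFILE', 'ROWID', 'UROWID');

-- Compressed tables (extraction supported only in Integrated Capture mode)
SELECT table_name
FROM   dba_tables
WHERE  owner = 'SCOTT'
AND    compression = 'ENABLED';
```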
The Oracle GoldenGate architecture consists of an Extract process on the source side. This process mines the redo information and extracts the changes occurring in the source database objects. These changes are then written to the trail files. There are two types of Extract processes: Classic Capture and Integrated Capture. The Extract process requires some setup to be done in the source database. Some of the steps in the setup differ depending on the type of the Extract process. GoldenGate requires a database user to be created in the source database and various privileges to be granted to this user. This recipe explains how to set up a source database for GoldenGate replication.
You must select a database user ID for the source database setup, for example, GGATE_ADMIN.
Run the following steps in the source database to set up the GoldenGate user as follows:
sqlplus sys/**** as sysdba

CREATE USER GGATE_ADMIN identified by GGATE_ADMIN;
GRANT CREATE SESSION, ALTER SESSION to GGATE_ADMIN;
GRANT ALTER SYSTEM TO GGATE_ADMIN;
GRANT CONNECT, RESOURCE to GGATE_ADMIN;
GRANT SELECT ANY DICTIONARY to GGATE_ADMIN;
GRANT FLASHBACK ANY TABLE to GGATE_ADMIN;
GRANT SELECT ANY TABLE TO GGATE_ADMIN;
GRANT SELECT ON DBA_CLUSTERS TO GGATE_ADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO GGATE_ADMIN;
GRANT SELECT ANY TRANSACTION To GGATE_ADMIN;
The following steps are only required for Integrated Capture Extract (Version 11.2.0.2 or higher):
EXEC DBMS_GoldenGate_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');
GRANT SELECT ON SYS.V_$DATABASE TO GGATE_ADMIN;
The following steps are only required for Integrated Capture Extract (Version 11.2.0.1 or earlier):
EXEC DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');
GRANT BECOME USER TO GGATE_ADMIN;
GRANT SELECT ON SYS.V_$DATABASE TO GGATE_ADMIN;
Set up a TNS entry for the source database in $ORACLE_HOME/network/admin/tnsnames.ora.
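A minimal tnsnames.ora entry might look like the following. The alias DBORATEST matches the connection string used later in this chapter; the host, port, and service name are site-specific placeholders:

```
DBORATEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prim1-ol6-112.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dboratest)
    )
  )
```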
The preceding commands can be used to set up the GoldenGate user in the source database. The Integrated Capture mode requires some additional privileges as it needs to interact with the database log mining server.
You will notice that in the previous commands, we have granted SELECT ANY TABLE to the GGATE_ADMIN user. In production environments, where least-required-privileges policies are followed, it is quite unlikely that such a setup would be approved by the compliance team. In such cases, instead of granting this privilege, you can grant the SELECT privilege on the individual tables that are a part of the source replication configuration. You can use dynamic SQL to generate such commands. In our example schema database, we can generate the commands for all tables owned by the user SCOTT as follows:
select 'GRANT SELECT ON '||owner||'.'||table_name||' to GGATE_ADMIN;' COMMAND
from dba_tables where owner='SCOTT';

COMMAND
------------------------------------------------------------------
GRANT SELECT ON SCOTT.DEPT to GGATE_ADMIN;
GRANT SELECT ON SCOTT.EMP to GGATE_ADMIN;
GRANT SELECT ON SCOTT.BONUS to GGATE_ADMIN;
GRANT SELECT ON SCOTT.SALGRADE to GGATE_ADMIN;
In this recipe we saw the steps required to set up the GoldenGate user in the database. The Extract process requires various privileges to be able to mine the changes from the redo data. At this stage it's worth discussing the two types of Extract processes and the differences between them.
The Classic Capture mode is the traditional Extract process that has been around for a while. In this mode, GoldenGate accesses the database redo logs (and archive logs for older transactions) to capture the DML changes occurring on the objects specified in the configuration files. For this, at the OS level, the GoldenGate user must be a member of the OS group which owns the database redo logs. If the redo logs of the source database are stored in an ASM diskgroup, this capture method reads them from there. This capture mode is available for other RDBMS as well. However, there are some datatypes that are not supported in Classic Capture mode. One of the biggest limitations of the Classic Capture mode is its inability to read data from compressed tables/tablespaces.
In the case of the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs). An LCR is a message with a specific format that describes a database change. This mode does not require any special setup for databases using ASM, transparent data encryption, or Oracle RAC. This feature is only available for databases on Version 11.2.0.3 or higher. This capture mode supports extracting data from source databases that use compression. It also supports various object types which were previously not supported by Classic Capture.
Integrated Capture can be configured in an online or downstream mode. In the online mode, the log miner database is configured in the source database itself. In the downstream mode, the log miner database is configured in a separate database which receives archive logs from the source database. This mode offloads the log mining load from the source database and is quite suitable for very busy production databases. If you want to use the Integrated Capture mode with a source database Version 11.2.0.2 or earlier, you must configure the Integrated Capture mode in downstream capture topology, and the downstream mining database must be on Version 11.2.0.3 or higher.
Refer to the recipe Setting up an Integrated Capture Extract process later in this chapter and the recipe Creating an Integrated Capture with a downstream database for compressed tables in Chapter 7, Advanced Administration Tasks – I.
On the target side of the GoldenGate architecture, the collector processes receive the trail files shipped by the Extract/Datapump processes from the source environment. The collector process receives these files and writes them locally on the target server. For each row that gets updated in the source database, the Extract process generates a record and writes it to the trail file. The Replicat process in the target environment reads these trail files and applies the changes to the target database using native SQL calls. To be able to apply these changes to the target tables, GoldenGate requires a database user to be set up in the target database with some privileges on the target objects. The Replicat process also needs to maintain its status in a table in the target database so that it can resume in case of any failures. This recipe explains the steps required to set up a GoldenGate user in the target database.
You must select a database user ID for the target database setup, for example, GGATE_ADMIN. The GoldenGate user also requires a table in the target database to maintain its status, so it needs some quota assigned on a tablespace to be able to create a table. You might want to create a separate tablespace, grant quota on it, and assign it as the default for the GGATE_ADMIN user. We will assign a GGATE_ADMIN_DAT tablespace to the GGATE_ADMIN user in this recipe.
Run the following steps in the target database to set up a GoldenGate user:
sqlplus sys/**** as sysdba

CREATE USER GGATE_ADMIN identified by GGATE_ADMIN DEFAULT TABLESPACE GGATE_ADMIN_DAT;
ALTER USER GGATE_ADMIN QUOTA UNLIMITED ON GGATE_ADMIN_DAT;
GRANT CREATE SESSION, ALTER SESSION to GGATE_ADMIN;
GRANT CONNECT, RESOURCE to GGATE_ADMIN;
GRANT SELECT ANY DICTIONARY to GGATE_ADMIN;
GRANT SELECT ANY TABLE TO GGATE_ADMIN;
GRANT INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE TO GGATE_ADMIN;
GRANT CREATE TABLE TO GGATE_ADMIN;
You can use these commands to set up a GoldenGate user in the target database. The GoldenGate user in the target database requires access to the database plus UPDATE/INSERT/DELETE privileges on the target tables to apply the changes. In the preceding commands, we have granted the SELECT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE, and INSERT ANY TABLE privileges to the GGATE_ADMIN user. However, if your organization follows a least-required-privileges policy for production databases, you will need to grant these privileges on the replicated target tables individually. If the number of replicated target tables is large, you can use dynamic SQL to generate such commands. In our example demo database, we can generate these commands for the SCOTT schema objects as follows:
select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||owner||'.'||table_name||' to GGATE_ADMIN;' COMMAND
from dba_tables where owner='SCOTT';

COMMAND
------------------------------------------------------------------
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.DEPT to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.EMP to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.SALGRADE to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.BONUS to GGATE_ADMIN;
The replicated changes are applied to the target database on a row-by-row basis. The Replicat process needs to maintain its status so that it can be resumed in case of failure. The checkpoints can be maintained in a database table or in a file on disk. The best practice is to create a Checkpoint table and use it to maintain the replicat status. This also enhances performance, as the replicat applies the changes to the database using an asynchronous COMMIT with the NOWAIT option. If you do not use a Checkpoint table, the replicat maintains the checkpoint in a file and applies the changes to the database using a synchronous COMMIT with the WAIT option.
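A Checkpoint table can be created from GGSCI once you are logged in to the target database. The following is a sketch; the connection alias TGORATEST and the table name GG_CHECKPOINT are assumptions for illustration:

```
GGSCI> DBLOGIN USERID GGATE_ADMIN@TGORATEST, PASSWORD ******
GGSCI> ADD CHECKPOINTTABLE GGATE_ADMIN.GG_CHECKPOINT
```

Adding the line CHECKPOINTTABLE GGATE_ADMIN.GG_CHECKPOINT to the GLOBALS file makes this the default checkpoint table for all replicats in the GoldenGate instance.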
The Manager process is a key process of a GoldenGate configuration. This process is the root of the GoldenGate instance and it must exist at each GoldenGate site. It must be running on each system in the GoldenGate configuration before any other GoldenGate processes can be started. This recipe explains how to create a GoldenGate Manager process in a GoldenGate configuration.
Before setting up a Manager process, you must have installed GoldenGate binaries. A Manager process requires a port number to be defined in its configuration. Ensure that you have chosen the port to be used for the GoldenGate manager instance that you are going to set up.
In order to configure a Manager process, you need to create a configuration file. The following are the steps to create a parameter file for the Manager process:
From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI):
./ggsci
Edit the Manager process configuration as follows:
EDIT PARAMS MGR
This command will open an editor window. You need to add the manager configuration parameters in this window as follows:
PORT <PORT NO>
DYNAMICPORTLIST <specification>
AUTOSTART ER*
AUTORESTART ER*, RETRIES 3, WAITMINUTES 3
PURGEOLDEXTRACTS <specification>
For example:
PORT 7809
DYNAMICPORTLIST 7810-7820, 7830
AUTOSTART ER t*
AUTORESTART ER t*, RETRIES 4, WAITMINUTES 4
PURGEOLDEXTRACTS /u01/app/ggate/dirdat/tt*, USECHECKPOINTS, MINKEEPHOURS 2
Save the file and exit the editor window.
Start the Manager process by using the following code:
GGSCI> START MGR
All GoldenGate processes use a parameter file for configuration. In these files various parameters are defined. These parameters control the way the process functions. The steps to create the Manager process are broadly described as follows:
Log in to the GoldenGate command line interface.
Create a parameter file.
Start the Manager process.
When you start the Manager process you will get the following output:
GGSCI (prim1-ol6-112.localdomain) 2> start mgr
Manager started.
You can check the status of the Manager process using the status command as follows:
GGSCI (prim1-ol6-112.localdomain) 3> status mgr
Manager is running (IP port prim1-ol6-112.localdomain.7809).
The Manager process performs the following administrative and resource management functions:
Monitor and restart Oracle GoldenGate processes
Issue threshold reports, for example, when throughput slows down or when synchronization latency increases
Maintain trail files and logs
Report errors and events
Receive and route requests from the user interface
The parameters specified in the preceding configuration are defined as follows:
PORT: This is the port used by the Manager process itself.
DYNAMICPORTLIST: This is the range of ports to be used by the other processes in the GoldenGate instance, for example, the Extract, Datapump, Replicat, and Collector processes.
AUTOSTART ER*: This starts the GoldenGate processes when the Manager process starts.
AUTORESTART ER*: This restarts a GoldenGate process in case it fails. The RETRIES option controls the maximum number of restart attempts and the WAITMINUTES option controls the wait interval between restart attempts in minutes.
PURGEOLDEXTRACTS: This configures the automatic maintenance of GoldenGate trail files. The deletion criteria are specified using MINKEEPHOURS/MINKEEPFILES. The GoldenGate Manager process deletes the old trail files which fall outside these criteria.
The Manager process can be configured to perform some more administrative tasks. The following are some other key parameters that can be added to the Manager process configuration:
STARTUPVALIDATIONDELAY (secs): Use this parameter to set a delay in seconds after which the Manager process checks that the processes have started after it starts up itself.
LAGREPORT: The Manager process writes the lag information of a process to its report file. This parameter controls the interval after which the Manager process performs this function.
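Pulling the preceding parameters together, a complete manager parameter file might look like the following sketch; the port numbers, trail path, and intervals are examples only, and the minutes-based form of the lag reporting parameter is assumed here:

```
PORT 7809
DYNAMICPORTLIST 7810-7820, 7830
AUTOSTART ER t*
AUTORESTART ER t*, RETRIES 4, WAITMINUTES 4
PURGEOLDEXTRACTS /u01/app/ggate/dirdat/tt*, USECHECKPOINTS, MINKEEPHOURS 2
STARTUPVALIDATIONDELAY 5
LAGREPORTMINUTES 30
```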
A GoldenGate Classic Capture Extract process runs on the source system. This process can be configured for initially loading the source data and for continuous replication. This process reads the redo logs in the source database and looks for changes in the tables that are defined in its configuration file. These changes are then written into a buffer in the memory. When the extract reads a commit command in the redo logs, the changes for that transaction are flushed to the trail files on disk. In case it encounters a rollback statement for a transaction in the redo log, it discards the changes from the memory. This type of Extract process is available on all platforms which GoldenGate supports. This process cannot read the changes for compressed objects. In this recipe you will learn how to set up a Classic Capture process in a GoldenGate instance.
Before adding the Classic Capture Extract process, ensure that you have completed the following steps in the source database environment:
Enabled database minimum supplemental logging.
Enabled supplemental logging for tables to be replicated.
Set up a manager instance.
Created a directory for the source trail files.
Decided a two-letter initial for naming the source trail files.
The following are the steps to configure a Classic Capture Extract process in the source database:
From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:
./ggsci
Edit the Extract process configuration as follows:
EDIT PARAMS EGGTEST1
This command will open an editor window. You need to add the extract configuration parameters in this window as follows:
EXTRACT <EXTRACT_NAME>
USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
EXTTRAIL <specification>
TABLE <replicated_table_specification>;
For example:
EXTRACT EGGTEST1
USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
EXTTRAIL /u01/app/ggate/dirdat/st
TABLE scott.*;
Save the file and exit the editor window.
Add the Classic Capture Extract to the GoldenGate instance as follows:
ADD EXTRACT <EXTRACT_NAME>, TRANLOG, <BEGIN_SPEC>
For example:
ADD EXTRACT EGGTEST1, TRANLOG, BEGIN NOW
Add the local trail to the Classic Capture configuration as follows:
ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1
Start the Classic Capture Extract process as follows:
GGSCI> START EXTRACT EGGTEST1
In the preceding steps we have configured a Classic Capture Extract process to replicate all tables owned by the SCOTT user. For this, we first create an Extract process parameter file and add the configuration parameters to it. Once the parameter file is created, we add the Extract process to the source manager instance. This is done using the ADD EXTRACT command in step 5. In step 6, we associate a local trail file with the Extract process and then we start it. When you start the Extract process you will see the following output:
GGSCI (prim1-ol6-112.localdomain) 11> start extract EGGTEST1
Sending START request to MANAGER ...
EXTRACT EGGTEST1 starting
You can check the status of the Extract process using the following command:
GGSCI (prim1-ol6-112.localdomain) 10> status extract EGGTEST1
EXTRACT EGGTEST1: STARTED
There are a few additional parameters that can be specified in the extract configuration as follows:
EOFDELAY <secs>: This parameter controls how often GoldenGate should check the source database redo logs for new data.
MEGABYTES <n>: This parameter controls the size of the extract trail file.
DYNAMICRESOLUTION: Use this parameter to enable the extract to build the metadata for each table when the extract encounters its changes for the first time.
If your source database is a very busy OLTP production system and you cannot afford to add the additional load of the GoldenGate processes on it, you can offload the GoldenGate processing to another server by adding some extra configuration. You will need to configure the source database to ship the redo logs to a standby site and set up a GoldenGate manager instance on that server. The Extract processes will be configured to read from the archived logs on the standby system. For this, you specify an additional parameter as follows:
TRANLOGOPTIONS ARCHIVEDLOGONLY ALTARCHIVEDLOGDEST <path>
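A parameter file for such an archived-log-only extract might look like the following sketch; the archive log path, process name, and trail location are illustrative:

```
EXTRACT EGGTEST1
USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
TRANLOGOPTIONS ARCHIVEDLOGONLY ALTARCHIVEDLOGDEST /u01/app/ggate/arch
EXTTRAIL /u01/app/ggate/dirdat/st
TABLE scott.*;
```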
Refer to the recipe Configuring an Extract process to read from an Oracle ASM instance and the recipe Setting up a GoldenGate replication with multiple process groups in Chapter 2, Setting up GoldenGate Replication.
Integrated Capture is a new form of GoldenGate Extract process which works directly with the database log mining server to receive the data changes in the form of LCRs. This functionality is based on the Oracle Streams technology. For this, the GoldenGate Admin user requires access to the log miner dictionary objects. This Capture mode supports extracting data from the source databases using compression. It also supports some object types that are not supported by the Classic Capture. In this recipe, you will learn how to set up an Integrated Capture process in a GoldenGate instance.
Before adding the Integrated Capture Extract, ensure that you have completed the following steps in the source database environment:
Enabled database minimum supplemental logging.
Enabled supplemental logging for tables to be replicated.
Set up a manager instance.
Created a directory for source trail files.
Decided a two-letter initial for naming source trail files.
Created a GoldenGate Admin database user with extra privileges required for Integrated Capture in the source database.
You can follow the given steps to configure an Integrated Capture Extract process:
From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:
./ggsci
Edit the Extract process configuration as follows:
EDIT PARAMS EGGTEST1
This command will open an editor window. You need to add the extract configuration parameters in this window as follows:
EXTRACT <EXTRACT_NAME>
USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
TRANLOGOPTIONS MININGUSER <MINING_DB_USER>@MININGDB, &
MININGPASSWORD *****
EXTTRAIL <specification>
TABLE <replicated_table_specification>;
For example:
EXTRACT EGGTEST1
USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
TRANLOGOPTIONS MININGUSER OGGMIN@MININGDB, &
MININGPASSWORD *****
EXTTRAIL /u01/app/ggate/dirdat/st
TABLE scott.*;
Save the file and exit the editor window.
Register the Integrated Capture Extract process to the database as follows:
DBLOGIN USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
MININGDBLOGIN USERID <MININGUSER>@MININGDB, PASSWORD ******
REGISTER EXTRACT <EXTRACT_NAME> DATABASE
Add the Integrated Capture Extract to the GoldenGate instance as follows:
ADD EXTRACT <EXTRACT_NAME>, INTEGRATED TRANLOG, <BEGIN_SPEC>
For example:
ADD EXTRACT EGGTEST1, INTEGRATED TRANLOG, BEGIN NOW
Add the local trail to the Integrated Capture configuration as follows:
ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1
Start the Integrated Capture Extract process as follows:
GGSCI> START EXTRACT EGGTEST1
The steps for configuring an Integrated Capture process are broadly the same as the ones for the Classic Capture process. We first create a parameter file in steps 1 to 4. In step 5, we add the extract to the GoldenGate instance. In step 6, we add a local extract trail file and in the next step we start the Extract process.
When you start the Extract process you will see the following output:
GGSCI (prim1-ol6-112.localdomain) 11> start extract EGGTEST1
Sending START request to MANAGER ...
EXTRACT EGGTEST1 starting
You can check the status of the Extract process using the following command:
GGSCI (prim1-ol6-112.localdomain) 10> status extract EGGTEST1
EXTRACT EGGTEST1: RUNNING
As described earlier, an Integrated Capture process can be configured with the mining dictionary in the source database or in a separate database called a downstream mining database. When you configure the Integrated Capture Extract process in the downstream mining database mode, you need to specify the following parameter in the extract configuration file:
TRANLOGOPTIONS MININGUSER OGGMIN@MININGDB, MININGPASSWORD *****
You will also need to connect to MININGDB using MININGUSER before registering the Extract process:
MININGDBLOGIN USERID <MININGUSER>@MININGDB, PASSWORD ******
This mining user has to be set up in the same way as the GoldenGate Admin user is set up in the source database.
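Putting these pieces together, the registration sequence for a downstream-mining extract would run both logins before the REGISTER command. The following is a sketch only; the user names and database aliases are illustrative assumptions carried over from the earlier examples:

```
GGSCI> DBLOGIN USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
GGSCI> MININGDBLOGIN USERID OGGMIN@MININGDB, PASSWORD ******
GGSCI> REGISTER EXTRACT EGGTEST1 DATABASE
```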
Some additional parameters that should be added to the extract configuration are as follows:
TRANLOGOPTIONS INTEGRATEDPARAMS: Use this parameter to control how much memory you want to allocate to the log miner dictionary. This memory is allocated out of the Streams pool in the SGA, for example:
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 164)
MEGABYTES <N>: This parameter controls the size of the extract trail file.
DYNAMICRESOLUTION: Use this parameter to enable the extract to build the metadata for each table when the extract encounters its changes for the first time.
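As a sketch, these options could be combined as follows. Note that the trail size is set when the trail is added in GGSCI rather than in the parameter file; the 164 MB and 500 MB values are illustrative assumptions:

```
-- In the extract parameter file:
EXTRACT EGGTEST1
USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
-- Cap the log miner dictionary memory taken from the Streams pool
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 164)
-- Build table metadata only when a table's changes are first seen
DYNAMICRESOLUTION
EXTTRAIL /u01/app/ggate/dirdat/st
TABLE scott.*;

-- In GGSCI, when adding the trail:
ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1, MEGABYTES 500
```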
See the recipe Creating an Integrated Capture with a downstream database for compressed tables in Chapter 7, Advanced Administration Tasks – I, for more information.
Datapumps are secondary Extract processes which exist only in GoldenGate source environments. These are optional processes. When the Datapump process is not configured, the Extract process does the job of extracting and transferring the data to the target environment. When the Datapump process is configured, it relieves the main Extract process from the task of transferring the data to the target environment. The Extract process can then solely focus on extracting the changes from the source database redo and writing them to the local trail files.
Before adding the Datapump extract, you must have a Manager instance running. You should have added the main extract and a local trail location to the instance configuration. You will also need the target environment details, for example, the hostname, Manager port number, and the remote trail file location.
Just like other GoldenGate processes, the Datapump process requires creating a parameter file with some parameters. The following are the steps to configure a Datapump process in a GoldenGate source environment:
From the GoldenGate Home directory, run the GoldenGate Software Command Line Interface (GGSCI) as follows:
./ggsci
Edit the Datapump process configuration as follows:
EDIT PARAMS PGGTEST1
This command will open an editor window. You need to add the Datapump configuration parameters in this window as follows:
EXTRACT <DATAPUMP_NAME>
USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
RMTHOST <HOSTNAME_IP_TARGET_SYSTEM>, MGRPORT <TARGET_MGRPORT>
RMTTRAIL <specification>
TABLE <replicated_table_specification>;
For example:
EXTRACT PGGTEST1
USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
RMTHOST stdby1-ol6-112.localdomain, MGRPORT 7809
RMTTRAIL /u01/app/ggate/dirdat/rt
TABLE scott.*;
Save the file and exit the editor window.
Add the Datapump extract to the GoldenGate instance as follows:
ADD EXTRACT PGGTEST1, EXTTRAILSOURCE /u01/app/ggate/dirdat/tt
Add the remote trail to the Datapump configuration as follows:
ADD RMTTRAIL /u01/app/ggate/dirdat/rt, EXTRACT PGGTEST1
Start the Datapump process as follows:
GGSCI> START EXTRACT PGGTEST1
Once you have added the parameters to the Datapump parameter file and saved it, you need to add the process to the GoldenGate instance. This is done using the ADD EXTRACT command in step 5. In step 6, we associate a remote trail with the Datapump process, and in step 7 we start the Datapump process. When you start the Datapump process you will see the following output:
GGSCI (prim1-ol6-112.localdomain) 10> start extract PGGTEST1
Sending START request to MANAGER ...
EXTRACT PGGTEST1 starting
You can check the status of the Datapump process using the following command:
GGSCI (prim1-ol6-112.localdomain) 10> status extract PGGTEST1
EXTRACT PGGTEST1: RUNNING
The following are some additional parameters/options that can be specified in the Datapump configuration:
RMTHOSTOPTIONS: Using this option of the RMTHOST parameter, you can configure additional features such as encryption and compression for trail file transfers.
EOFDELAY <secs>: This parameter controls how often GoldenGate should check the local trail file for new data.
MEGABYTES <N>: This parameter controls the size of a remote trail file.
PASSTHRU: Use this parameter when the Datapump performs no filtering, mapping, or conversion; it tells the process to pass the data through as-is, avoiding lookups against the database or a definitions file.
DYNAMICRESOLUTION: Use this parameter to enable the extract to build the metadata for each table when the extract encounters its changes for the first time.
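A sketch of a pass-through Datapump parameter file using these options follows. The COMPRESS option and the polling interval are illustrative assumptions, not values required by this environment:

```
EXTRACT PGGTEST1
-- No filtering or transformation, so skip database/definition lookups
PASSTHRU
-- Compress the trail data sent over the network
RMTHOST stdby1-ol6-112.localdomain, MGRPORT 7809, COMPRESS
-- Poll the local trail for new data every 5 seconds (illustrative value)
EOFDELAY 5
RMTTRAIL /u01/app/ggate/dirdat/rt
TABLE scott.*;
```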
Refer to the recipes Encrypting database user passwords and Encrypting the trail files in Chapter 2, Setting up GoldenGate Replication, for more information.
The Replicat processes are the delivery processes which are configured in the target environment. These processes read the changes from the trail files on the target system and apply them to the target database objects. If there are any transformations defined in the replicat configuration, the Replicat process takes care of those transformations as well. You can define the mapping information in the replicat configuration. The Replicat process will then apply the changes to the target database based on the mappings.
Before setting up replicat in the target system, you must have configured and started the Manager process.
Follow these steps to configure a replicat in the target environment:
From the GoldenGate Home directory, run the GoldenGate Software Command Line Interface (GGSCI) as follows:
./ggsci
Log in to the target database through GGSCI as shown in the following code:
GGSCI> DBLOGIN USERID <USER>, PASSWORD <PW>
Add the Checkpoint table as shown in the following code:
GGSCI> ADD CHECKPOINTTABLE <SCHEMA.TABLE>
Edit the Replicat process configuration as shown in the following code:
GGSCI> EDIT PARAMS RGGTEST1
This command will open an editor window. You need to add the replicat configuration parameters in this window as shown in the following code:
REPLICAT <REPLICAT_NAME>
USERID <TARGET_GG_USER>@TARGETDB, PASSWORD ******
DISCARDFILE <DISCARDFILE_SPEC>
MAP <mapping_specification>;
For example:
REPLICAT RGGTEST1
USERID GGATE_ADMIN@TGORTEST, PASSWORD ******
DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, APPEND, MEGABYTES 500
MAP SCOTT.*, TARGET SCOTT.*;
Save the file and exit the editor.
Add the replicat to the GoldenGate instance as shown in the following code:
GGSCI> ADD REPLICAT <REPLICAT>, EXTTRAIL <PATH>
For example:
ADD REPLICAT RGGTEST1, EXTTRAIL /u01/app/ggate/dirdat/rt
Start the Replicat process as shown in the following code:
GGSCI> START REPLICAT <REPLICAT>
In the preceding procedure we first create a Checkpoint table in the target database. As the name suggests, the Replicat process uses this table to maintain its checkpoints. In case the Replicat process crashes and is restarted, it can read this Checkpoint table and start applying the changes from the point where it left off.
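As a sketch, the Checkpoint table can be bound to the replicat either instance-wide or per process; the schema and table names below are illustrative assumptions:

```
-- Option 1: set a default checkpoint table for the whole instance in the GLOBALS file
CHECKPOINTTABLE ggate_admin.ggs_checkpoint

-- Option 2: name it explicitly when adding the replicat in GGSCI
DBLOGIN USERID ggate_admin@TGORTEST, PASSWORD ******
ADD CHECKPOINTTABLE ggate_admin.ggs_checkpoint
ADD REPLICAT RGGTEST1, EXTTRAIL /u01/app/ggate/dirdat/rt, CHECKPOINTTABLE ggate_admin.ggs_checkpoint
```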
Once you have added a Checkpoint table, you need to create a parameter file for the Replicat process. Once the process parameter file is created, it is then added to the GoldenGate instance. At this point, we are ready to start the Replicat process and apply the changes to the target database. You should see an output similar to the following:
GGSCI (stdby1-ol6-112.localdomain) 10> start replicat RGGTEST1
Sending START request to MANAGER ...
REPLICAT RGGTEST1 starting
You can check the status of the Replicat process using the following command:
GGSCI (stdby1-ol6-112.localdomain) 10> status replicat RGGTEST1
REPLICAT RGGTEST1: RUNNING
The following are common parameters specified in the replicat configuration:
DISCARDFILE: This parameter is used to specify the name of the discard file. If the Replicat process is unable to apply a change to the target database due to an error, it writes the record to the discard file.
EOFDELAY <secs>: This parameter controls how often GoldenGate should check the local trail file for new data.
REPORTCOUNT: This parameter controls how often the Replicat process writes its progress to the report file.
BATCHSQL: This parameter enables the BATCHSQL mode, in which the Replicat process groups similar SQL statements into batches and applies them together for better performance.
ASSUMETARGETDEFS: This parameter tells the Replicat process to assume that the source and target database object structures are the same.
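These parameters could be combined in a replicat parameter file along the following lines. This is a sketch; the report interval and discard file size are illustrative assumptions:

```
REPLICAT RGGTEST1
USERID GGATE_ADMIN@TGORTEST, PASSWORD ******
-- Source and target objects have identical structures
ASSUMETARGETDEFS
-- Records that fail to apply are written here
DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, APPEND, MEGABYTES 500
-- Group similar statements into batches for faster apply
BATCHSQL
-- Write a progress line to the report file every 60 seconds (illustrative value)
REPORTCOUNT EVERY 60 SECONDS
MAP SCOTT.*, TARGET SCOTT.*;
```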