
Oracle Goldengate 11g Complete Cookbook

By Ankur Gupta
About this book
Oracle GoldenGate 11g Complete Cookbook is your complete guide to all aspects of GoldenGate administration. The recipes in this book will teach you how to set up GoldenGate configurations for simple and complex environments requiring various filtering and transformations. It also covers tuning and troubleshooting of replication setups using exception handling, custom fields, and the logdump utility. The book begins with basic tasks such as installation and process group setup. You will then be introduced to further topics including DDL replication and the various options for performing initial loads, followed by advanced administration tasks such as multi-master replication setup and conflict resolution. Further recipes cover cross-platform replication and high-availability options for GoldenGate.
Publication date:
September 2013
Publisher
Packt
Pages
362
ISBN
9781849686143

 

Chapter 1. Installation and Initial Setup

The following recipes will be covered in this chapter:

  • Installing Oracle GoldenGate in an x86_64 Linux-based environment

  • Installing Oracle GoldenGate in a Windows environment

  • Enabling supplemental logging in the source database

  • Supported datatypes in Oracle GoldenGate

  • Preparing the source database for GoldenGate setup

  • Preparing the target database for GoldenGate setup

  • Setting up a Manager process

  • Setting up a Classic Capture Extract process

  • Setting up an Integrated Capture Extract process

  • Setting up a Datapump process

  • Setting up a Replicat process

 

Introduction


Database replication is always an interesting challenge. It requires a complex setup and a strong knowledge of the underlying infrastructure, the databases, and the data held in them to replicate data efficiently without much impact on the enterprise system. Oracle GoldenGate owes much of its popularity to the simplicity of its setup. In this chapter we will cover the basic steps to install GoldenGate and set up its various processes.

 

Installing Oracle GoldenGate in an x86_64 Linux-based environment


This recipe will show you how to install Oracle GoldenGate in an x86_64 Linux-based environment.

Getting ready

In order to install Oracle GoldenGate, you must first download the binaries for your Linux platform from the Oracle Technology Network website. We have used Oracle GoldenGate Version 11.2.0.1.0.1 in this recipe. Ensure that you verify the checksum of the file once you have downloaded it.

Tip

You can find the Oracle GoldenGate binaries for x86_64 Linux at http://www.oracle.com/technetwork/middleware/GoldenGate/downloads/index.html?ssSourceSiteId=ocomen.
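The checksum verification can be scripted. The following is a minimal sketch; the file name and its contents are placeholders standing in for the real download, and in practice you would record the checksum value published on the download page rather than computing it locally:

```shell
# Placeholder file standing in for the downloaded media pack zip.
printf 'example media pack contents' > ogg_media.zip

# Record the expected checksum (in practice, paste the value from the
# download page); here we compute it from the stand-in file itself.
md5sum ogg_media.zip | awk '{print $1}' > ogg_media.md5

# Verify: recompute the checksum and compare against the recorded value.
md5sum ogg_media.zip | awk '{print $1}' | diff - ogg_media.md5 && echo "checksum OK"
```

If the two values differ, diff exits non-zero and the "checksum OK" message is not printed, which makes the check easy to embed in a download script.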

How to do it...

Oracle GoldenGate binaries are installed in a directory called GoldenGate Home. This directory should be owned by the OS user (ggate) that owns the GoldenGate binaries; this user must be a member of the dba group. After you have downloaded the binaries, you need to uncompress the media pack file using the unzip utility, as shown in the following steps:

  1. Log in to the server using the ggate account.

  2. Create a directory with this user as shown in the following command:

    mkdir installation_directory
    
  3. Change the directory to the location where you have copied the media pack file and unzip it. The media pack contains the readme files and the GoldenGate binaries file. The GoldenGate binaries file for the 64-bit x86 Linux platform is called fbs_ggs_Linux_x64_ora11g_64bit.tar.

  4. Extract the contents of this file into the GoldenGate Home directory as shown in the following command:

    tar -xvf fbs_ggs_Linux_x64_ora11g_64bit.tar -C installation_directory
    
  5. Create GoldenGate directories as follows:

    cd installation_directory
    ./ggsci
    create subdirs
    exit
    

    Note

    You must add the Oracle database libraries to the shared library environment variable $LD_LIBRARY_PATH before you run ggsci. It is also recommended to have $ORACLE_HOME and $ORACLE_SID set to the correct Oracle instance.
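Putting the steps together, the session looks roughly like the following sketch. The paths and the SID are examples and must be adjusted for your system; the tar extraction and the ggsci invocation are shown as comments because they require the actual media pack and binaries to be present:

```shell
# Environment needed by ggsci (example values; adjust to your installation).
export ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/11.2.0/db_1}
export ORACLE_SID=${ORACLE_SID:-orcl}
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

# GoldenGate Home (the installation_directory from the steps above).
GG_HOME=${GG_HOME:-$HOME/ggate}
mkdir -p "$GG_HOME"
echo "GoldenGate Home prepared at $GG_HOME"

# With the media pack in place, the remaining steps are:
#   tar -xvf fbs_ggs_Linux_x64_ora11g_64bit.tar -C "$GG_HOME"
#   cd "$GG_HOME" && ./ggsci        # then run: create subdirs, exit
```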

How it works...

Oracle provides GoldenGate binaries in a compressed format. In order to install the binaries you unzip the compressed file, and then expand the archive file into a required directory. This unpacks all the binaries. However, GoldenGate also requires some important subdirectories under GoldenGate Home which are not created by default. These directories are created using the CREATE SUBDIRS command. The following is the list of the subdirectories that get created with this command:

Subdirectory   Contents
dirprm         Parameter files
dirrpt         Report files
dirchk         Checkpoint files
dirpcs         Process status files
dirsql         SQL scripts
dirdef         Database definitions
dirdat         Trail files
dirtmp         Temporary files
dirout         Output files

Note

Oracle GoldenGate binaries need to be installed on both the source and target systems. The procedure for installing the binaries is the same in both environments.

 

Installing Oracle GoldenGate in a Windows environment


In this recipe we will go through the steps to install the GoldenGate binaries in a Windows environment.

Getting ready

In order to install Oracle GoldenGate, you must first download the binaries for your Windows platform from the Oracle Technology Network website. We have used GoldenGate Version 11.2.0.1.0.1 in this recipe. Ensure that you verify the checksum of the file once you have downloaded it.

Tip

You can find the Oracle GoldenGate binaries for x86_64 Windows at http://www.oracle.com/technetwork/middleware/GoldenGate/downloads/index.html?ssSourceSiteId=ocomen.

How to do it...

Oracle GoldenGate binaries are installed in a directory called GoldenGate Home. After you have downloaded the binaries, you need to uncompress the media pack file by using the unzip utility:

  1. Log in to the server as the Administrator user.

  2. Create a directory for GoldenGate Home.

  3. Unzip the contents of the media pack file to the GoldenGate Home directory.

  4. Create GoldenGate directories as shown in the following command:

    cd installation_directory
    ggsci
    create subdirs
    exit
    

How it works...

Oracle provides GoldenGate binaries in a compressed format. The installation involves unzipping the file into a required directory. This unpacks all the binaries. However, GoldenGate also requires some important subdirectories under GoldenGate Home which are not created by default. These directories are created using the CREATE SUBDIRS command. The following is the list of the subdirectories that get created with this command:

Subdirectory   Contents
dirprm         Parameter files
dirrpt         Report files
dirchk         Checkpoint files
dirpcs         Process status files
dirsql         SQL scripts
dirdef         Database definitions
dirdat         Trail files
dirtmp         Temporary files
dirout         Output files

 

Enabling supplemental logging in the source database


Oracle GoldenGate replication can be used to continuously replicate the changes from the source database to the target database. GoldenGate mines the redo information generated in the source database to extract the changes. In order to update the correct rows in the target database, GoldenGate needs sufficient information to identify them uniquely. Since it relies on the information extracted from the redo buffers, it requires extra columns to be logged in the redo records generated in the source database. This is done by enabling supplemental logging in the source database. This recipe explains how to enable supplemental logging in the source database.

Getting ready

We must have a list of the tables that we want to replicate between two environments.

How to do it…

Oracle GoldenGate requires supplemental logging to be enabled at the database level and table level. Use the following steps to enable the required supplemental logging:

  1. Enable database supplemental logging through sqlplus as follows:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    
  2. Switch a database LOGFILE to bring the changes into effect:

    ALTER DATABASE SWITCH LOGFILE;
    
  3. From the GoldenGate Home, log in to GGSCI:

    ./ggsci
    
  4. Log in to the source database from ggsci using a user which has privileges to alter the source schema tables as shown in the following command:

    GGSCI> DBLOGIN USERID <USER> PASSWORD <PW>
    
  5. Enable supplemental logging at the table level as follows:

    GGSCI> ADD TRANDATA <SCHEMA>.<TABLE_NAME>
    
  6. Repeat step 5 for all the tables that you want to replicate using GoldenGate.

How it works…

Supplemental logging enables the database to add extra columns in the redo data that is required by GoldenGate to correctly identify the rows in the target database. We must enable database-level minimum supplemental logging before we can enable it at the table level. When we enable it at the table level, a supplemental log group is created for the table that consists of the columns on which supplemental logging is enabled. The columns which form a part of this group are decided based on the key constraints present on the table. These columns are decided based on the following priority order:

  1. Primary key

  2. First unique key alphanumerically with no nullable columns

  3. First unique key alphanumerically with nullable columns

  4. All columns

GoldenGate only considers unique keys which don't have any virtual columns, any user-defined types, or any function-based columns. We can also manually specify which columns we want to be a part of the supplemental log group.
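Once ADD TRANDATA has been run, you can confirm which log group was created and which columns it covers by querying the data dictionary. A sketch, using the example SCOTT.EMP table (substitute your own schema and table names):

```sql
-- List supplemental log groups on a table and the columns they cover
SELECT g.log_group_name, g.log_group_type, c.column_name, c.position
FROM   dba_log_groups g
       JOIN dba_log_group_columns c
         ON c.owner = g.owner
        AND c.table_name = g.table_name
        AND c.log_group_name = g.log_group_name
WHERE  g.owner = 'SCOTT'
AND    g.table_name = 'EMP'
ORDER  BY c.position;
```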

Tip

You can enable supplemental logging on all tables of a schema using the following single command:

GGSCI> ADD TRANDATA <SCHEMA>.*

If possible, create a primary key in each source and target table that is part of the replication. The pseudo key consisting of all columns that GoldenGate otherwise creates can be quite inefficient.

There's more…

There are two ways to enable supplemental logging. The first method is to enable it from GGSCI using the ADD TRANDATA command. The second method is to use sqlplus and run the ALTER TABLE ADD SUPPLEMENTAL LOG DATA command. The latter method is more flexible and allows you to specify the name of the supplemental log group. When you use Oracle GoldenGate to add supplemental logging, it creates supplemental log group names in the format GGS_<TABLE_NAME>_<OBJECT_NUMBER>. If the overall supplemental log group name is longer than 30 characters, GoldenGate truncates the table name as required. Oracle Support recommends the first method for objects to be replicated using Oracle GoldenGate, as the GGS_* naming format enables GoldenGate to quickly identify the supplemental log groups in the database.

If you are planning to use GoldenGate to capture all transactions in the source database and convert them into INSERT for the target database, for example, for reporting/auditing purposes, you'll need to enable supplemental logging on all columns of the source database tables.
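For reference, the sqlplus equivalents look like the following. The table, column, and group names here are illustrative:

```sql
-- Named supplemental log group on chosen key columns:
ALTER TABLE scott.emp
  ADD SUPPLEMENTAL LOG GROUP emp_slg (empno) ALWAYS;

-- All-columns supplemental logging, as needed for the
-- reporting/auditing scenario described above:
ALTER TABLE scott.emp
  ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```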

See also

  • For information about how to replicate changes to a target database and maintain an audit record, refer to the recipe Mapping the changes to a target table and storing the transaction history in a history table in Chapter 4, Mapping and Manipulating Data

 

Supported datatypes in Oracle GoldenGate


Oracle GoldenGate has some restrictions on what it can replicate, although with every new release Oracle adds new datatypes to the supported list. You should check the datatypes of the objects that you plan to replicate against the list of supported datatypes for the GoldenGate version that you plan to install.

Getting ready

You should have identified the various datatypes of the objects that you plan to replicate.

How to do it…

The following is a high-level list of the datatypes that are supported by Oracle GoldenGate v11.2.1.0.1:

  • NUMBER

  • BINARY FLOAT

  • BINARY DOUBLE

  • CHAR

  • VARCHAR2

  • LONG

  • NCHAR

  • NVARCHAR2

  • RAW

  • LONG RAW

  • DATE

  • TIMESTAMP

  • CLOB

  • NCLOB

  • BLOB

  • SECUREFILE and BASICFILE

  • XML datatypes

  • User defined/Abstract datatypes

  • SDO_GEOMETRY, SDO_TOPO_GEOMETRY, and SDO_GEORASTER

How it works…

There are some additional details that one needs to consider while evaluating the supported datatypes for a GoldenGate version. For example, the user-defined datatypes are only supported if the source and target tables have the same structures. Both Classic and Integrated Capture modes support XML types which are stored as XML, CLOB, and XML binary. However, XML type tables stored as Object Relational are only supported in Integrated Capture mode.

There's more…

The support restrictions apply to a few other factors apart from the datatypes. Some of these are as follows:

  • INSERTs, UPDATEs and DELETEs are supported on regular tables, IOTs, clustered tables and materialized views

  • Tables created as EXTERNAL are not supported

  • Extraction from compressed tables is supported only in Integrated Capture mode

  • Materialized views created with ROWID are not supported

  • Oracle GoldenGate supports replication of sequences only in unidirectional mode

 

Preparing the source database for GoldenGate setup


The Oracle GoldenGate architecture includes an Extract process that runs on the source side. This process mines the redo information and extracts the changes occurring in the source database objects. These changes are then written to trail files. There are two types of Extract processes: Classic Capture and Integrated Capture. The Extract process requires some setup in the source database, and some of the steps differ depending on the type of the Extract process. GoldenGate requires a database user to be created in the source database and various privileges to be granted to this user. This recipe explains how to set up a source database for GoldenGate replication.

Getting ready

You must select a database user ID for the source database setup. For example, GGATE_ADMIN.

How to do it…

Run the following commands in the source database to set up the GoldenGate user:

sqlplus sys/**** as sysdba
CREATE USER GGATE_ADMIN identified by GGATE_ADMIN;
GRANT CREATE SESSION, ALTER SESSION to GGATE_ADMIN;
GRANT ALTER SYSTEM TO GGATE_ADMIN;
GRANT CONNECT, RESOURCE to GGATE_ADMIN;
GRANT SELECT ANY DICTIONARY to GGATE_ADMIN;
GRANT FLASHBACK ANY TABLE to GGATE_ADMIN;
GRANT SELECT ANY TABLE TO GGATE_ADMIN;
GRANT SELECT ON DBA_CLUSTERS TO GGATE_ADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO GGATE_ADMIN;
GRANT SELECT ANY TRANSACTION TO GGATE_ADMIN;

The following steps are only required for Integrated Capture Extract (Version 11.2.0.2 or higher):

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');
GRANT SELECT ON SYS.V_$DATABASE TO GGATE_ADMIN;

The following steps are only required for Integrated Capture Extract (Version 11.2.0.1 or earlier):

EXEC DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');
GRANT BECOME USER TO GGATE_ADMIN;
GRANT SELECT ON SYS.V_$DATABASE TO GGATE_ADMIN;

Set up a TNS Entry for the source database in $ORACLE_HOME/network/admin/tnsnames.ora.
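The TNS entry is a standard tnsnames.ora alias. A sketch with placeholder alias, host, and service names:

```
SRCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = srchost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = srcdb))
  )
```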

How it works…

The preceding commands set up the GoldenGate user in the source database. The Integrated Capture mode requires some additional privileges as it needs to interact with the database log mining server.

You will notice that in the previous commands, we have granted SELECT ANY TABLE to the GGATE_ADMIN user. In production environments, where least required privileges policies are followed, it is quite unlikely that such a setup would be approved by the compliance team. In such cases, instead of granting this privilege, you can grant the SELECT privilege on individual tables that are a part of the source replication configuration. You can use dynamic SQL to generate such commands.

In our example schema database, we can generate the commands for all tables owned by the user SCOTT as follows:

select 'GRANT SELECT ON '||owner||'.'||table_name||' to GGATE_ADMIN;' COMMAND from dba_tables where owner='SCOTT';
COMMAND
------------------------------------------------------------------
GRANT SELECT ON SCOTT.DEPT to GGATE_ADMIN;
GRANT SELECT ON SCOTT.EMP to GGATE_ADMIN;
GRANT SELECT ON SCOTT.BONUS to GGATE_ADMIN;
GRANT SELECT ON SCOTT.SALGRADE to GGATE_ADMIN;
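The same script can also be produced outside the database when the table list is already known. A small shell sketch (the table list is hard-coded here for illustration; in practice you would spool it from dba_tables):

```shell
# Generate a per-table grant script for the GoldenGate user.
for t in DEPT EMP BONUS SALGRADE; do
  printf 'GRANT SELECT ON SCOTT.%s to GGATE_ADMIN;\n' "$t"
done > grant_ggate_select.sql
cat grant_ggate_select.sql
```

The resulting grant_ggate_select.sql can then be reviewed and run from sqlplus.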

There's more…

In this recipe we saw the steps required to set up the GoldenGate user in the database. The Extract process requires various privileges to be able to mine the changes from the redo data. At this stage it's worth discussing the two types of Extract processes and the differences between them.

The Classic Capture mode

The Classic Capture mode is the traditional Extract process that has been available for a while. In this mode, GoldenGate accesses the database redo logs (and the archive logs for older transactions) to capture the DML changes occurring on the objects specified in the configuration files. For this, the GoldenGate OS user must be a member of the OS group that owns the database redo logs. If the redo logs of the source database are stored in an ASM diskgroup, this capture method reads them from there. This capture mode is available for other RDBMSs as well. However, some datatypes are not supported in Classic Capture mode, and one of its biggest limitations is its inability to read data from compressed tables/tablespaces.

The Integrated Capture mode

In case of the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs). An LCR is a message with a specific format that describes a database change. This mode does not require any special setup for the databases using ASM, transparent data encryption, or Oracle RAC. This feature is only available for databases on Version 11.2.0.3 or higher. This Capture mode supports extracting data from source databases using compression. It also supports various object types which were previously not supported by Classic Capture.

Integrated Capture can be configured in an online or downstream mode. In the online mode, the log miner database is configured in the source database itself. In the downstream mode, the log miner database is configured in a separate database which receives archive logs from the source database. This mode offloads the log mining load from the source database and is quite suitable for very busy production databases. If you want to use the Integrated Capture mode with a source database Version 11.2.0.2 or earlier, you must configure the Integrated Capture mode in downstream capture topology, and the downstream mining database must be on Version 11.2.0.3 or higher.

Tip

You will need to apply a Bundle Patch specified in MOS Note 1411356.1 for full support of the datatypes offered by Integrated Capture.

See also

  • Refer to the recipe Setting up an Integrated Capture Extract process later in this chapter and Creating an Integrated Capture with a downstream database for compressed tables in Chapter 7, Advanced Administration Tasks – I

 

Preparing the target database for GoldenGate setup


On the target side of the GoldenGate architecture, the collector processes receive the trail files shipped by the Extract/Datapump processes from the source environment. The collector process receives these files and writes them locally on the target server. For each row that gets updated in the source database, the Extract process generates a record and writes it to the trail file. The Replicat process in the target environment reads these trail files and applies the changes to the target database using native SQL calls. To be able to apply these changes to the target tables, GoldenGate requires a database user to be set up in the target database with some privileges on the target objects. The Replicat process also needs to maintain its status in a table in the target database so that it can resume in case of any failures. This recipe explains the steps required to set up a GoldenGate user in the target database.

Getting ready

You must select a database user ID for the target database setup, for example, GGATE_ADMIN. The GoldenGate user also requires a table in the target database to maintain its status, so it needs some quota assigned on a tablespace to be able to create a table. You might want to create a separate tablespace, grant quota on it, and assign it as the default tablespace for the GoldenGate user. We will assign a GGATE_ADMIN_DAT tablespace to the GGATE_ADMIN user in this recipe.

How to do it…

Run the following steps in the target database to set up a GoldenGate user:

sqlplus sys/**** as sysdba
CREATE USER GGATE_ADMIN identified by GGATE_ADMIN DEFAULT TABLESPACE GGATE_ADMIN_DAT;
ALTER USER GGATE_ADMIN QUOTA UNLIMITED ON GGATE_ADMIN_DAT;
GRANT CREATE SESSION, ALTER SESSION to GGATE_ADMIN;
GRANT CONNECT, RESOURCE to GGATE_ADMIN;
GRANT SELECT ANY DICTIONARY to GGATE_ADMIN;
GRANT SELECT ANY TABLE TO GGATE_ADMIN;
GRANT INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE TO GGATE_ADMIN;
GRANT CREATE TABLE TO GGATE_ADMIN;

How it works…

You can use these commands to set up a GoldenGate user in the target database. The GoldenGate user in the target database requires access to the database plus insert/update/delete privileges on the target tables to apply the changes. In the preceding commands, we have granted the SELECT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE, and INSERT ANY TABLE privileges to the GGATE_ADMIN user. However, if your organization follows a least-required-privileges policy for production databases, you will need to grant these privileges individually on the replicated target tables. If the number of replicated target tables is large, you can use dynamic SQL to generate such commands. In our example demo database, we can generate these commands for the SCOTT schema objects as follows:

select 'GRANT SELECT, INSERT, UPDATE, DELETE ON '||owner||'.'||table_name||' to GGATE_ADMIN;' COMMAND from dba_tables where owner='SCOTT';
COMMAND
------------------------------------------------------------------
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.DEPT to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.EMP to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.SALGRADE to GGATE_ADMIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.BONUS to GGATE_ADMIN;

There's more…

The replicated changes are applied to the target database on a row-by-row basis. The Replicat process needs to maintain its status so that it can resume in case of failure. The checkpoints can be maintained in a database table or in a file on disk. The best practice is to create a Checkpoint table and use it to maintain the Replicat status. This also enhances performance, as the Replicat then applies the changes to the database using an asynchronous COMMIT with the NOWAIT option. If you do not use a Checkpoint table, the Replicat maintains the checkpoint in a file and applies the changes using a synchronous COMMIT with the WAIT option.
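A Checkpoint table is created from GGSCI after logging in to the target database; the user and table names below are examples:

```
GGSCI> DBLOGIN USERID GGATE_ADMIN PASSWORD <PW>
GGSCI> ADD CHECKPOINTTABLE GGATE_ADMIN.GG_CHKPT
```

Adding a corresponding entry to the GLOBALS file in the GoldenGate Home makes this table the default for all Replicats in the instance:

```
CHECKPOINTTABLE GGATE_ADMIN.GG_CHKPT
```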

 

Setting up a Manager process


The Manager process is a key process of a GoldenGate configuration. This process is the root of the GoldenGate instance and it must exist at each GoldenGate site. It must be running on each system in the GoldenGate configuration before any other GoldenGate processes can be started. This recipe explains how to create a GoldenGate Manager process in a GoldenGate configuration.

Getting ready

Before setting up a Manager process, you must have installed GoldenGate binaries. A Manager process requires a port number to be defined in its configuration. Ensure that you have chosen the port to be used for the GoldenGate manager instance that you are going to set up.

How to do it…

In order to configure a Manager process, you need to create a configuration file. The following are the steps to create a parameter file for the Manager process:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI):

    ./ggsci
    
  2. Edit the Manager process configuration as follows:

    EDIT PARAMS MGR
    
  3. This command will open an editor window. You need to add the manager configuration parameters in this window as follows:

    PORT <PORT NO>
    DYNAMICPORTLIST <specification>
    AUTOSTART ER*
    AUTORESTART ER*, RETRIES 3, WAITMINUTES 3
    PURGEOLDEXTRACTS <specification>
    

    For example:

    PORT 7809
    DYNAMICPORTLIST 7810-7820, 7830
    AUTOSTART ER t*
    AUTORESTART ER t*, RETRIES 4, WAITMINUTES 4
    PURGEOLDEXTRACTS /u01/app/ggate/dirdat/tt*, USECHECKPOINTS, MINKEEPHOURS 2
    
  4. Save the file and exit the editor window.

  5. Start the Manager process by using the following code:

    GGSCI> START MGR
    

How it works…

All GoldenGate processes use a parameter file for configuration. The various parameters defined in these files control the way each process functions. The steps to create the Manager process are broadly described as follows:

  1. Log in to the GoldenGate command line interface.

  2. Create a parameter file.

  3. Start the Manager process.

  4. When you start the Manager process you will get the following output:

    GGSCI (prim1-ol6-112.localdomain) 2> start mgr
    Manager started.
    

    You can check the status of the Manager process using the status command as follows:

    GGSCI (prim1-ol6-112.localdomain) 3> status mgr
    Manager is running (IP port prim1-ol6-112.localdomain.7809).
    

The Manager process performs the following administrative and resource management functions:

  • Monitor and restart Oracle GoldenGate processes

  • Issue threshold reports, for example, when throughput slows down or when synchronization latency increases

  • Maintain trail files and logs

  • Report errors and events

  • Receive and route requests from the user interface

The parameters specified in the preceding example are defined as follows:

  • PORT: This is the port used by the Manager process itself.

  • DYNAMICPORTLIST: This specifies the range of ports that the Manager can allocate to the other processes in the GoldenGate instance, for example, the Extract, Datapump, Replicat, and Collector processes.

  • AUTOSTART: This starts the specified GoldenGate processes when the Manager process starts.

  • AUTORESTART: This restarts the specified GoldenGate processes if they fail. The RETRIES option controls the maximum number of restart attempts and the WAITMINUTES option controls the wait interval, in minutes, between restart attempts.

  • PURGEOLDEXTRACTS: This configures the automatic maintenance of GoldenGate trail files. The retention criteria are specified using MINKEEPHOURS/MINKEEPFILES, and the Manager process deletes the old trail files that fall outside these criteria.

There's more…

The Manager process can be configured to perform some more administrative tasks. The following are some other key parameters that can be added to the Manager process configuration:

  • STARTUPVALIDATIONDELAY <secs>: Use this parameter to set a delay, in seconds, after which the Manager process validates that the processes it started at startup are indeed running.

  • LAGREPORTMINUTES/LAGREPORTHOURS: The Manager process writes the lag information of the processes to its report file. These parameters control the interval at which the Manager process performs this check.
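
For example, these two parameters could be appended to the Manager parameter file shown earlier; the values below are illustrative only:

    STARTUPVALIDATIONDELAY 5
    LAGREPORTMINUTES 10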

 

Setting up a Classic Capture Extract process


A GoldenGate Classic Capture Extract process runs on the source system. This process can be configured for initially loading the source data and for continuous replication. This process reads the redo logs in the source database and looks for changes in the tables that are defined in its configuration file. These changes are then written into a buffer in the memory. When the extract reads a commit command in the redo logs, the changes for that transaction are then flushed to the trail files on disk. In case it encounters a rollback statement for a transaction in the redo log, it discards the changes from the memory. This type of Extract process is available on all platforms which GoldenGate supports. This process cannot read the changes for compressed objects. In this recipe you will learn how to set up a Classic Capture process in a GoldenGate instance.

Getting ready

Before adding the Classic Capture Extract process, ensure that you have completed the following steps in the source database environment:

  1. Enabled database minimum supplemental logging.

  2. Enabled supplemental logging for tables to be replicated.

  3. Set up a manager instance.

  4. Created a directory for the source trail files.

  5. Decided a two-letter initial for naming the source trail files.
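
As a sketch, the first two prerequisites can be enabled as follows; SCOTT.EMP is just an example table:

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    SQL> ALTER SYSTEM SWITCH LOGFILE;

    GGSCI> DBLOGIN USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    GGSCI> ADD TRANDATA SCOTT.EMP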

How to do it…

The following are the steps to configure a Classic Capture Extract process in the source database:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:

    ./ggsci
    
  2. Edit the Extract process configuration as follows:

    EDIT PARAMS EGGTEST1
    
  3. This command will open an editor window. You need to add the extract configuration parameters in this window as follows:

    EXTRACT <EXTRACT_NAME>
    USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    EXTTRAIL <specification>
    TABLE <replicated_table_specification>;
    

    For example:

    EXTRACT EGGTEST1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    EXTTRAIL /u01/app/ggate/dirdat/st
    TABLE scott.*;
    
  4. Save the file and exit the editor window.

  5. Add the Classic Capture Extract to the GoldenGate instance as follows:

    ADD EXTRACT <EXTRACT_NAME>, TRANLOG, <BEGIN_SPEC>
    

    For example:

    ADD EXTRACT EGGTEST1, TRANLOG, BEGIN NOW
    
  6. Add the local trail to the Classic Capture configuration as follows:

    ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1
    
  7. Start the Classic Capture Extract process as follows:

    GGSCI> START EXTRACT EGGTEST1
    

How it works…

In the preceding steps we have configured a Classic Capture Extract process to replicate all tables of the SCOTT user. For this, we first create an Extract process parameter file and add the configuration parameters to it. Once the parameter file is created, we add the Extract process to the source manager instance. This is done using the ADD EXTRACT command in step 5. In step 6, we associate a local trail file with the Extract process, and then we start it in step 7. When you start the Extract process you will see the following output:

GGSCI (prim1-ol6-112.localdomain) 11> start extract EGGTEST1
Sending START request to MANAGER ...
EXTRACT EGGTEST1 starting

You can check the status of the Extract process using the following command:

GGSCI (prim1-ol6-112.localdomain) 10> status extract EGGTEST1
EXTRACT EGGTEST1: STARTED

There's more…

There are a few additional parameters that can be specified in the extract configuration as follows:

  • EOFDELAY secs: This parameter controls how often GoldenGate should check the source database redo logs for new data.

  • MEGABYTES <N>: This parameter controls the size of the extract trail file.

  • DYNAMICRESOLUTION: Use this parameter to enable extract to build the metadata for each table when the extract encounters its changes for the first time.

If your source database is a very busy OLTP production system and you cannot afford to add the additional load of the GoldenGate processes on it, you can offload the GoldenGate processing to another server with some extra configuration. You will need to configure the source database to ship its redo logs to a standby site and set up a GoldenGate manager instance on that server. The Extract processes are then configured to read from the archived logs on the standby system. For this you specify an additional parameter as follows:

TRANLOGOPTIONS ARCHIVEDLOGONLY ALTARCHIVEDLOGDEST <path>

Tip

If you are using Classic Capture in ALO mode for the source database using ASM, you must store the archive log files on the standby server outside ASM to allow Classic Capture Extract to read them.
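
Putting this together, a Classic Capture parameter file for this archived-log-only (ALO) mode might look as follows; the process name and archive destination path are placeholders:

    EXTRACT EGGALO1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    TRANLOGOPTIONS ARCHIVEDLOGONLY ALTARCHIVEDLOGDEST /u01/arch_from_primary
    EXTTRAIL /u01/app/ggate/dirdat/st
    TABLE scott.*;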

See also

  • The recipe, Configuring an Extract process to read from an Oracle ASM instance and the recipe, Setting up a GoldenGate replication with multiple process groups in Chapter 2, Setting up GoldenGate Replication

 

Setting up an Integrated Capture Extract process


Integrated Capture is a new type of GoldenGate Extract process which works directly with the database log mining server to receive the data changes in the form of Logical Change Records (LCRs). This functionality is based on the Oracle Streams technology. For this, the GoldenGate Admin user requires access to the log miner dictionary objects. This Capture mode supports extracting data from source objects that use compression. It also supports some object types that are not supported by the Classic Capture. In this recipe, you will learn how to set up an Integrated Capture process in a GoldenGate instance.

Getting ready

Before adding the Integrated Capture Extract, ensure that you have completed the following steps in the source database environment:

  1. Enabled database minimum supplemental logging.

  2. Enabled supplemental logging for tables to be replicated.

  3. Set up a manager instance.

  4. Created a directory for source trail files.

  5. Decided a two-letter initial for naming source trail files.

  6. Created a GoldenGate Admin database user with extra privileges required for Integrated Capture in the source database.
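
As a sketch of step 6, the extra privileges are typically granted through the DBMS_GOLDENGATE_AUTH package; GGATE_ADMIN is our example admin user, and on database patch sets earlier than 11.2.0.4 the equivalent DBMS_STREAMS_AUTH procedure is used instead:

    SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');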

How to do it…

You can follow the given steps to configure an Integrated Capture Extract process:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:

    ./ggsci
    
  2. Edit the Extract process configuration as follows:

    EDIT PARAMS EGGTEST1
    
  3. This command will open an editor window. You need to add the extract configuration parameters in this window as follows:

    EXTRACT <EXTRACT_NAME>
    USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    TRANLOGOPTIONS MININGUSER <MINING_DB_USER>@MININGDB, &
    MININGPASSWORD *****
    EXTTRAIL <specification>
    TABLE <replicated_table_specification>;
    

    For example:

    EXTRACT EGGTEST1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    TRANLOGOPTIONS MININGUSER OGGMIN@MININGDB, &
    MININGPASSWORD *****
    EXTTRAIL /u01/app/ggate/dirdat/st
    TABLE scott.*;
    
  4. Save the file and exit the editor window.

  5. Register the Integrated Capture Extract process to the database as follows:

    DBLOGIN USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    MININGDBLOGIN USERID <MININGUSER>@MININGDB, PASSWORD ******
    REGISTER EXTRACT <EXTRACT_NAME> DATABASE
    
  6. Add the Integrated Capture Extract to the GoldenGate instance as follows:

    ADD EXTRACT <EXTRACT_NAME>, INTEGRATED TRANLOG, <BEGIN_SPEC>
    

    For example:

    ADD EXTRACT EGGTEST1, INTEGRATED TRANLOG, BEGIN NOW
    
  7. Add the local trail to the Integrated Capture configuration as follows:

    ADD EXTTRAIL /u01/app/ggate/dirdat/st, EXTRACT EGGTEST1
    
  8. Start the Integrated Capture Extract process as follows:

    GGSCI> START EXTRACT EGGTEST1
    

How it works…

The steps for configuring an Integrated Capture process are broadly the same as the ones for the Classic Capture process. We first create a parameter file in steps 1 to 4. In step 5, we register the extract with the database. In step 6, we add the extract to the GoldenGate instance. In step 7, we add a local extract trail file, and in the last step we start the Extract process.

When you start the Extract process you will see the following output:

GGSCI (prim1-ol6-112.localdomain) 11> start extract EGGTEST1
Sending START request to MANAGER ...
EXTRACT EGGTEST1 starting

You can check the status of the Extract process using the following command:

GGSCI (prim1-ol6-112.localdomain) 10> status extract EGGTEST1
EXTRACT EGGTEST1: RUNNING

As described earlier, an Integrated Capture process can be configured with the mining dictionary in the source database or in a separate database called a downstream mining database. When you configure the Integrated Capture Extract process in the downstream mining database mode, you need to specify the following parameter in the extract configuration file:

TRANLOGOPTIONS MININGUSER OGGMIN@MININGDB, MININGPASSWORD *****

You will also need to connect to MININGDB using MININGUSER before registering the Extract process:

MININGDBLOGIN USERID <MININGUSER>@MININGDB, PASSWORD ******

This mining user has to be set up in the same way as the GoldenGate Admin user is set up in the source database.

Tip

If you want to use Integrated Capture mode with a source database which is running on Oracle database Version 11.2.0.2 or earlier, you must configure the Integrated Capture process in the downstream mining database mode and the downstream database must be on Version 11.2.0.3 or higher.

There's more…

Some additional parameters that should be added to the extract configuration are as follows:

  • TRANLOGOPTIONS INTEGRATEDPARAMS: Use this parameter to control how much memory you want to allocate to the log mining server. This memory is allocated out of the Streams pool in the SGA:

    TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 164)
    
  • MEGABYTES <N>: This parameter controls the size of the extract trail file.

  • DYNAMICRESOLUTION: Use this parameter to enable extract to build the metadata for each table when the extract encounters its changes for the first time.

 

Setting up a Datapump process


Datapumps are secondary Extract processes which exist only in GoldenGate source environments. These are optional processes. When a Datapump process is not configured, the Extract process does the job of extracting the data and transferring it to the target environment. When a Datapump process is configured, it relieves the main Extract process of the task of transferring the data to the target environment. The Extract process can then focus solely on extracting the changes from the source database redo and writing them to the local trail files.

Getting ready

Before adding the Datapump extract, you must have a manager instance running. You should have added the main extract and a local trail location to the instance configuration. You will also need the target environment details, for example, hostname, manager port no., and the remote trail file location.

How to do it…

Just like the other GoldenGate processes, the Datapump process is configured using a parameter file. The following are the steps to configure a Datapump process in a GoldenGate source environment:

  1. From the GoldenGate Home, run the GoldenGate Software Command Line Interface (GGSCI) as follows:

    ./ggsci
    
  2. Edit the Datapump process configuration as follows:

    EDIT PARAMS PGGTEST1
    
  3. This command will open an editor window. You need to add the Datapump configuration parameters in this window as follows:

    EXTRACT <DATAPUMP_NAME>
    USERID <SOURCE_GG_USER>@SOURCEDB, PASSWORD ******
    RMTHOST <HOSTNAME_IP_TARGET_SYSTEM>, MGRPORT <TARGET_MGRPORT>
    RMTTRAIL <specification>
    TABLE <replicated_table_specification>;
    

    For example:

    EXTRACT PGGTEST1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD ******
    RMTHOST stdby1-ol6-112.localdomain, MGRPORT 7809
    RMTTRAIL /u01/app/ggate/dirdat/rt
    TABLE scott.*;
    
  4. Save the file and exit the editor window.

  5. Add the Datapump extract to the GoldenGate instance as follows:

    ADD EXTRACT PGGTEST1, EXTTRAILSOURCE /u01/app/ggate/dirdat/tt
    
  6. Add the remote trail to the Datapump configuration as follows:

    ADD RMTTRAIL /u01/app/ggate/dirdat/rt, EXTRACT PGGTEST1
    
  7. Start the Datapump process as follows:

    GGSCI> START EXTRACT PGGTEST1
    

How it works…

Once you have added the parameters to the Datapump parameter file and saved it, you need to add the process to the GoldenGate instance. This is done using the ADD EXTRACT command in step 5. In step 6, we associate a remote trail with the Datapump process and in step 7 we start the Datapump process. When you start the Datapump process you will see the following output:

GGSCI (prim1-ol6-112.localdomain) 10> start extract PGGTEST1
Sending START request to MANAGER ...
EXTRACT PGGTEST1 starting

You can check the status of the Datapump process using the following command:

GGSCI (prim1-ol6-112.localdomain) 10> status extract PGGTEST1
EXTRACT PGGTEST1: RUNNING

Tip

If you are using virtual IPs in your environment for the target host, always configure the virtual IP in the datapump RMTHOST configuration. This virtual IP should also be resolved through DNS. This will ensure automatic discovery while configuring monitoring for GoldenGate configurations.

There's more…

The following are some additional parameters/options that can be specified in the datapump configuration:

  • RMTHOSTOPTIONS: Using this option for the RMTHOST parameter, you can configure additional features such as encryption and compression for trail file transfers.

  • EOFDELAY secs: This parameter controls how often GoldenGate should check the local trail file for new data.

  • MEGABYTES <N>: This parameter controls the size of a remote trail file.

  • PASSTHRU: This parameter tells the datapump not to look up object definitions in the database or in a definitions file. It can be used when the datapump is not performing any filtering, mapping, or conversion.

  • DYNAMICRESOLUTION: Use this parameter to enable extract to build the metadata for each table when the extract encounters its changes for the first time.
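
Building on the earlier example, a datapump using some of these options could be configured as follows; since PASSTHRU avoids database lookups, the USERID line can be omitted, and the COMPRESS option compresses the trail data sent over the network:

    EXTRACT PGGTEST1
    PASSTHRU
    RMTHOST stdby1-ol6-112.localdomain, MGRPORT 7809, COMPRESS
    RMTTRAIL /u01/app/ggate/dirdat/rt
    TABLE scott.*;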

See also

  • Refer to the recipes, Encrypting database user passwords and Encrypting the trail files in Chapter 2, Setting up GoldenGate Replication

 

Setting up a Replicat process


The Replicat processes are the delivery processes which are configured in the target environment. These processes read the changes from the trail files on the target system and apply them to the target database objects. If there are any transformations defined in the replicat configuration, the Replicat process takes care of those transformations as well. You can define the mapping information in the replicat configuration. The Replicat process will then apply the changes to the target database based on the mappings.

Getting ready

Before setting up a Replicat process in the target system, you must have configured and started the Manager process.

How to do it…

The following are the steps to configure a replicat in the target environment:

  1. From the GoldenGate Home directory, run the GoldenGate software command line interface (GGSCI) as follows:

    ./ggsci
    
  2. Log in to the target database through GGSCI as shown in the following code:

    GGSCI> DBLOGIN USERID <USER>, PASSWORD <PW>
    
  3. Add the Checkpoint table as shown in the following code:

    GGSCI> ADD CHECKPOINTTABLE <SCHEMA.TABLE>
    
  4. Edit the Replicat process configuration as shown in the following code:

    GGSCI> EDIT PARAMS RGGTEST1
    
  5. This command will open an editor window. You need to add the replicat configuration parameters in this window as shown in the following code:

    REPLICAT <REPLICAT_NAME>
    USERID <TARGET_GG_USER>@TARGETDB, PASSWORD ******
    DISCARDFILE <DISCARDFILE_SPEC>
    MAP <mapping_specification>;
    

    For example:

    REPLICAT RGGTEST1
    USERID GGATE_ADMIN@TGORTEST, PASSWORD ******
    DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, APPEND, MEGABYTES 500
    MAP SCOTT.*, TARGET SCOTT.*;
    
  6. Save the file and exit the editor.

  7. Add the replicat to the GoldenGate instance as shown in the following code:

    GGSCI> ADD REPLICAT <REPLICAT>, EXTTRAIL <PATH>
    

    For example:

    ADD REPLICAT RGGTEST1, EXTTRAIL /u01/app/ggate/dirdat/rt
    
  8. Start the Replicat process as shown in the following code:

    GGSCI> START REPLICAT <REPLICAT>
    

How it works…

In the preceding procedure we first create a Checkpoint table in the target database. As the name suggests, the Replicat process uses this table to maintain its checkpoints. If the Replicat process crashes and is restarted, it can read this Checkpoint table and start applying the changes from the point where it left off.

Once you have added a Checkpoint table, you need to create a parameter file for the Replicat process. Once the process parameter file is created, it is then added to the GoldenGate instance. At this point, we are ready to start the Replicat process and apply the changes to the target database. You should see an output similar to the following:

GGSCI (stdby1-ol6-112.localdomain) 10> start replicat RGGTEST1
Sending START request to MANAGER ...
REPLICAT RGGTEST1 starting

You can check the status of the Replicat process using the following command:

GGSCI (stdby1-ol6-112.localdomain) 10> status replicat RGGTEST1
REPLICAT RGGTEST1: RUNNING

There's more…

The following are the common parameters that are specified in the replicat configuration:

  • DISCARDFILE: This parameter is used to specify the name of the discard file. If the Replicat process is unable to apply any changes to the target database due to any errors, it writes the record to the discard file.

  • EOFDELAY secs: This parameter controls how often GoldenGate should check the local trail file for new data.

  • REPORTCOUNT: This parameter controls how often the Replicat process writes its progress to the report file.

  • BATCHSQL: This parameter is used to specify the BATCHSQL mode for replicat.

  • ASSUMETARGETDEFS: This parameter tells the Replicat process to assume that the source and target database object structures are the same.
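
A replicat parameter file that uses some of these options could look as follows; the values shown are illustrative only:

    REPLICAT RGGTEST1
    USERID GGATE_ADMIN@TGORTEST, PASSWORD ******
    ASSUMETARGETDEFS
    BATCHSQL
    REPORTCOUNT EVERY 10000 RECORDS, RATE
    DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, APPEND, MEGABYTES 500
    MAP SCOTT.*, TARGET SCOTT.*;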

See also

  • Read the Setting up GoldenGate replication between tables with different structures using defgen recipe in Chapter 2, Setting Up GoldenGate Replication

  • Refer to the Steps to configure a BATCHSQL mode recipe in Chapter 6, Monitoring, Tuning, and Troubleshooting GoldenGate, for further information

About the Author
  • Ankur Gupta

    Ankur Gupta is an Oracle Database Consultant based in London. He has a Master's degree in Computer Science. He started his career as an Oracle developer and later on moved into database administration. He has been working with Oracle Technologies for over 11 years in India and the UK. Over the last 6 years, he has worked as an Oracle Consultant with some of the top companies in the UK in the areas of investment banking, retail, telecom and media. He is an Oracle Certified Exadata, GoldenGate Specialist, and OCP 11g DBA. His main areas of interest are Oracle Exadata, GoldenGate, Dataguard, RAC, and Linux. Outside the techie world, he is an avid cook, photographer, and enjoys travelling.
