Oracle GoldenGate: Advanced Administration Tasks - I

by Ankur Gupta | September 2013

In this article by Ankur Gupta, author of the book Oracle GoldenGate 11g Complete Cookbook, we take a ready, step-by-step approach to performing various GoldenGate administration tasks. With these recipes in hand, you will be able to implement and manage Oracle GoldenGate in an efficient way.

In this article we will cover the following recipes:

  • Upgrading Oracle GoldenGate binaries
  • Table structure changes in GoldenGate environments with similar table definitions
  • Table structure changes in GoldenGate environments with different table definitions
  • Resolving GoldenGate errors using the logdump utility
  • Undoing the applied changes using the reverse utility
  • Creating an Integrated Capture with a downstream database for compressed tables


Upgrading Oracle GoldenGate binaries

In this recipe you will learn how to upgrade GoldenGate binaries. You will also learn about GoldenGate patches and how to apply them.

Getting ready

For this recipe, we will upgrade the GoldenGate binaries from version 11.2.1.0.1 to 11.2.1.0.3 on the source system, that is, prim1-ol6-112 in our case. Both sets of binaries are available from the Oracle E-Delivery website under part numbers V32400-01 and V34339-01 respectively. The 11.2.1.0.1 binaries are installed under /u01/app/ggate/112101.

How to do it...

The steps to upgrade the Oracle GoldenGate binaries are:

  1. Make a new directory for 11.2.1.0.3 binaries:

    mkdir /u01/app/ggate/112103

  2. Copy the binaries ZIP file to the server in the new directory.
  3. Unzip the binaries file:

    [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112103
    [ggate@prim1-ol6-112 112103]$ unzip V34339-01.zip
    Archive:  V34339-01.zip
      inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar
      inflating: Oracle_GoldenGate_11.2.1.0.3_README.doc
      inflating: Oracle_GoldenGate_11.2.1.0.3_README.txt
      inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.3.pdf

  4. Install the new binaries in /u01/app/ggate/112103:

    [ggate@prim1-ol6-112 112103]$ tar -pxvf fbo_ggs_Linux_x64_ora11g_64bit.tar

  5. Stop the processes in the existing installation:

    [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112101
    [ggate@prim1-ol6-112 112101]$ ./ggsci

    Oracle GoldenGate Command Interpreter for Oracle
    Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
    Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14

    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

    GGSCI (prim1-ol6-112.localdomain) 1> stop *
    Sending STOP request to EXTRACT EGGTEST1 ...
    Request processed.
    Sending STOP request to EXTRACT PGGTEST1 ...
    Request processed.

  6. Stop the manager process:

    GGSCI (prim1-ol6-112.localdomain) 2> STOP MGR
    Manager process is required by other GGS processes.
    Are you sure you want to stop it (y/n)? y
    Sending STOP request to MANAGER ...
    Request processed.
    Manager stopped.

  7. Copy the subdirectories to the new binaries:
    [ggate@prim1-ol6-112 112101]$ cp -R dirprm /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R dirrpt /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R dirchk /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R BR /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R dirpcs /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R dirdef /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R dirout /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R dirdat /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112101]$ cp -R dirtmp /u01/app/ggate/112103/
  8. Modify any parameter files under dirprm if you have hardcoded the old binaries path in them.
  9. Edit the ggate user profile and update the value of the GoldenGate binaries home:
    vi .profile
    export GG_HOME=/u01/app/ggate/112103
    
  10. Start the manager process from the new binaries:

    [ggate@prim1-ol6-112 ~]$ cd /u01/app/ggate/112103/
    [ggate@prim1-ol6-112 112103]$ ./ggsci

    Oracle GoldenGate Command Interpreter for Oracle
    Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
    Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21

    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

    GGSCI (prim1-ol6-112.localdomain) 1> START MGR
    Manager started.

  11. Start the processes:

    GGSCI (prim1-ol6-112.localdomain) 18> START EXTRACT *
    Sending START request to MANAGER ...
    EXTRACT EGGTEST1 starting
    Sending START request to MANAGER ...
    EXTRACT PGGTEST1 starting

How it works...

The method to upgrade the GoldenGate binaries is quite straightforward. As seen in the preceding section, you need to download and install the binaries on the server in a new directory. After this, you stop all GoldenGate processes that are running from the existing binaries. Then you copy all the important GoldenGate directories, containing parameter files, trail files, report files, checkpoint files, and recovery files, to the new binaries home. If your trail files are kept on a separate filesystem that is linked to the dirdat directory using a softlink, then you just need to create a new softlink under the new GoldenGate binaries home. Once all the files are copied, you need to modify the parameter files if the path of the existing binaries is hardcoded in them. The same also needs to be done in the OS profile of the ggate user. After this, you simply start the manager process and the rest of the processes from the new home.
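
If your trail files do live on a separate filesystem, relinking dirdat is a one-liner. A minimal sketch, assuming a hypothetical trail filesystem mounted at /ggtrails (substitute your own mount point):

    # Point dirdat in the new home at the shared trail filesystem instead of copying it
    cd /u01/app/ggate/112103
    rm -rf dirdat           # remove the copied directory (the trails stay on /ggtrails)
    ln -s /ggtrails dirdat  # recreate the softlink under the new binaries home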

GoldenGate patches are all delivered as full binary sets. This makes the procedure for patching the binaries exactly the same as that for a major release upgrade.

Table structure changes in GoldenGate environments with similar table definitions

Almost all application systems in IT undergo change over a period of time. This change might be a fix for an identified bug, an enhancement, or a configuration change required by a change in some other part of the system. The data that you replicate using GoldenGate will most likely be part of some application schema. These schemas, just like the application software, sometimes require changes, which are usually driven by the application vendor. If you are replicating DDL along with DML in your environment, then these schema changes will most likely be replicated by GoldenGate itself. However, if you are replicating only DML and there are DDL changes in the schema, particularly to the tables that you are replicating, then these changes will affect the replication and might even break it.
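
For contrast, a DDL-enabled Extract carries a DDL clause in its parameter file, so such schema changes flow through automatically. A minimal sketch only (this recipe assumes DML-only replication; DDL support in GoldenGate 11g also requires the DDL trigger and its supporting objects to be installed in the source database):

    -- Sketch of a DDL-enabled Extract parameter file; names as used elsewhere in this article
    EXTRACT EGGTEST1
    USERID GGATE_ADMIN@DBORATEST, PASSWORD GGATE_ADMIN
    DDL INCLUDE MAPPED
    EXTTRAIL /u01/app/ggate/dirdat/st
    TABLE SCOTT.*;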

In this recipe, you will learn how to update the GoldenGate configuration to accommodate the schema changes that are done to the source system. This recipe assumes that the definitions of the tables that are replicated are similar in both the source and target databases.

Getting ready

For this recipe we are making the following assumptions:

  1. GoldenGate is set up to replicate only DML changes between the source and target environments.
  2. The application will be stopped for making schema changes in the source environment.
  3. The table structures in the source and target database are similar.
  4. The replication is configured for all objects owned by a SCOTT user using a SCOTT.* clause.
  5. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database.

The schema changes performed in this recipe are as follows:

  1. Add a new column called DOB (DATE) to the EMP table.
  2. Modify the DNAME column in the DEPT table to VARCHAR(20).
  3. Add a new table called ITEMS to the SCOTT schema:

    ITEMS
      ITEMNO NUMBER(5) PRIMARY KEY
      NAME   VARCHAR(20)

  4. Add a new table called SALES to the SCOTT schema:

    SALES
      INVOICENO NUMBER(9) PRIMARY KEY
      ITEMNO    NUMBER(5) FOREIGN KEY ITEMS(ITEMNO)
      EMPNO     NUMBER(4) FOREIGN KEY EMP(EMPNO)

  5. Load the values for the DOB column in the EMP table.
  6. Load a few records in the ITEMS table.

How to do it…

Here are the steps that you can follow to implement the preceding schema changes in the source environment:

  1. Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database.
  2. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract.
  3. Check the latest timestamp read by the Extract and Datapump processes and ensure it is the current timestamp:

    GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1
    GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT *

    EXTRACT    EGGTEST1  Last Started 2013-03-25 22:24   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:07 ago)
    Log Read Checkpoint  Oracle Redo Logs
                         2013-03-25 22:35:06  Seqno 350, RBA 11778560
                         SCN 0.11806849 (11806849)

    EXTRACT    PGGTEST1  Last Started 2013-03-25 22:24   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
    Log Read Checkpoint  File /u01/app/ggate/dirdat/st000010
                         2013-03-25 22:35:05.000000  RBA 7631

  4. Stop the Extract and Datapump processes in the source environment:

    GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT *
    Sending STOP request to EXTRACT EGGTEST1 ...
    Request processed.
    Sending STOP request to EXTRACT PGGTEST1 ...
    Request processed.

  5. Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3:

    GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT *

    REPLICAT   RGGTEST1  Last Started 2013-03-25 22:25   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
    Log Read Checkpoint  File ./dirdat/rt000061
                         2013-03-25 22:37:04.950188  RBA 10039

  6. Stop the Replicat process in the target environment:

    GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT *
    Sending STOP request to REPLICAT RGGTEST1 ...
    Request processed.

  7. Apply the schema changes to the source database:

    SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE;
    Table altered.
    SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20);
    Table altered.
    SQL> CREATE TABLE SCOTT.ITEMS (
           ITEMNO NUMBER(5) PRIMARY KEY,
           NAME   VARCHAR(20));
    Table created.
    SQL> CREATE TABLE SCOTT.SALES (
           INVOICENO NUMBER(9) PRIMARY KEY,
           ITEMNO    NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO),
           EMPNO     NUMBER(4) REFERENCES SCOTT.EMP(EMPNO));
    Table created.
    SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY');
    14 rows updated.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER');
    1 row created.
    SQL> COMMIT;
    Commit complete.

  8. Apply the schema changes to the target database:

    SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE;
    Table altered.
    SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20);
    Table altered.
    SQL> CREATE TABLE SCOTT.ITEMS (
           ITEMNO NUMBER(5) PRIMARY KEY,
           NAME   VARCHAR(20));
    Table created.
    SQL> CREATE TABLE SCOTT.SALES (
           INVOICENO NUMBER(9) PRIMARY KEY,
           ITEMNO    NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO),
           EMPNO     NUMBER(4) REFERENCES SCOTT.EMP(EMPNO));
    Table created.
    SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY');
    14 rows updated.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER');
    1 row created.
    SQL> COMMIT;
    Commit complete.

  9. Add supplemental logging for the newly added tables:

    GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@DBORATEST
    Password:
    Successfully logged into database.
    GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS
    Logging of supplemental redo data enabled for table SCOTT.ITEMS.
    GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES
    Logging of supplemental redo data enabled for table SCOTT.SALES.

  10. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch:

    GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW
    EXTRACT altered.
    GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW
    EXTRACT altered.

  11. Start the Extract and Datapump in the source environment:

    GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT *
    Sending START request to MANAGER ...
    EXTRACT EGGTEST1 starting
    Sending START request to MANAGER ...
    EXTRACT PGGTEST1 starting

  12. Start the Replicat process in the target environment:

    GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1
    Sending START request to MANAGER ...
    REPLICAT RGGTEST1 starting

How it works...

The preceding steps cover a high-level procedure that you can follow to modify the structure of the replicated tables in your GoldenGate configuration. Before you start to alter any processes or parameter files, you need to ensure that the applications are stopped and that no user sessions in the database are modifying the data in the tables that you are replicating. Once the application is stopped, we check that all the redo data has been processed by the GoldenGate processes and then stop them. At this point we run the scripts that make the DDL changes to the database. This step needs to be performed on both the source and the target database, as we are not replicating these changes using GoldenGate. Once this is done, we alter the GoldenGate processes to start from the current time and start them.
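
The checks and stops in steps 3 to 6 also lend themselves to scripting, since ggsci reads commands from standard input. A minimal sketch for the source side, using the paths from this article (run it as the ggate user):

    # Report the current read position, then stop the Extract and Datapump processes
    cd /u01/app/ggate/112101
    ./ggsci <<EOF
    INFO EXTRACT *
    STOP EXTRACT *
    EOF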

There's more...

Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in cases where the environment does not satisfy these conditions:

Specific tables defined in GoldenGate parameter files

Unlike the earlier example, where the tables are defined in the parameter files using a schema qualifier, for example SCOTT.*, if you have individual tables defined in the GoldenGate parameter files, you need to modify those parameter files to add the newly created tables so that they are included in the replication.

Individual table permissions granted to the GoldenGate Admin user

If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment.
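
For example, for the tables added in this recipe, the additional grants would look something like the following sketch (the exact privilege list should mirror whatever was granted on the existing tables):

    -- In the source database: let the GoldenGate user read the new tables
    GRANT SELECT ON SCOTT.ITEMS TO GGATE_ADMIN;
    GRANT SELECT ON SCOTT.SALES TO GGATE_ADMIN;

    -- In the target database: let the GoldenGate user apply changes to them
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.ITEMS TO GGATE_ADMIN;
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCOTT.SALES TO GGATE_ADMIN;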

Supplemental logging for modified tables without any keys

If you are adding or deleting any columns of tables in the source database which do not have any primary/unique keys, you will need to drop the existing supplemental log groups and re-add them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group has to be modified when the structure of the underlying table is modified.
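
A sketch of re-creating the supplemental log group for such a keyless table after the column change (SOME_KEYLESS_TABLE is a hypothetical name; a DBLOGIN is assumed to have been done already):

    GGSCI> DELETE TRANDATA SCOTT.SOME_KEYLESS_TABLE
    GGSCI> ADD TRANDATA SCOTT.SOME_KEYLESS_TABLE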

Supplemental log groups with all columns for modified tables

In some cases, you will need supplemental logging enabled on all columns of the source tables that you are replicating. This mostly applies to consolidation replication topologies, where all changes are captured and converted into INSERTs in the target environment, which is usually a data warehouse. In such cases, you need to drop and re-add the supplemental logging on the tables in which you are adding or removing any columns.
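
At the database level, all-column logging can be re-established with plain SQL once the structure change is done; a sketch for the EMP table (assuming the logging was originally enabled this way rather than via a GoldenGate-managed log group):

    -- Drop and re-create all-column supplemental logging after the DDL change
    ALTER TABLE SCOTT.EMP DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    ALTER TABLE SCOTT.EMP ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;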

Table structure changes in GoldenGate environments with different table definitions

In this recipe you will learn how to perform table structure changes in a replication environment where the table structures in the source and target environments are not similar.

Getting ready

For this recipe we are making the following assumptions:

  1. GoldenGate is set up to replicate only DML changes between the source and target environments.
  2. The application will be stopped for making schema changes in the source environment.
  3. The table structures in the source and target databases are not similar.
  4. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database.
  5. The definition file was generated for the source schema and is configured in the replicat parameter file.

The schema changes performed in this recipe are as follows:

  1. Add a new column called DOB (DATE) to the EMP table.
  2. Modify the DNAME column in the DEPT table to VARCHAR(20).
  3. Add a new table called ITEMS to the SCOTT schema:

    ITEMS
      ITEMNO NUMBER(5) PRIMARY KEY
      NAME   VARCHAR(20)

  4. Add a new table called SALES to the SCOTT schema:

    SALES
      INVOICENO NUMBER(9) PRIMARY KEY
      ITEMNO    NUMBER(5) FOREIGN KEY ITEMS(ITEMNO)
      EMPNO     NUMBER(4) FOREIGN KEY EMP(EMPNO)

  5. Load the values for the DOB column in the EMP table.
  6. Load a few records in the ITEMS table.

How to do it...

Here are the steps that you can follow to implement the previous schema changes in the source environment:

  1. Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database.
  2. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract.
  3. Check the latest timestamp read by the Extract and Datapump processes, and ensure it is the current timestamp:

    GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1
    GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT *

    EXTRACT    EGGTEST1  Last Started 2013-03-28 10:12   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:07 ago)
    Log Read Checkpoint  Oracle Redo Logs
                         2013-03-28 10:16:06  Seqno 352, RBA 12574320
                         SCN 0.11973456 (11973456)

    EXTRACT    PGGTEST1  Last Started 2013-03-28 10:12   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
    Log Read Checkpoint  File /u01/app/ggate/dirdat/st000010
                         2013-03-28 10:15:43.000000  RBA 8450

  4. Stop the Extract and Datapump processes in the source environment:

    GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT *
    Sending STOP request to EXTRACT EGGTEST1 ...
    Request processed.
    Sending STOP request to EXTRACT PGGTEST1 ...
    Request processed.

  5. Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3:

    GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT *

    REPLICAT   RGGTEST1  Last Started 2013-03-28 10:15   Status RUNNING
    Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
    Log Read Checkpoint  File ./dirdat/rt000062
                         2013-03-28 10:15:04.950188  RBA 10039

  6. Stop the Replicat process in the target environment:

    GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT *
    Sending STOP request to REPLICAT RGGTEST1 ...
    Request processed.

  7. Apply the schema changes to the source database:

    SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE;
    Table altered.
    SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20);
    Table altered.
    SQL> CREATE TABLE SCOTT.ITEMS (
           ITEMNO NUMBER(5) PRIMARY KEY,
           NAME   VARCHAR(20));
    Table created.
    SQL> CREATE TABLE SCOTT.SALES (
           INVOICENO NUMBER(9) PRIMARY KEY,
           ITEMNO    NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO),
           EMPNO     NUMBER(4) REFERENCES SCOTT.EMP(EMPNO));
    Table created.
    SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY');
    14 rows updated.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER');
    1 row created.
    SQL> COMMIT;
    Commit complete.

  8. Apply the schema changes to the target database:

    SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE;
    Table altered.
    SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20);
    Table altered.
    SQL> CREATE TABLE SCOTT.ITEMS (
           ITEMNO NUMBER(5) PRIMARY KEY,
           NAME   VARCHAR(20));
    Table created.
    SQL> CREATE TABLE SCOTT.SALES (
           INVOICENO NUMBER(9) PRIMARY KEY,
           ITEMNO    NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO),
           EMPNO     NUMBER(4) REFERENCES SCOTT.EMP(EMPNO));
    Table created.
    SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY');
    14 rows updated.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD');
    1 row created.
    SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER');
    1 row created.
    SQL> COMMIT;
    Commit complete.

  9. Add supplemental logging for the newly added tables:

    GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@DBORATEST
    Password:
    Successfully logged into database.
    GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS
    Logging of supplemental redo data enabled for table SCOTT.ITEMS.
    GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES
    Logging of supplemental redo data enabled for table SCOTT.SALES.

  10. Update the parameter file for generating definitions as follows:

    vi $GG_HOME/dirprm/defs.prm

    DEFSFILE ./dirdef/defs.def
    USERID ggate_admin@dboratest, PASSWORD XXXX
    TABLE SCOTT.EMP;
    TABLE SCOTT.DEPT;
    TABLE SCOTT.BONUS;
    TABLE SCOTT.DUMMY;
    TABLE SCOTT.SALGRADE;
    TABLE SCOTT.ITEMS;
    TABLE SCOTT.SALES;

  11. Generate the definitions in the source environment:

    ./defgen paramfile ./dirprm/defs.prm

  12. Push the definitions file to the target server using scp:

    scp ./dirdef/defs.def stdby1-ol6-112:/u01/app/ggate/dirdef/

  13. Edit the Extract and Datapump process parameter files to include the newly created tables, if you have specified individual table names in them.
  14. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch:

    GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW
    EXTRACT altered.
    GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW
    EXTRACT altered.

  15. Start the Extract and Datapump in the source environment:

    GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT *
    Sending START request to MANAGER ...
    EXTRACT EGGTEST1 starting
    Sending START request to MANAGER ...
    EXTRACT PGGTEST1 starting

  16. Edit the Replicat process parameter file to include the tables:

    ./ggsci
    EDIT PARAMS RGGTEST1

    REPLICAT RGGTEST1
    USERID GGATE_ADMIN@TGORTEST, PASSWORD GGATE_ADMIN
    DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc, APPEND, MEGABYTES 500
    SOURCEDEFS ./dirdef/defs.def
    MAP SCOTT.BONUS, TARGET SCOTT.BONUS;
    MAP SCOTT.SALGRADE, TARGET SCOTT.SALGRADE;
    MAP SCOTT.DEPT, TARGET SCOTT.DEPT;
    MAP SCOTT.DUMMY, TARGET SCOTT.DUMMY;
    MAP SCOTT.EMP, TARGET SCOTT.EMP;
    MAP SCOTT.EMP, TARGET SCOTT.EMP_DIFFCOL_ORDER;
    MAP SCOTT.EMP, TARGET SCOTT.EMP_EXTRACOL, COLMAP(USEDEFAULTS, LAST_UPDATE_TIME = @DATENOW());
    MAP SCOTT.SALES, TARGET SCOTT.SALES;
    MAP SCOTT.ITEMS, TARGET SCOTT.ITEMS;

  17. Start the Replicat process in the target environment:

    GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1
    Sending START request to MANAGER ...
    REPLICAT RGGTEST1 starting

How it works...

You can follow the previously mentioned procedure to apply any DDL changes to the tables in the source database. This procedure is valid for environments where existing table structures between the source and the target databases are not similar.

The key things to note in this method are:

  1. The changes should only be made when all the changes extracted by GoldenGate are applied to the target database, and the replication processes are stopped.
  2. Once the DDL changes have been performed in the source database, the definitions file needs to be regenerated.
  3. The changes that you are making to the table structures need to be performed on both sides.
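
Because the definitions file must be regenerated after every such change, steps 10 to 12 are worth wrapping in a small script; a sketch using the paths from this recipe:

    # Regenerate the source table definitions and ship them to the target host
    cd $GG_HOME
    ./defgen paramfile ./dirprm/defs.prm
    scp ./dirdef/defs.def stdby1-ol6-112:/u01/app/ggate/dirdef/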

There's more…

Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in cases where the environment does not satisfy these conditions:

Individual table permissions granted to the GoldenGate Admin user

If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment.

Supplemental logging for modified tables without any keys

If you are adding or deleting any columns of tables in the source database which do not have any primary/unique keys, you will need to drop the existing supplemental log groups and re-add them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will need to be modified when the structure of the underlying table is modified.

Supplemental log groups with all columns for modified tables

In some cases, you will need supplemental logging enabled on all columns of the source tables that you are replicating. This mostly applies to consolidation replication topologies, where all changes are captured and converted into INSERTs in the target environment, which is usually a data warehouse. In such cases, you need to drop and re-add the supplemental logging on the tables in which you are adding or removing any columns.


Resolving GoldenGate errors using the logdump utility

In the previous recipes, we learned how to use various GoldenGate commands to manage GoldenGate processes and scan the logs. If there is an issue in the replication and the processes have abended, you can always restart them. But if the processes abended due to some data error, a restart will not resolve the issue, and you will need to drill down further to find the exact cause of the data error. GoldenGate uses a proprietary format to write data to the trail files. Because of this, you cannot just open a trail file in an editor and see its contents.

In this recipe we will learn how to use a tool called logdump, with which you can view the contents of a trail file. If the replication abends due to a particular record, you can evaluate the values of the failing record and understand why it is failing. This will help you decide the further course of action to fix the replication.

Getting ready

In this recipe we will go through a simple replication setup between the tables of the SCOTT schema, which is failing due to a missing row in the target table. The GoldenGate Extract has mined a DELETE record from the source environment, but the Replicat fails when it tries to apply it to the target environment. Because of this missing row in the target environment, the replication is unable to continue.

How to do it...

The steps to resolve this issue and resume the replication are as follows:

  1. The last statements that ran in the source environment were:

    SQL> DELETE SCOTT.EMP WHERE EMPNO=7369;
    1 row deleted.
    SQL> INSERT INTO SCOTT.EMP VALUES (8800,'ROGER','ANALYST',7934,'23-DEC-1972',2300,0,10);
    1 row created.
    SQL> COMMIT;

  2. The Replicat process has abended in the target environment. Let's verify its status:

    GGSCI (stdby1-ol6-112.localdomain) 2> status *
      REPLICAT RGGTEST1: ABENDED

  3. The error from the replicat report file is:

    2013-05-01 00:41:11  WARNING OGG-01431  Aborted grouped transaction on 'SCOTT.EMP', Mapping error.
    2013-05-01 00:41:11  WARNING OGG-01003  Repositioning to rba 7874 in seqno 65.
    2013-05-01 00:41:11  WARNING OGG-01151  Error mapping from SCOTT.EMP to SCOTT.EMP.
    2013-05-01 00:41:11  WARNING OGG-01003  Repositioning to rba 7874 in seqno 65.
    Process Abending : 2013-04-30 22:57:07

  4. Further details of the error can be checked from the discard file as follows:

    Oracle GoldenGate Delivery for Oracle process started, group RGGTEST1 discard file opened: 2013-04-30 22:59:57
    Key column ENAME (1) is missing from delete on table SCOTT.EMP
    Key column JOB (2) is missing from delete on table SCOTT.EMP
    Key column MGR (3) is missing from delete on table SCOTT.EMP
    Key column HIREDATE (4) is missing from delete on table SCOTT.EMP
    Key column SAL (5) is missing from delete on table SCOTT.EMP
    Key column COMM (6) is missing from delete on table SCOTT.EMP
    Key column DEPTNO (7) is missing from delete on table SCOTT.EMP
    Missing 7 key columns in delete for table SCOTT.EMP.
    Current time: 2013-04-30 23:24:10
    Discarded record from action ABEND on error 0
    Aborting transaction on ./dirdat/rt beginning at seqno 63 rba 315017
    error at seqno 63 rba 315017
    Problem replicating SCOTT.EMP to SCOTT.EMP
    Mapping problem with delete record (target format)...
    *
    EMPNO = 7369
    *
    Process Abending : 2013-04-30 23:24:10

  5. Let's check whether this record exists in the target database or not:

    SQL> SELECT * FROM SCOTT.EMP WHERE EMPNO=7369;

    no rows selected

  6. The current position of the Replicat process is:

    GGSCI (stdby1-ol6-112.localdomain) 2> INFO RGGTEST1

    REPLICAT   RGGTEST1  Last Started 2013-05-01 00:39   Status ABENDED
    Checkpoint Lag       00:00:00 (updated 00:00:23 ago)
    Log Read Checkpoint  File ./dirdat/rt000065
                         2013-05-01 00:40:17.764177  RBA 7874

  7. Start the logdump utility:

    [ggate@stdby1-ol6-112 ggate]$ cd /u01/app/ggate
    [ggate@stdby1-ol6-112 ggate]$ ./logdump

  8. Open the trail file sequence 65:

    Logdump 10 >open ./dirdat/rt000065
    Current LogTrail is /u01/app/ggate/dirdat/rt000065

  9. Set up the logdump utility to show the additional useful information:

    Logdump 11 >ghdr on
    Logdump 12 >detail on
    Logdump 13 >detail data

  10. Position the logdump to the position of the failing record in the trail file:

    Logdump 16 >pos 7874
    Reading forward from RBA 7874

  11. Now list the failing record:

    Logdump 17 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
    UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)
    RecLength  :    14  (x000e)   IO Time    : 2013/05/01 00:41:06.681.212
    IOType     :     3  (x03)     OrigNode   :   255  (xff)
    TransInd   :     .  (x00)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :        356       AuditPos   : 23233552
    Continued  :     N  (x00)     RecCount   :     1  (x01)

    2013/05/01 00:41:06.681.212 Delete               Len    14 RBA 7874
    Name: SCOTT.EMP
    Before Image:                                             Partition 4   G  b
     0000 000a 0000 0000 0000 0000 1cc9                | ..............
    Column     0 (x0000), Len    10 (x000a)

  12. We want to skip this record and find the position of the next one:

    Logdump 164 >n
    ___________________________________________________________________
    Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
    UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
    RecLength  :   123  (x007b)   IO Time    : 2013/05/01 00:41:06.681.212
    IOType     :     5  (x05)     OrigNode   :   255  (xff)
    TransInd   :     .  (x02)     FormatType :     R  (x52)
    SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
    AuditRBA   :        356       AuditPos   : 23234576
    Continued  :     N  (x00)     RecCount   :     1  (x01)

    2013/05/01 00:41:06.681.212 Insert               Len   123 RBA 8002
    Name: SCOTT.EMP
    After Image:                                              Partition 4   G  e
     0000 000a 0000 0000 0000 0000 2260 0001 0009 0000 | ............"`......
     0005 524f 4745 5200 0200 0b00 0000 0741 4e41 4c59 | ..ROGER........ANALY
     5354 0003 000a 0000 0000 0000 0000 1efe 0004 0015 | ST..................
     0000 3139 3732 2d31 322d 3233 3a30 303a 3030 3a30 | ..1972-12-23:00:00:0
     3000 0500 0a00 0000 0000 0000 0382 7000 0600 0a00 | 0.............p.....
     0000 0000 0000 0000 0000 0700 0a00 0000 0000 0000 | ....................
     0000 0a                                            | ...
    Column     0 (x0000), Len    10 (x000a)
    Column     1 (x0001), Len     9 (x0009)
    Column     2 (x0002), Len    11 (x000b)
    Column     3 (x0003), Len    10 (x000a)
    Column     4 (x0004), Len    21 (x0015)
    Column     5 (x0005), Len    10 (x000a)
    Column     6 (x0006), Len    10 (x000a)
    Column     7 (x0007), Len    10 (x000a)

  13. Position the Replicat process at the RBA of this record, that is, 8002:

    GGSCI (stdby1-ol6-112.localdomain) 1> alter replicat rggtest1, extrba 8002
    REPLICAT altered.

  14. Start the replicat:

    GGSCI (stdby1-ol6-112.localdomain) 2> start replicat rggtest1
    Sending START request to MANAGER ...
    REPLICAT RGGTEST1 starting

  15. Verify the newly inserted record in the target environment:

    SQL> SELECT * FROM EMP WHERE EMPNO=8800;

         EMPNO ENAME  JOB       MGR HIREDATE   SAL  COMM DEPTNO
    ---------- ------ -------- ---- --------- ---- ----- ------
          8800 ROGER  ANALYST  7934 23-DEC-72 2300     0     10

How it works...

This is a very simple example of a typical data mismatch issue in a GoldenGate environment. A record was deleted from the EMP table in the source database, and this record was extracted by the GoldenGate processes. When the Replicat process tried to apply it to the target environment, it failed because the record did not exist there.

While troubleshooting such issues, the first thing to check is why the data in the target database is different. Something else could be modifying the data in the target database. Once you have identified the source of this second modification and taken care of it, you need to fix the data anomaly and resume the replication. For this, we use a GoldenGate tool called logdump. Using logdump, you can scan through the trail files to see their contents, and you can move through the various records in a trail file.

In the earlier example, we first check the status of the Replicat process and verify that it has abended. Since the process has abended, we check the reason in the report/discard file. From steps 3 and 4, it is quite clear that the process abended because it could not apply the delete record. Next, we note the current sequence number and RBA of the Replicat process. We then use logdump to dump the contents of this trail file at this RBA. In steps 7 and 8, we start logdump and open the trail file. By default, logdump does not display the header record when you view a data record. In step 9, we instruct logdump, using the ghdr on and detail data commands, to output the metadata information as well. After this, we move the current pointer in the trail file to where the Replicat process was failing and dump the next record using the n command. This is the failing record, and we want to skip it, so we run the n command again to move to the next record. You can see in step 12 that this is the Insert record that was executed in the source database after the delete. This is the point from which we want the Replicat process to continue, so we alter the Replicat process to start from this RBA and resume it. Once the process is resumed, it continues working fine. In step 15, we verify that the Insert record was applied by querying it in the target database.
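
Repositioning past the failing record is only one way out. If the delete should in fact be applied, an alternative sketch is to re-create the missing row in the target database (the values below are the standard SCOTT.EMP demo data for EMPNO 7369) and then simply restart the Replicat process without altering its position:

    -- Re-create the missing row so that the captured delete can apply cleanly
    SQL> INSERT INTO SCOTT.EMP
         VALUES (7369,'SMITH','CLERK',7902,TO_DATE('17-12-1980','DD-MM-YYYY'),800,NULL,20);
    SQL> COMMIT;

    GGSCI (stdby1-ol6-112.localdomain) 3> start replicat rggtest1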

There's more...

Logdump is a very handy tool when it comes to troubleshooting various data issues in a GoldenGate replication. There are a few other important logdump commands which we will discuss here:

Count

With the count command you can count the number of records in a GoldenGate trail file:

Logdump 166 >count
LogTrail /u01/app/ggate/dirdat/rt000065 has 33 records
Total Data Bytes           5363
  Avg Bytes/Record          162
Delete                        1
Insert                        1
FieldComp                    28
RestartOK                     2
Others                        1
Before Images                 1
After Images                 31

Average of 6 Transactions
  Bytes/Trans .....       1157
  Records/Trans ...          5
  Files/Trans .....          1

Scan for timestamp

Scan for timestamp is useful when you want to find a record that was modified at a particular timestamp in a trail file:

Logdump 167 >sfts 2013-05-01 00:33:00
Scan for timestamp >= 2013/04/30 23:33:00.000.000 GMT

2013/05/01 00:35:14.710.374 FieldComp            Len   121 RBA 1920
Name: SCOTT.EMP
After Image:                                              Partition 4   G  b
 0000 000a 0000 0000 0000 0000 1cc9 0001 0009 0000 | ....................
 0005 534d 4954 4800 0200 0900 0000 0543 4c45 524b | ..SMITH........CLERK
 0003 000a 0000 0000 0000 0000 1ede 0004 0015 0000 | ....................
 3139 3830 2d31 322d 3137 3a30 303a 3030 3a30 3000 | 1980-12-17:00:00:00.
 0500 0a00 0000 0000 0000 0138 8000 0600 0aff ff00 | ...........8........
 0000 0000 0000 0000 0700 0a00 0000 0000 0000 0000 | ....................
 14                                                 | .

Filter on SCN

If you know the SCN from the source database, you can scan the trail file for the SCN using the logdump filter on csn command as follows:

Logdump 210 >filter on csn 11910911
Logdump 211 >n

2013/05/01 00:11:46.633.724 FileHeader           Len  1786 RBA 0
Name: *FileHeader*
 3000 047d 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0..}0...GG..TL..1...
 0003 3200 0004 2000 0000 3300 0008 02f2 01cf 94d4 | ..2... ...3.........
 4bfc 3400 0035 0033 7572 693a 7072 696d 312d 6f6c | K.4..5.3uri:prim1-ol
 362d 3131 323a 6c6f 6361 6c64 6f6d 6169 6e3a 3a75 | 6-112:localdomain::u
 3031 3a61 7070 3a67 6761 7465 3a31 3132 3130 3135 | 01:app:ggate:1121015
 0002 bb35 0000 3500 3375 7269 3a70 7269 6d31 2d6f | ...5..5.3uri:prim1-o
 6c36 2d31 3132 3a6c 6f63 616c 646f 6d61 696e 3a3a | l6-112:localdomain::

Undoing the applied changes using the reverse utility

Part of the job of setting up a GoldenGate replication consists of defining what to replicate. Once these rules are in place, the GoldenGate processes scan the redo data for any records belonging to the replicated tables. Most of the time these changes are intended and generated by some application. However, sometimes data updates are performed in the source environment by mistake, and you would ideally not want them to be replicated to the target environment. In such scenarios, the administrator might want to roll back the changes. You can perform such tasks using the latest database technologies, for example flashback. However, it is very difficult to roll back only a set of transactions from the database, especially when you have applied complex filtering in the GoldenGate configuration. In such cases you need another approach, one which reverses the changes that were applied using GoldenGate. GoldenGate provides a reverse utility with which you can undo the changes in the target database. In this recipe we will go through how to configure and use this utility, following a simple scenario in which a few statements are applied to the target database and then reversed.

Getting ready

For this recipe we will refer to the setup done in Setting up a simple GoldenGate replication configuration between two single node databases in Setting up GoldenGate Replication. We will perform a few simple INSERT/UPDATE/DELETE operations on some tables owned by the SCOTT user. Once these changes are applied to the target database, we will verify them and then reverse them using the reverse utility.

How to do it...

In order to demonstrate reversing the changes applied by GoldenGate, let's first perform some changes in the source environment:

  1. List the records in the EMP and DEPT tables in the target database:

    SQL> SELECT * FROM EMP;

         EMPNO ENAME      JOB        MGR HIREDATE    SAL  COMM DEPTNO
    ---------- ---------- --------- ---- --------- ----- ----- ------
          7934 MILLER     CLERK     7782 23-JAN-82  1300           10
          7902 FORD       ANALYST   7566 03-DEC-81  3000           20
          7900 JAMES      CLERK     7698 03-DEC-81   950           30
          7876 ADAMS      CLERK     7788 12-JAN-83  1100           20
          7844 TURNER     SALESMAN  7698 08-SEP-81  1500     0     30
          7839 KING       PRESIDENT      17-NOV-81  5000           10
          7788 SCOTT      ANALYST   7566 09-DEC-82  3000           20
          7782 CLARK      MANAGER   7839 09-JUN-81  2450           10
          7698 BLAKE      MANAGER   7839 01-MAY-81  2850           30
          7654 MARTIN     SALESMAN  7698 28-SEP-81  1250  1400     30
          7566 JONES      MANAGER   7839 02-APR-81  2975           20
          7521 WARD       SALESMAN  7698 22-FEB-81  1250   500     30
          7499 ALLEN      SALESMAN  7698 20-FEB-81  1600   300     30
          7369 SMITH      CLERK     7902 17-DEC-80   800           20

    14 rows selected.

    SQL> SELECT * FROM DEPT;

        DEPTNO DNAME          LOC
    ---------- -------------- -------------
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            40 OPERATIONS     BOSTON

    Now run the following statements in the source database:

    SQL> SELECT TO_CHAR(SYSDATE,'DD-MON-YYYY HH24:MI:SS') FROM DUAL;

    TO_CHAR(SYSDATE,'DD-MON-YYYYH
    -----------------------------
    01-MAY-2013 21:22:36

    SQL> DELETE EMP;
    14 rows deleted.
    SQL> INSERT INTO SCOTT.EMP VALUES (8800,'ROGER','ANALYST',7934,'23-DEC-1972',2300,0,10);
    1 row created.
    SQL> INSERT INTO DEPT VALUES (50,'MARKETING','SAN JOSE');
    1 row created.
    SQL> COMMIT;
    Commit complete.

    SQL> SELECT TO_CHAR(SYSDATE,'DD-MON-YYYY HH24:MI:SS') FROM DUAL;

    TO_CHAR(SYSDATE,'DD-MON-YYYYH
    -----------------------------
    01-MAY-2013 21:23:52

  2. Verify the current data in the EMP and DEPT tables in the target database:

    SQL> SELECT * FROM EMP;

         EMPNO ENAME      JOB        MGR HIREDATE    SAL  COMM DEPTNO
    ---------- ---------- --------- ---- --------- ----- ----- ------
          8800 ROGER      ANALYST   7934 23-DEC-72  2300     0     10

    SQL> SELECT * FROM DEPT;

        DEPTNO DNAME          LOC
    ---------- -------------- -------------
            50 MARKETING      SAN JOSE
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            40 OPERATIONS     BOSTON

Now, we will run through the following steps to reverse the preceding changes:

  1. Stop the Extract and Datapump processes in the source environment:

    GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed.

  2. Stop the Replicat process in the target environment:

    GGSCI (stdby1-ol6-112.localdomain) 4> STOP RGGTEST1
    Sending STOP request to REPLICAT RGGTEST1 ...
    Request processed.

  3. Set up an Extract in the source environment just to read the records between the timestamps that we noted in step 1:

    GGSCI (prim1-ol6-112.localdomain) 1> EDIT PARAMS EGGREV

    EXTRACT EGGREV
    USERID GGATE_ADMIN@DBORATEST, PASSWORD GGATE_ADMIN
    NOCOMPRESSDELETES
    GETUPDATEBEFORES
    END 2013-05-01 21:23:52
    RMTHOST stdby1-ol6-112, MGRPORT 8809
    RMTTRAIL /u01/app/ggate/dirdat/reverse/rt
    TABLE SCOTT.*;

  4. Add the Extract to the source GoldenGate config:

    GGSCI (prim1-ol6-112.localdomain) 1> ADD EXTRACT EGGREV, TRANLOG, BEGIN 2013-05-01 21:22:36
    EXTRACT added.

  5. Add the remote trail for the Extract:

    GGSCI (prim1-ol6-112.localdomain) 2> ADD RMTTRAIL /u01/app/ggate/dirdat/reverse/rt, EXTRACT EGGREV
    RMTTRAIL added.

  6. Start the Extract process; it will extract only the records between the timestamps noted in step 1:

    GGSCI (prim1-ol6-112.localdomain) 3> start eggrev
    Sending START request to MANAGER ...
    EXTRACT EGGREV starting

  7. The Extract process started in step 6 has now created a trail file on the target system.
  8. We will use the reverse utility to generate a trail file with the reversed records:

    [ggate@stdby1-ol6-112 ggate]$ ./reverse ./dirdat/reverse/rt000000 ./dirdat/reverse/st000000

    Oracle GoldenGate Dynamic Rollback
    Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230
    Linux, x64, 64bit (optimized) on Apr 23 2012 04:59:01

    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

    *Warning* Source file contained Deletes which maybe compressed

    Reversed ./dirdat/reverse/rt000000 to /u01/app/ggate/dirdat/reverse/st000000

    Total Data Bytes            1890
      Avg Bytes/Record           118
    Delete                        14
    Insert                         2
    Before Images                 14
    After Images                   2

  9. Now we will create a Replicat process to apply the records generated in the ./dirdat/reverse/st000000 trail file:

    GGSCI (stdby1-ol6-112.localdomain) 1> EDIT PARAMS RGGREV
    REPLICAT RGGREV
    USERID GGATE_ADMIN@TGORTEST, PASSWORD GGATE_ADMIN
    END 2013-05-01 21:23:52
    DISCARDFILE /u01/app/ggate/dirrpt/RGGREV.dsc,append,MEGABYTES 500
    ASSUMETARGETDEFS
    MAP SCOTT.*, TARGET SCOTT.*;

  10. Add the replicat to the target GoldenGate configuration:

    GGSCI (stdby1-ol6-112.localdomain) 3> ADD REPLICAT RGGREV, EXTTRAIL ./dirdat/reverse/st, CHECKPOINTTABLE GGATE_ADMIN.CHECKPOINT
    REPLICAT added.

  11. Start the replicat to apply the changes generated by the reverse utility:

    GGSCI (stdby1-ol6-112.localdomain) 4> START REPLICAT RGGREV
    Sending START request to MANAGER ...
    REPLICAT RGGREV starting

  12. Verify that the changes have been reversed in the target database and the data is as it was before the changes made in step 1:

    SQL> SELECT * FROM EMP;

         EMPNO ENAME      JOB        MGR HIREDATE    SAL  COMM DEPTNO
    ---------- ---------- --------- ---- --------- ----- ----- ------
          7934 MILLER     CLERK     7782 23-JAN-82  1300           10
          7902 FORD       ANALYST   7566 03-DEC-81  3000           20
          7900 JAMES      CLERK     7698 03-DEC-81   950           30
          7876 ADAMS      CLERK     7788 12-JAN-83  1100           20
          7844 TURNER     SALESMAN  7698 08-SEP-81  1500     0     30
          7839 KING       PRESIDENT      17-NOV-81  5000           10
          7788 SCOTT      ANALYST   7566 09-DEC-82  3000           20
          7782 CLARK      MANAGER   7839 09-JUN-81  2450           10
          7698 BLAKE      MANAGER   7839 01-MAY-81  2850           30
          7654 MARTIN     SALESMAN  7698 28-SEP-81  1250  1400     30
          7566 JONES      MANAGER   7839 02-APR-81  2975           20
          7521 WARD       SALESMAN  7698 22-FEB-81  1250   500     30
          7499 ALLEN      SALESMAN  7698 20-FEB-81  1600   300     30
          7369 SMITH      CLERK     7902 17-DEC-80   800           20

    14 rows selected.

    SQL> SELECT * FROM DEPT;

        DEPTNO DNAME          LOC
    ---------- -------------- -------------
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            40 OPERATIONS     BOSTON

How it works...

The reverse utility is quite useful for rolling back accidental changes that have happened in an environment. You need to determine the time slot during which the unwanted changes happened. Once you have determined the time slot, you can use GoldenGate to extract the changes that occurred, reverse them, and apply the deltas to bring the data back to the state it was in.

In this recipe we first verify the current state of the data in the source and the target database, and then make some data modifications. We also capture the timestamps before and after making these changes. We then stop the current GoldenGate processes and create a new Extract called EGGREV specifically to extract the changes for that time slot. This Extract also transfers the data to the remote trail file, so once the Extract has stopped we have a new trail file on the target system. This trail file is then fed into the reverse utility in step 8. The reverse utility reads the trail file and generates opposite records, which are written to a new trail file. In steps 9 and 10, we create a new Replicat process called RGGREV, with which the changes in this new trail file are applied to the target database. Once the Replicat process has stopped, we verify the data in the target database.

There's more...

The reverse utility reverses the operations by:

  • Converting INSERTs to DELETEs
  • Converting DELETEs to INSERTs
  • Running UPDATEs with old values
  • Reversing the sequence in which the statements were run

In order for the reverse utility to complete the preceding operations successfully, it is crucial that you run the Extract process that captures the changes to be reversed with the NOCOMPRESSDELETES and GETUPDATEBEFORES parameters.

There are a few data types for which reversing is not supported in Oracle. This is because GoldenGate does not generate/capture the before images of these data types. The unsupported data types for the reverse utility are:

  • CLOB
  • BLOB
  • NCLOB
  • LONG
  • LONG RAW
  • XMLType
  • UDT
  • Nested Tables
  • VARRAY

Creating an Integrated Capture with a downstream database for compressed tables

Logical replication between two databases is a complex task. The level of complexity depends to a large extent on the data types of the underlying data that you are replicating. When you add further features offered by an Oracle database, for example compression, the complexity increases. Until quite recently, Oracle GoldenGate did not support replicating data from a source database where the data was stored in compressed tables. With the launch of Oracle GoldenGate 11gR2, Oracle added a new capture mode called Integrated Capture, with which you can extract data from compressed tables. Integrated Capture uses a logminer dictionary, which you can either place in the source database itself or in a separate database called the downstream database. In this recipe we will see how to create an Integrated Capture Extract to read data from compressed tables, with the logminer dictionary created in a downstream database. The downstream mining will be set up in real-time mode, so the downstream database will be configured with standby redo logs.

Getting ready

For this recipe we will refer to the setup done in Setting up a simple GoldenGate replication configuration between two single node databases in recipe 2, Setting up GoldenGate Replication Process. For this recipe, the EMP and DEPT tables in the SCOTT schema are compressed. We also have an additional database called downstrm, which will hold the logminer dictionary. The downstrm database is a shell database with archiving enabled. We will perform some additional setup for GoldenGate in this database. The recipe does not cover the steps to create the shell database, but it does explain the additional steps required to set up the downstream mining and logminer dictionary in the downstream database. For this recipe, the downstream database is created on the same host as the source database. In a real-world scenario, you would create the downstream database on a different host.
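
Before starting, it is worth confirming that the downstream database really is running in archivelog mode, since the setup depends on it. A quick check from SQL*Plus (the output shown is what this recipe's downstrm database is expected to report):

    SQL> SELECT LOG_MODE FROM V$DATABASE;

    LOG_MODE
    ------------
    ARCHIVELOG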

How to do it...

The steps to set up an Integrated Capture with a downstream database are as follows:

  1. Compress the EMP and DEPT tables in the source database:

    ALTER TABLE EMP COMPRESS;
    ALTER TABLE DEPT COMPRESS;

  2. Set up a TNS entry for the downstrm database:

    DOWNSTRM =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = prim1-ol6-112)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = downstrm)))

  3. Create standby redo log files in the downstrm database:

    alter database add standby logfile group 4
      ('/u01/app/oracle/oradata/downstrm/std_redo04.log') size 50m;
    alter database add standby logfile group 5
      ('/u01/app/oracle/oradata/downstrm/std_redo05.log') size 50m;
    alter database add standby logfile group 6
      ('/u01/app/oracle/oradata/downstrm/std_redo06.log') size 50m;
    alter database add standby logfile group 7
      ('/u01/app/oracle/oradata/downstrm/std_redo07.log') size 50m;

  4. Configure log shipping from the source database to the downstream mining database. Log in to the dboratest database and run the following commands:

    ALTER SYSTEM SET log_archive_dest_2='SERVICE=DOWNSTRM ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=downstrm' SCOPE=BOTH;

    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dboratest,downstrm)';

  5. Configure the log archive config in the downstrm database:

    ALTER SYSTEM SET log_archive_config='DG_CONFIG=(downstrm,dboratest)' SCOPE=MEMORY;

  6. Configure an archive destination for standby log files in the downstrm database:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u01/app/oracle/oradata/downstrm/stdby_archive VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';

  7. Set up a GoldenGate user in the downstream database for mining:

    CREATE USER GGATE_ADMIN identified by GGATE_ADMIN;
    GRANT CREATE SESSION, ALTER SESSION to GGATE_ADMIN;
    GRANT ALTER SYSTEM TO GGATE_ADMIN;
    GRANT CONNECT, RESOURCE to GGATE_ADMIN;
    GRANT SELECT ANY DICTIONARY to GGATE_ADMIN;
    GRANT FLASHBACK ANY TABLE to GGATE_ADMIN;
    GRANT SELECT ANY TABLE TO GGATE_ADMIN;
    GRANT SELECT ON DBA_CLUSTERS TO GGATE_ADMIN;
    GRANT EXECUTE ON DBMS_FLASHBACK TO GGATE_ADMIN;
    GRANT SELECT ANY TRANSACTION To GGATE_ADMIN;
    EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE_ADMIN');
    GRANT SELECT ON SYS.V_$DATABASE TO GGATE_ADMIN;

  8. Next, we log in to the source and mining databases and register the Integrated Capture Extract:

    GGSCI (prim1-ol6-112.localdomain) 1> DBLOGIN USERID GGATE_ADMIN@DBORATEST, PASSWORD GGATE_ADMIN
    Successfully logged into database.

    GGSCI (prim1-ol6-112.localdomain) 2> MININGDBLOGIN USERID GGATE_ADMIN@DOWNSTRM, PASSWORD GGATE_ADMIN
    Successfully logged into mining database.

    GGSCI (prim1-ol6-112.localdomain) 3> REGISTER EXTRACT EGGINT DATABASE

    2013-05-02 15:59:02  INFO    OGG-02003  Extract EGGINT successfully registered with database at SCN 12037817.

  9. Create a parameter file for Integrated Capture EGGINT:

    EXTRACT EGGINT
    USERID GGATE_ADMIN@DBORATEST, PASSWORD GGATE_ADMIN
    TRANLOGOPTIONS MININGUSER GGATE_ADMIN@DOWNSTRM MININGPASSWORD
    GGATE_ADMIN
    TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
    EXTTRAIL /u01/app/ggate/112101/dirdat/it
    TABLE SCOTT.*;
    RMTHOST stdby1-ol6-112 , MGRPORT 8809
    RMTTRAIL /u01/app/ggate/dirdat/rt

  10. Add the Integrated Capture Extract process to the GoldenGate configuration:

    GGSCI (prim1-ol6-112.localdomain) 1> ADD EXTRACT EGGINT, INTEGRATED TRANLOG, BEGIN NOW
    EXTRACT added.

  11. Add a local and remote trail:

    GGSCI (prim1-ol6-112.localdomain) 82> ADD EXTTRAIL ./dirdat/it, EXTRACT EGGINT, MEGABYTES 50
    EXTTRAIL added.

    GGSCI (prim1-ol6-112.localdomain) 83> ADD RMTTRAIL /u01/app/ggate/dirdat/rt, EXTRACT EGGINT
    RMTTRAIL added.

  12. We already have a Replicat process (RGGTEST1) in the target environment; let's just reset it to start from the current time:

    GGSCI (stdby1-ol6-112.localdomain) 8> ALTER REPLICAT RGGTEST1, BEGIN NOW
    REPLICAT altered.

  13. Now start the Extract process:

    GGSCI (prim1-ol6-112.localdomain) 84> START EXTRACT EGGINT
    Sending START request to MANAGER ...
    EXTRACT EGGINT starting

  14. Start the Replicat process:

    GGSCI (stdby1-ol6-112.localdomain) 9> START REPLICAT RGGTEST1
    Sending START request to MANAGER ...
    REPLICAT RGGTEST1 starting

  15. Now perform some updates on the source database:

    SQL> UPDATE EMP SET COMM=5000 WHERE EMPNO=7876;
    1 row updated.
    SQL> UPDATE DEPT SET DNAME='MARKETING' WHERE DEPTNO=30;
    1 row updated.
    SQL> DELETE EMP WHERE EMPNO=7782;
    1 row deleted.
    SQL> COMMIT;
    Commit complete.

  16. Now verify the changes in the target database:

    SQL> SELECT COMM FROM EMP WHERE EMPNO=7876;
    COMM
    ----------
    5000
    SQL> SELECT DNAME FROM DEPT WHERE DEPTNO=30;
    DNAME
    --------------
    MARKETING
    SQL> SELECT * FROM EMP WHERE EMPNO=7782;
    no rows selected

How it works...

In this recipe we go through the procedure to set up replication for compressed tables. For this, we first compress two tables in the source database. There is an additional database in the source environment called DOWNSTRM, which is used to host the logmining server. The source database DBORATEST sends its redo data to the mining database. In this example, the mining database is set up to receive the redo data in real time; for this, we create standby redo logs in the mining database.

The Extract itself is created in Integrated Capture mode, with the logmining server set up in the downstream database. In step 4, we set up the log shipping from the source database. In steps 5 and 6, we configure the local archiving for the mining database.

GoldenGate requires a mining user in the mining database to be able to read the logminer dictionary. This user is set up in step 7. In the next steps, we set up an Integrated Capture Extract and start it. We then perform some data changes in the EMP and DEPT tables and verify that these are replicated across to the target database.
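
If you want to double-check the registration from step 8 and the real-time setup, both are visible in the mining database; a sketch using standard dictionary views (output will vary by environment):

    -- In the downstrm database: the capture created by REGISTER EXTRACT
    SQL> SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

    -- And the standby redo logs that receive the source redo in real time
    SQL> SELECT GROUP#, STATUS FROM V$STANDBY_LOG;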

Summary

In this article we covered some advanced maintenance tasks, such as patching and upgrading the GoldenGate binaries, that a GoldenGate administrator will need to perform at some point. We also covered how to propagate table structure changes in GoldenGate environments, and discussed some utilities shipped with the GoldenGate binaries with which you can view the contents of the extracted records and undo applied changes.
