Oracle: RDBMS Log Miner Utility, FRA, and AUM

by April C. Sims | May 2010 | Oracle

In this article by April Sims, author of the book Oracle Database 11g – Underground Advice for Database Administrators, we will learn about Oracle's RDBMS Log Miner Utility, Flash(back) Recovery Area (FRA), and Automatic Undo Management (AUM).


Oracle's RDBMS Log Miner Utility

Log Miner can help when questions such as the following come up: What was changed? Who changed it? And in what order?

When unauthorized people change data, they may assume that no record of the change exists if it isn't viewable at the application level. In fact, all changes are logged; it just takes time and effort to find that information.

The tool most often used is the PL/SQL package DBMS_LOGMNR, but a GUI called Log Miner Viewer has also been added to Oracle Enterprise Manager (OEM). The Oracle Database Utilities Guide has quite a few examples of using both the browser-based and PL/SQL versions of this utility. We will concentrate on when and how to find the data to restore.

You should already have a good understanding of the database structures that include the undo and redo logs: undo is generated when an end user starts changing data, and redo is generated as those changes are made, with a commit guaranteeing they are permanent. Each is written to its own set of files. While undo and redo are both online (the database is open), archived redo is offline and written to disk.

Archived redo logs are no longer needed for the current transactions inside the database, because those transactions have been committed and written to disk. Archived logs are still important for restoring previously committed transactions in a recovery situation. Making an archive log offline also allows backup procedures (RMAN, third-party backup software, or OS utilities) to manipulate the files at the operating system level.

Recovery is a database process that will:

  • Roll forward the changes recorded in the redo logs, including those from transactions that were later rolled back.
  • Roll back any uncommitted changes found in the UNDO segments.

Specific Oracle processes handle this logging: LGWR writes the redo to the online logs, and an archiver process (ARCn) then copies filled online logs to the archived logs. The only way to ensure that every transaction in a database has been logged for recovery purposes is to operate in ARCHIVELOG mode. There are special situations that call for running in NOARCHIVELOG mode, in which case it is assumed that any transactions lost between backups can be recreated. Archived redo logs can be used to restore transactions that occurred between regular backups. From the last exercise, you also have a good understanding of the read consistency provided by undo segments, which also contribute to redo entries.

The DBMS_LOGMNR package is used to find data in both the undo and redo database structures. It is also useful for analyzing patterns over time for specific tuning needs, schema changes, and forecasting the time for hardware upgrades. With the DBMS_LOGMNR package, you can extract data that populates the V$LOGMNR_CONTENTS view with the actual transactions that have been executed. These entries contain both the REDO and UNDO statements.
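As a preview of where we are headed, the following is a minimal sketch of a Log Miner session run on the source database, using the online catalog as the dictionary. The archived log name is a placeholder; substitute one reported by your own archive destination. TTRACKER is the sample schema used later in this article.

SYS@NEWDB> BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/u01/arch/arch_1_100.arc', -- placeholder file name
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
SYS@NEWDB> SELECT SCN, USERNAME, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'TTRACKER';
SYS@NEWDB> EXECUTE DBMS_LOGMNR.END_LOGMNR;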

You can operate Log Miner on the original database that created the log entries or almost any other Oracle database of a higher version that is running the same character set, database block size, and operating system. This is why it is critical that you protect the online redo, undo, and archive logs—they can be mined for information. Most often a DBA will actually use a different database to do the mining so that it doesn't consume additional resources in a production database. If you use a different database than where the original transactions were created, you will have to rebuild the Log Miner data dictionary (online, offline, or a standalone flat file). The dictionary translates the internal object identifiers and types to table and column names that can be queried, but those object IDs will vary between databases, making the rebuild a requirement.

The Log Miner example task requires several preparatory steps to be completed first, with additional discussion along the way covering archiving, supplemental logging, and Flashback technologies, so you won't get to an actual Log Miner example for quite a few pages. Since Log Miner has extensive documentation detailing all of the steps for various scenarios, only a lesser-known method of using it is included here.

Turn on archivelog mode

Before we delve into the mining exercise, we will cover more information about SCNs as they relate to checkpoints and log switches, while turning on archiving for the database. Transactions in a database produce redo entries in the redo log buffer (in memory), which is continually written to the online redo logs. LGWR writes according to several triggering events: a commit, every three seconds, the buffer becoming one-third full, or a certain amount of accumulated redo. Which event fires first depends on the type and frequency of transactions.

A checkpoint synchronizes modified data blocks in the database buffer cache with the actual data files, keeping the data consistent. In the case of a database crash, it identifies the point up to which all outstanding changes (transactions) have been written to disk. A checkpoint isn't synchronized with the SCN of a transaction commit, and it does not behave like a log switch.

The files you will need as you work through this exercise are included in the code as follows:

sys_archive.sql
sysarchive.lst

Open up the file sysarchive.lst. One of the most important views in the database is v$database (anything labeled v$ is a dynamic performance view).

SYS@NEWDB> SELECT LOG_MODE, NAME, CURRENT_SCN, ARCHIVE_CHANGE#, OPEN_MODE FROM V$DATABASE;

Find the section for the statement against v$log_history farther down in sysarchive.lst. What are all these entries if we aren't in archivelog mode? They are the log switches between the online redo logs, which are overwritten once that section of the redo log is no longer needed by a transaction to maintain consistency. This is where a checkpoint comes into play: it ensures that data is written to disk, independent of the ARCn log switch process.

Once we switch to archivelog mode, the online redo will still be overwritten, but the ARCn process will first write a copy of that log to an archive destination. Below you will see that each archived log covers a range of database SCNs: a log contains the database changes from its first SCN up to the first SCN of the next log.
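A query along these lines (the exact statement ships in sysarchive.lst) shows those ranges:

SYS@NEWDB> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
FROM V$LOG_HISTORY ORDER BY SEQUENCE#;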


Now we try to correlate archive_change# and checkpoint_change#. Also notice that the checkpoint_change# for each data file is consistent during normal database operations. I am showing only the partial output from the following command for the single data file created:
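A sketch of the queries involved (the listing in the code download may differ slightly):

SYS@NEWDB> SELECT NAME, CHECKPOINT_CHANGE# FROM V$DATAFILE;
SYS@NEWDB> SELECT CHECKPOINT_CHANGE#, ARCHIVE_CHANGE# FROM V$DATABASE;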


At this point, we have started the database in mount mode (the control file needs to be accessed, but the database is not opened for full use), turned on the archiving process, verified that archiving has started, and verified the location of the archived logs. Making a log switch from one online redo log to another doesn't sync the checkpoint_change# with the value held in the control file (controlfile_change#, also called the thread checkpoint).
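For reference, a sketch of the commands behind those steps; your archive destination and file names will differ:

SYS@NEWDB> SHUTDOWN IMMEDIATE;
SYS@NEWDB> STARTUP MOUNT;
SYS@NEWDB> ALTER DATABASE ARCHIVELOG;
SYS@NEWDB> ALTER DATABASE OPEN;
SYS@NEWDB> ARCHIVE LOG LIST
SYS@NEWDB> ALTER SYSTEM SWITCH LOGFILE;
SYS@NEWDB> SELECT CHECKPOINT_CHANGE#, CONTROLFILE_CHANGE# FROM V$DATABASE;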


Only when we do a manual checkpoint (instead of a database-activated checkpoint) do those numbers coincide. They can be verified with the dynamic view v$datafile, as shown below:
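A minimal sketch of that verification:

SYS@NEWDB> ALTER SYSTEM CHECKPOINT;
SYS@NEWDB> SELECT NAME, CHECKPOINT_CHANGE# FROM V$DATAFILE;
SYS@NEWDB> SELECT CHECKPOINT_CHANGE#, CONTROLFILE_CHANGE# FROM V$DATABASE;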


Additional information for troubleshooting archiving issues comes from another dynamic view, V$INSTANCE.
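A query such as the following retrieves the relevant columns:

SYS@NEWDB> SELECT INSTANCE_NAME, STATUS, ARCHIVER, LOG_SWITCH_WAIT FROM V$INSTANCE;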


The ARCHIVER column can also indicate when the ARCn process failed to switch logs, in which case it retries automatically after another five minutes. The LOG_SWITCH_WAIT column indicates the wait event the log-switching process is waiting on: ARCHIVE LOG, CLEAR LOG, or CHECKPOINT.

All of the activity associated with log switches and checkpoints will influence database performance. We shall continue now with the further required setup steps to complete all of the tasks.


Add supplemental logging

All undo and redo buffer entries are logged, and the redo log entries are archived to disk. So when would supplemental logging be required? Log Miner needs additional details that are not ordinarily included in undo or redo entries to make the transactions usable (for example, to identify the rows affected in the redo logs). These details are added to the redo stream by the following commands:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; -- minimal
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL, PRIMARY KEY, UNIQUE,
FOREIGN KEY) COLUMNS; -- identification key logging

Identification key logging

Both the before and after images of the specified types of columns are captured in the redo log files. Foreign Key (FK) logging, and of course ALL, can be very resource intensive, depending on the number of FK constraints involved in the transaction(s).
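You can confirm which levels of supplemental logging are currently enabled by checking the flags exposed in v$database:

SYS@NEWDB> SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI, SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL
FROM V$DATABASE;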


Table-level supplemental logging

Unconditional or conditional logging is tracked through log groups: unconditional tracking captures the before and after column images no matter what, while conditional tracking captures the before images only when the columns are updated, but always captures the after images. Table-level logging is typically involved in Streams replication scenarios.
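Log groups themselves are created with the ALTER TABLE ... ADD SUPPLEMENTAL LOG GROUP clause; the ALWAYS keyword makes a group unconditional. A sketch using the TRACKS table from the exercise later in this article (the group names are illustrative):

SYS@NEWDB> ALTER TABLE TTRACKER.TRACKS ADD SUPPLEMENTAL LOG GROUP
TRACKS_ALWAYS (TRACKNAME) ALWAYS; -- unconditional
SYS@NEWDB> ALTER TABLE TTRACKER.TRACKS ADD SUPPLEMENTAL LOG GROUP
TRACKS_COND (TRACKNAME); -- conditional

The table-level identification key logging variant mirrors the database-level command: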

ALTER TABLE TABLENAME ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY,
UNIQUE, FOREIGN KEY, ALL) COLUMNS;

The following query can be used to check the table level log groups defined in the database:

SYS@NEWDB> select * from DBA_LOG_GROUPS;

Since logging can be resource intensive, DBAs will need to turn it back off once certain tasks are finished. Turning off logging is done by stepping backwards: drop the most restrictive logging first (the key-related clauses), then issue the generalized drop supplemental command, as shown below:

SYS@NEWDB> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY,
UNIQUE, FOREIGN KEY, ALL) COLUMNS;
SYS@NEWDB> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

Flash(back) Recovery Area (FRA)

You will see the FRA referred to as either the Flash Recovery Area (the newer form) or the older Flashback Recovery Area. They are the same thing; Oracle shifted the emphasis so that the FRA is used for all files associated with backup and recovery, not just flashback logs.

If a mandatory archivelog destination becomes full at the disk level, the database can hang, as there is no room to write the archived log. The FRA, used as an archivelog or flashback log destination, can hang the database in the same way if you fail to accurately predict the amount of space needed. Oracle recommends storing the following in this area, because it is primarily meant for backup and recovery files:

  • Current control file
  • Online redo logs
  • Archived redo logs
  • Control file autobackups
  • Data file and control file copies
  • Backup pieces
  • Foreign archived redo logs (used by a logical standby database in continuous Log Miner mode)

Space issues with the types of files in the list above are most easily regulated by a competent RMAN backup policy. If you decide to implement the Flashback Database feature, which requires flashback logs, then running out of space due to a miscalculation or even a runaway process is a definite possibility. The number of logs produced is regulated by the initialization parameter DB_FLASHBACK_RETENTION_TARGET. It is recommended that you start with an FRA at least three times the size of the database.
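Enabling Flashback Database with an explicit retention target might look like the following sketch; the target value is specified in minutes, and in 11g R1 flashback must be turned on while the database is mounted:

SYS@NEWDB> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440 SCOPE=BOTH; -- 24 hours
SYS@NEWDB> SHUTDOWN IMMEDIATE;
SYS@NEWDB> STARTUP MOUNT;
SYS@NEWDB> ALTER DATABASE FLASHBACK ON;
SYS@NEWDB> ALTER DATABASE OPEN;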

The MOS document "What is a Flash Recovery Area and how to configure it?" (Doc ID 305648.1) has a formula for estimating the size specific to your environment. You can also actively monitor the space needed with the following query:

SYS@NEWDB> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

Normal database processes ensure that the flashback recovery area is kept within certain space limits by deleting the logs under space pressure in two different situations:

  • When archive logs are deleted, the associated flashback logs are removed
  • When the startup initialization parameter DB_RECOVERY_FILE_DEST_SIZE is shrunk, the logs will be removed as needed

Several different steps can be used to resolve database hanging when it runs out of physical space in the FRA:

  • Temporarily increase the size (DB_RECOVERY_FILE_DEST_SIZE); see the sketch after this list.
  • Relocate it to a different directory (DB_RECOVERY_FILE_DEST) with more room.
  • Turn it off completely; turning it off requires a database restart, disconnecting all users.
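The first two options are dynamic and take effect immediately. A sketch, assuming a new 20 GB limit and a /u02/fra directory with more room (both values are illustrative):

SYS@NEWDB> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=20G SCOPE=BOTH;
SYS@NEWDB> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u02/fra' SCOPE=BOTH;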

We will return to investigate this feature. In the next section, flashback will be turned off in order to demonstrate finding data in the undo segments. This illustrates that the data comes from the undo segments and not from any enabled Flashback technology; several of the features have the word Flashback in their names but use undo data to perform the actual work.

Automatic Undo Management (AUM)

If you have installed an 11g database for this exercise, then AUM is the default method of undo management. With earlier versions of Oracle, Manual Undo Management was the only method available, and DBAs needed to be good at the balancing act of managing rollback segments for different tasks in the database: batch jobs, data loads, smaller online transactions, and so on.

Identifying data in undo segments by flashing back to timestamp

There is a small trick to identifying data that may still exist in the undo segments. It won't work if the undo has already been overwritten, so it is best used when you are alerted to the mistake quickly. I have eliminated the possibility of flashback technology being involved by disabling it with the following commands (flashback is not required for this exercise):

SYS@NEWDB> SHUTDOWN IMMEDIATE;
SYS@NEWDB> STARTUP MOUNT;
SYS@NEWDB> ALTER DATABASE FLASHBACK OFF;
SYS@NEWDB> ALTER DATABASE OPEN;
SYS@NEWDB> show parameter recovery

  1. Connect as the TTRACKER user, verify the existing data (you should get two rows: DAYTONA and Memphis), and update the tracks table, but don't commit.

    TTRACKER@NEWDB> select * from ttracker.tracks;

    TTRACKER@NEWDB> update tracks set trackname='Dallas' where
    trackname ='Memphis';

  2. Any other session with the appropriate privileges can query the past, but only TTRACKER can see the changed (uncommitted) data. Six minutes is used because internally Oracle stores the SCN-to-time mapping every five minutes, so rounding may occur if you use an interval of less than five minutes. The following queries select data as of the current time and as of six minutes ago; in this case study, there should be a difference in the data seen.

    TTRACKER@NEWDB> select * from ttracker.tracks;
    TTRACKER@NEWDB> select * from ttracker.tracks as of timestamp
    systimestamp - interval '6' minute ; --6 minutes ago

  3. Now using a different window, connect as sysdba and select using timestamp query syntax. The output demonstrates read consistency again because SYS can only see the DAYTONA and Memphis entries.

    SYS@NEWDB> SELECT * FROM TTRACKER.TRACKS AS OF TIMESTAMP
    SYSTIMESTAMP - INTERVAL '1' MINUTE; --1 minute ago

  4. Switch back to the TTRACKER session and commit the transaction.

    TTRACKER@NEWDB> COMMIT;

  5. Now switch back to the sysdba session. Query the current data found in the tracks table (DAYTONA and Dallas should be returned) as well as the previous data. Six minutes ago the data would be DAYTONA and Memphis.

    SYS@NEWDB> select * from ttracker.tracks;

    SYS@NEWDB> SELECT * FROM TTRACKER.TRACKS AS OF TIMESTAMP
    SYSTIMESTAMP - INTERVAL '6' MINUTE; --6 minutes ago

This method can also restore an object such as a package or procedure by querying the view dba_source to retrieve the version from an earlier time. Spool the output, edit as needed, and run it as a SQL statement to restore the object. See the following example:

SYS@NEWDB> SELECT TEXT FROM DBA_SOURCE AS OF TIMESTAMP SYSTIMESTAMP
- INTERVAL '6' MINUTE WHERE NAME = 'PACKAGENAME' ORDER BY LINE;

All queries using the SELECT ... AS OF syntax are considered flashback queries, even though they are accomplished with the undo segments. Oracle has marketed everything that steps back in time as Flashback, but there are really several different major technologies involved, depending on the task. Here is a list of the Flashback features and the underlying technology each one uses:

  • Flashback Drop—Recycle Bin
  • Flashback Database—Flashback Logs and RMAN
  • Flashback Transaction Query—Undo Segments
  • Flashback Queries—Undo Segments
  • Flashback Table—Rowid Changes and Undo Segments
  • Flashback Transaction Backout—Log Miner and Flashback Logs

Flashback Query does not work through DDL operations that modify columns, or drop or truncate tables, and it will not work through a database restart.

Summary

In this article, we saw how the Log Miner utility can be used to access information from the undo and redo database structures. We also saw how the FRA can be used for all files associated with backup and recovery, as well as the AUM method of undo management.



About the Author

April C. Sims

April Sims is currently the Database Administrator at Southern Utah University and an Oracle Certified Professional: 8i, 9i, and 10g with a master's degree in Business Administration from the University of Texas at Dallas. Involved as a volunteer with the Independent Oracle Users Group for over 7 years, April is currently a Contributing Editor for the IOUG "SELECT" Journal. April is an annual presenter at Oracle OpenWorld, IOUG COLLABORATE, and numerous regional Oracle-related conferences.
