Oracle: When to use Log Miner

Log Miner has both a GUI interface in OEM and a database package, DBMS_LOGMNR. When the DBA uses this utility, its primary focus is to mine data from the online and archived redo logs. Internally, Oracle uses the Log Miner technology for several other features, such as Flashback Transaction Backout, Streams, and Logical Standby Databases. This section is not about how to run Log Miner; instead, it looks at the task of identifying the information to restore.

The Log Miner utility comes into play when you need to retrieve an older version of selected pieces of data without completely recovering the entire database. A complete recovery is usually a drastic measure that means downtime for all users and the possibility of lost transactions. Most often Log Miner is used for recovery purposes when the data consists of just a few tables or a single code change.

Make sure supplemental logging is turned on (see the Add Supplemental Logging section). Log Miner comes into play when you are trying to recover a small amount of recently changed data and discover that one or more of the following conditions apply:

  • Flashback is not enabled
  • Flashback logs that are needed are no longer available
  • Data that is needed is not available in the online redo logs
  • Data that is needed has been overwritten in the undo segments

Go to the last place available: the archived redo logs. This requires the database to be in archivelog mode and all needed archive logs to still be available or recoverable.
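Both prerequisites can be checked with simple dictionary queries. The following is a minimal sketch, assuming a SQL*Plus session connected as a privileged user:

```sql
-- Confirm the database is in ARCHIVELOG mode
SELECT log_mode FROM v$database;

-- List the archived logs still known to the control file,
-- including whether each copy has been deleted from disk
SELECT sequence#, name, deleted, completion_time
  FROM v$archived_log
 ORDER BY sequence#;
```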

Identifying the data needed to restore

One of the hardest parts of restoring data is determining what to restore; the basic question is when did the bad data become part of the collective? Think of the Borg from Star Trek! When you need to execute Log Miner to retrieve data from a production database, you will need to act fast. The older the transactions are, the longer it will take Log Miner to traverse and recover them. The newest (committed) transactions are processed first, proceeding backwards. The first question to ask is: when do you think the bad event happened? Searching for data can be done in several different ways:

  • SCN, timestamp, or log sequence number
  • Pseudo column ORA_ROWSCN

SCN, timestamp, or log sequence number

If you are lucky, the application also writes a timestamp of when the data was last changed. If that is the case, then you can determine the archive log to mine by using the following queries. It is important to set the session NLS_DATE_FORMAT so that the time element is displayed along with the date; otherwise, you will just get the default date format of DD-MON-RR. The default date format comes from the database startup parameters, specifically the NLS_TERRITORY setting. Find the time when a log was archived and match that to the archive log needed.
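As a sketch, the session setting and the matching query look like the following; the timestamp of the suspected bad event is a hypothetical value:

```sql
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

-- Find the archived log whose time range covers the suspected bad event
-- (the date shown is a placeholder)
SELECT sequence#, name, first_time, next_time
  FROM v$archived_log
 WHERE first_time <= TO_DATE('21-JUN-2010 14:30:00','DD-MON-YYYY HH24:MI:SS')
   AND next_time  >  TO_DATE('21-JUN-2010 14:30:00','DD-MON-YYYY HH24:MI:SS');
```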

Pseudo column ORA_ROWSCN

While this method seems very elegant, it does not work perfectly, meaning it won't always return the correct answer; for that reason it is generally not recommended for Flashback Transaction Queries. It is, however, definitely worth trying in order to narrow the window that you will have to search. It uses the SCN information that was stored for the associated transaction in the Interested Transaction List (ITL).

The result is only approximate because delayed block cleanout is involved. The pseudo column ORA_ROWSCN contains the approximate SCN, and therefore the approximate time, at which each row of the table was last updated. In the following example the table has three rows, with the last row being the one that was most recently updated. This gives you the time window to search the archive logs with Log Miner.
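A minimal sketch of such a query follows; the TTRACKER.TRACKS table is the example table used later in this article, and SCN_TO_TIMESTAMP converts the SCN to an approximate time:

```sql
-- ORA_ROWSCN is tracked per block unless the table was created
-- with ROWDEPENDENCIES, so treat the result as approximate
SELECT ORA_ROWSCN,
       SCN_TO_TIMESTAMP(ORA_ROWSCN) AS approx_change_time,
       t.*
  FROM ttracker.tracks t
 ORDER BY ORA_ROWSCN;
```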

Log Miner is the basic technology behind several of the database Maximum Availability Architecture capabilities—Logical Standby, Streams, and the following Flashback Transaction Backout exercise.

Flashback Transaction Query and Backout

Flashback technology was first introduced in Oracle9i Database. This feature allows you to view data as it existed at different points in time (versions), and thus provides the capability to recover previous versions of data. In this article, we are dealing with Flashback Transaction Query (FTQ) and Flashback Transaction Backout (FTB), because they both deal with transaction IDs and integrate with the Log Miner utility. See the MOS document: "What Do All 10g Flashback Features Rely on and what are their Limitations?" (Doc ID 435998.1).

Flashback Transaction Query uses the transaction ID (Xid) that is stored with each row version in a Flashback Versions Query to display every transaction that changed the row. Currently, the only Flashback technology that can be used when the object(s) in question have been changed by DDL is Flashback Data Archive. There are other restrictions on using FTB with certain data types (VARRAYs, BFILEs), which match the data type restrictions for Log Miner. This basically means that if the data types aren't supported, then Log Miner can't find the undo and redo log entries either.

When would you use FTQ or FTB instead of the previously described methods? The answer is when the data involves several tables with multiple constraints or extensive amounts of information. As with Log Miner, the database can remain up and running, with people working online in other schemas, while you accomplish this restore task.

An example of using FTB or FTQ would be to reverse a payroll batch job that was run with the wrong parameters. Most often a batch job is compiled code (such as C or COBOL) run against the database, with parameters built in by the application vendor. A wrong parameter could be the wrong payroll period, the wrong set of employees, or the wrong tax calculations or payroll deductions.

Enabling flashback logs

First of all, flashback needs to be enabled in the database. Oracle Flashback is the database technology intended for point-in-time recovery (PITR) by saving transactions in flashback logs. A flashback log is a temporary Oracle file that is required to be stored in the FRA, as it cannot be backed up to any other media. Extensive information on all of the ramifications of enabling flashback is found in the Oracle Database Backup and Recovery User's Guide.

See the following example of how to enable flashback; the FRA size and destination path shown here are examples only:

SYS@NEWDB>ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=2G SCOPE=BOTH;
--this is sized for a small test database
SYS@NEWDB>ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u01/app/oracle/flash_recovery_area/NEWDB' SCOPE=BOTH;
SYS@NEWDB>SHUTDOWN IMMEDIATE;
SYS@NEWDB>STARTUP MOUNT;
SYS@NEWDB>ALTER DATABASE FLASHBACK ON;
SYS@NEWDB>ALTER DATABASE OPEN;

The following query would then verify that FLASHBACK had been turned on:

SYS@NEWDB>SELECT FLASHBACK_ON FROM V$DATABASE;

Flashback Table

The Flashback Table command will restore a table to its state before a designated event occurred (this is often referred to as logical corruption), using restore points, SCN, or timestamp. There is another Oracle pseudo column associated with a table, called the rowid, which is involved with the Flashback Table feature. This rowid consists of four components:

  • The object number
  • The data file the row exists in
  • The data block of the data file
  • The position of the row in the data block
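These components can be pulled apart with the supplied DBMS_ROWID package. A small sketch follows; the table name is an example:

```sql
-- Decode each row's rowid into its four components
SELECT dbms_rowid.rowid_object(rowid)       AS object#,
       dbms_rowid.rowid_relative_fno(rowid) AS file#,
       dbms_rowid.rowid_block_number(rowid) AS block#,
       dbms_rowid.rowid_row_number(rowid)   AS row#
  FROM ttracker.tracks;
```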

The underlying mechanism moves the table back in time by changing the rowids. To allow a table to change rowids, the following statement must be run for each table involved in the flashback session:

SYS@NEWDB>ALTER TABLE TTRACKER.TRACKS ENABLE ROW MOVEMENT;

Since flashback is enabled, there will be an entry in FLASHBACK_TRANSACTION_QUERY for the above delete statement. Also notice that it is identified by rowid. A corresponding query and full output have been moved to the code directory for this article (see flashback_tq.txt). The screenshot below is a small snippet of all the rows returned:

Choosing the correct SCN in the above query enables you to flash back the data to its state before the delete statement.
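As a sketch, the flashback itself is a single statement; the table name and SCN below are placeholders for the values chosen from FLASHBACK_TRANSACTION_QUERY:

```sql
-- ROW MOVEMENT must already be enabled on the table;
-- 1082901 is a placeholder SCN
FLASHBACK TABLE ttracker.tracks TO SCN 1082901;
```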



After flashing back the table, the view DBA_FLASHBACK_TXN_STATE is populated. This view also contains any other completed flashback activity in this database, reflecting the consistent state of the database when queried. This means other flashback events may back out dependent or compensating Xid(s), so that they no longer appear in this view.

As an alternative, you could refer back to the earlier sections of this article about Log Miner (Identifying the data needed to restore and SCN, timestamp, or log sequence number). This will identify the log needed for extracting the data using Log Miner. Finish the Log Miner task by identifying the UNDO statements that can be manually run in order to restore the deleted data.

For more information on decoding the rowid into a readable format, visit Oracle/Decoding-the-Oracle-ROWID-and-some-Base64-forfun.html.

Flashback Transaction Query with pseudo columns

With Flashback Transaction Query, you can see all transactions that changed a row within a certain time period. It is based on the Xid, providing not only the change, as seen in a Flashback Versions Query, but also who made the change.

The view FLASHBACK_TRANSACTION_QUERY actually provides SQL statements that you can use to undo the changes made by a particular transaction. In the example above, it contains the original delete statement that was actually executed plus the insert statements to undo the logical corruption.
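A minimal sketch of pulling those undo statements follows; the Xid value is a placeholder:

```sql
-- UNDO_SQL holds the compensating statement for each changed row
SELECT operation, undo_sql
  FROM flashback_transaction_query
 WHERE xid = HEXTORAW('0600030021000000');  -- placeholder Xid
```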

Flashback Transaction Backout

Looking at the MOS document Flashback transactions using the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure (Doc ID 737332.1), Log Miner is what actually extracts the information, whereupon the package DBMS_FLASHBACK then rolls back the transactions. In 11gR2, the Flashback Transaction Backout process can track foreign key dependencies, unlike earlier versions of Oracle, but this requires resource-intensive foreign key supplemental logging.
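As a sketch of the backout call itself, DBMS_FLASHBACK.TRANSACTION_BACKOUT takes an array of transaction IDs plus an options constant such as NOCASCADE or CASCADE; the Xid value below is a placeholder:

```sql
DECLARE
  -- placeholder Xid; in practice it comes from FLASHBACK_TRANSACTION_QUERY
  xids SYS.XID_ARRAY := SYS.XID_ARRAY(HEXTORAW('0600030021000000'));
BEGIN
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => xids,
    options => DBMS_FLASHBACK.CASCADE);  -- also back out dependent transactions
END;
/
```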

Enabling flashback (using the command ALTER DATABASE FLASHBACK ON) records all of the critical pieces of information for recovery purposes into a flashback log for any particular row in the database. That information includes the physical rowid, Xid of a committed transaction, SCN, and timestamp. Once flashback is enabled for database use, the appropriate technology to extract the data is needed (Log Miner, Transaction Query, Transaction Backout) to resolve logical corruption issues.

There are exceptions and certain conditions that apply to using flashback and its associated technologies, as well as the cost of creating additional flashback logs that use resources. As a DBA you will need to decide whether the tradeoff in resource consumption is worth the benefit it provides. Oracle has reports from customers that the overhead may range from a low 2 percent increase in end user response time (OLTP type) to over 30 percent (for a direct load Data Warehouse database). This overhead is partly due to the increased amount of redo being generated (an increased amount of information is captured), and partly due to the additional CPU and I/O needed to write the flashback logs to disk.

Using strings to remove binary components

An even easier way than using dump block is to use the Unix strings command to find data in a binary file. The strings command can also strip the binary components from an spfile, which is helpful in situations where you don't have access to a working instance and want to see what is actually in the spfile.

In 11g you can create a pfile from an existing spfile even without a database instance. In the following example, a correctly formatted pfile would be created from $ORACLE_HOME/dbs/spfileNEWDB.ora (Unix example):

SYS@NEWDB>create pfile from spfile;

Using strings comes in handy for previous versions of Oracle databases, or if you don't have an installed $ORACLE_HOME to work with, just a standalone spfile. When using the strings command on an spfile, the 11g version will split long lines, so you will need to rejoin them with a text editor. The stripped spfile can be edited as needed and then copied to the appropriate directory (see OFA). It can also be renamed to be used as an init<ORACLE_SID>.ora file to start up a database. It is recommended to use the spfile for normal database startup. Using the pfile version is for testing or for circumstances that require its temporary use.

Strings can also pull out data from an export file, a Data Pump dump file, or a redo or archive log. Several examples of using the strings command follow.
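The following self-contained sketch fakes a tiny spfile-like binary file and strips it with strings; the file name and parameter values are made up for the demonstration (on a real system you would point strings at something like $ORACLE_HOME/dbs/spfileNEWDB.ora or a dump file):

```shell
# Create a stand-in binary file: parameter strings separated by null bytes
printf 'NEWDB.__db_cache_size=50331648\0\0*.compatible=11.2.0.0.0\0' > /tmp/demo_spfile.ora

# strings discards the binary parts and prints only the readable text
strings /tmp/demo_spfile.ora
```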

As an example, I exported the table TTRACKER.TRACKS, which contains a single row, to show the data inside, and then ran the strings command against it. Here is an excerpt from the export file after running the strings command:

Occasionally, you have to extract the data from a database for various reasons. Both the export and Data Pump utilities allow you to do that effectively. They can also be used to recreate missing objects or even the entire database. Data Pump is much more efficient for large amounts of data and provides more functionality than the older export utility, which will eventually become obsolete. Both import and export still exist in 11g for migrating databases from older versions of Oracle, and export was used for this demonstration for readability purposes.

Using strings on the actual data file reveals the data within, showing DAYTONA as the last committed entry.


In this article, we saw how Log Miner can be used to retrieve an older version of selected pieces of data without completely recovering the entire database.


You've been reading an excerpt of:

Oracle Database 11g - Underground Advice for Database Administrators
