What is New in 12c

by Aman Sharma and Francisco Munoz Alvarez | September 2013

This article by Francisco Munoz Alvarez and Aman Sharma, the authors of Oracle Database 12c Backup and Recovery Survival Guide, covers topics such as pluggable databases, RMAN's new features and enhancements, and Data Pump's new features and enhancements.

In this article, we will cover the following topics:

  • Pluggable database
  • RMAN's new features and enhancements
  • Data Pump's new features and enhancements


Oracle Database 12c has introduced many new features and enhancements for backup and recovery. This article will introduce you to some of them, and you will have the opportunity to learn in more detail how they can be used in real-life situations. But I cannot start talking about Oracle 12c without first talking about a revolutionary new concept introduced with this version of the database product: the Multitenant Container Database (CDB), which can contain one or more pluggable databases (PDBs).

When a container database contains only one PDB, it is called a Single Tenant Container Database. You can also create your database on Oracle 12c using the same format as before 12c; it is then called a non-CDB database and does not allow the use of PDBs.

Pluggable database

We are now able to have multiple databases sharing a single instance and set of Oracle binaries. Each of the databases is configurable to a degree and allows some parameters to be set specifically for itself (even though they all share the same initialization parameter file), and, what is better, each database is completely isolated from the others, neither knowing that the others exist.

A CDB is a single physical database that contains a root container with the main Oracle data dictionary and at least one PDB with specific application data. A PDB is a portable container with its own data dictionary, including metadata and internal links to the system-supplied objects in the root container, and this PDB will appear to an Oracle Net client as a traditional Oracle database. The CDB also contains a PDB called SEED, which is used as a template when an empty PDB needs to be created. The following figure shows an example of a CDB with five PDBs:

When creating a database on Oracle 12c, you can now create a CDB with one or more PDBs, and, even better, you can easily clone a PDB, or unplug it and plug it into a different server with a preinstalled CDB if your current server is running out of resources such as CPU or memory. Many years ago, the introduction of external storage gave us the possibility to store data on external devices and the flexibility to plug and unplug them into any system, independent of the OS. For example, you can connect an external device to a system running Windows XP and read your data without any problems; later you can unplug it and connect it to a laptop running Windows 7, and you will still be able to read your data. Now, with the introduction of Oracle pluggable databases, we can do something similar with Oracle when upgrading a PDB, making the process simple and easy. All you need to do to upgrade a PDB, for example, is the following (a SQL sketch follows the steps):

  1. Unplug your PDB (step 1 in the following figure) from its CDB running 12.1.0.1.
  2. Copy the PDB to the destination location, where a CDB running a later version such as 12.2.0.1 is available (step 2 in the following figure).
  3. Plug the PDB into that CDB (step 3 in the following figure), and your PDB is now upgraded to 12.2.0.1.
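
A minimal SQL sketch of the unplug/plug mechanics, assuming a PDB named pdb1 and an illustrative manifest path; a real upgrade also requires running the upgrade scripts against the plugged-in PDB in the new CDB:

-- On the source CDB (12.1.0.1):
SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';

-- Copy the datafiles and the XML manifest to the destination server,
-- then on the target CDB (NOCOPY assumes the files are already in place):
SQL> CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' NOCOPY;
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;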

This new concept is a great solution for database consolidation and is very useful for multitenant SaaS (Software as a Service) providers, improving resource utilization, manageability, integration, and service management.

Some key points about pluggable databases are:

  • You can have many PDBs inside a single container if you want (a CDB can contain a maximum of 253 PDBs)
  • A PDB is fully backward compatible with an ordinary pre-12.1 database from an application's perspective, meaning that an application built, for example, to run on Oracle 11.1 needs no changes to run on Oracle 12c
  • A system administrator can connect to a CDB as a whole and see a single system image
  • If you are not ready to make use of this new concept, you can still create a database on Oracle 12c as before, called a non-CDB (non-Container Database)
  • Each instance in RAC opens the CDB as a whole; a foreground session sees only the single PDB it is connected to, and sees it just as a non-CDB
  • The Resource Manager is extended with some new inter-PDB capabilities
  • Fully integrated with Oracle Enterprise Manager 12c and SQL Developer
  • Fast provisioning of new databases (empty, or as a copy/clone of an existing PDB)
  • ON CLONE triggers can be used to scrub or mask data during a clone process
  • Fast unplug and plug between CDBs
  • Fast patch or upgrade by unplugging a PDB and plugging it into a different CDB that is already patched or running a later database version
  • Separation of duties between the DBA and application administrators
  • Communication between PDBs is allowed via intra-CDB database links
  • Every PDB has a default service with its name registered in the listener
  • An unplugged PDB carries its lineage, OPatch information, encryption key info, and much more
  • All PDBs in a CDB must use the same character set
  • All PDBs share the same control files, SPFILE, redo log files, flashback log files, and undo
  • Flashback PDB is not available in 12.1; it is expected to be available in 12.2
  • Allows multitenancy of Oracle Databases, which is very useful for centralization, especially when using Exadata

The Multitenant Container Database is only available for Oracle Enterprise Edition as a separately licensed option; all other editions of the Oracle database can only deploy non-CDB or Single Tenant Container Databases.

RMAN new features and enhancements

Now we can continue with a closer look at some of the new features and enhancements introduced in this database version for RMAN.

Container and pluggable database backup and restore

As we saw earlier, the introduction of Oracle 12c and the new pluggable database concept makes it possible to easily centralize multiple databases while maintaining the individuality of each one within a single instance. This new concept also forced Oracle to introduce some enhancements to the existing BACKUP, RESTORE, and RECOVER commands, so that we can efficiently back up or restore the complete CDB, including all PDBs, or just one or more PDBs; if you want to be more specific, you can also back up or restore just one or more tablespaces from a PDB.

Some examples of how to use the RMAN commands when performing a backup on Oracle 12c are:

RMAN> BACKUP DATABASE;                          (To back up the CDB plus all PDBs)
RMAN> BACKUP DATABASE root;                     (To back up only the CDB root)
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;     (To back up all specified PDBs)
RMAN> BACKUP TABLESPACE pdb1:example;           (To back up a specific tablespace in a PDB)

Some examples when performing RESTORE operations are:

RMAN> RESTORE DATABASE;                         (To restore an entire CDB, including all PDBs)
RMAN> RESTORE DATABASE root;                    (To restore only the root container)
RMAN> RESTORE PLUGGABLE DATABASE pdb1;          (To restore a specific PDB)
RMAN> RESTORE TABLESPACE pdb1:example;          (To restore a tablespace in a PDB)

Finally, some examples of RECOVER operations are:

RMAN> RECOVER DATABASE;                         (Root plus all PDBs)
RMAN> RUN {
        SET UNTIL SCN 1428;
        RESTORE DATABASE;
        RECOVER DATABASE;
        ALTER DATABASE OPEN RESETLOGS;
      }
RMAN> RUN {
        RESTORE PLUGGABLE DATABASE pdb1 TO RESTORE POINT one;
        RECOVER PLUGGABLE DATABASE pdb1 TO RESTORE POINT one;
        ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;
      }

Enterprise Manager Database Express

The Oracle Enterprise Manager Database Console (or Database Control) that many of us used to manage an entire database is now deprecated, replaced by the new Oracle Enterprise Manager Database Express. This new tool uses Flash technology and allows the DBA to easily manage the configuration, storage, security, and performance of a database. Note that RMAN, Data Pump, and Oracle Enterprise Manager Cloud Control are now the only tools able to perform backup and recovery operations in a pluggable database environment; in other words, you cannot use Enterprise Manager Database Express for database backup and recovery operations.

Backup privileges

Oracle Database 12c supports the separation of DBA duties for the Oracle Database by introducing task-specific, least-privileged administrative privileges for backups that do not require the SYSDBA privilege. The new system privilege introduced with this release is SYSBACKUP.

Avoid the use of the SYSDBA privilege for backups unless it is strictly necessary.

When connecting to the database using the AS SYSDBA system privilege, you are able to see any object structure and all the data within the object, whereas if you are connecting using the new system privilege AS SYSBACKUP, you will still be able to see the structure of an object but not the object data. If you try to see any data using the SYSBACKUP privilege, the ORA-01031: insufficient privileges message will be raised.

Tighter security policies require a separation of duties. The new SYSBACKUP privilege facilitates the implementation of the separation of duties, allowing backup and recovery operations to be performed without implicit access to the data, so if access to the data is required for one specific user, it will need to be granted explicitly to this user.
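
To illustrate the behavior described above, here is a hedged sketch; the scott.emp table is illustrative and the output is abbreviated:

$ sqlplus / as sysbackup

SQL> SELECT COUNT(*) FROM scott.emp;
SELECT COUNT(*) FROM scott.emp
                           *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> DESC scott.emp     -- the object structure is still visible via the dictionary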

RMAN has introduced some changes when connecting to a database, such as the following (a connection example follows the list):

  • TARGET: The user requires the SYSBACKUP administrative privilege to connect to the TARGET database
  • CATALOG: Whereas in earlier versions a user only needed the RECOVERY_CATALOG_OWNER role to connect to the RMAN catalog, it now also needs to be granted the SYSBACKUP privilege to connect to the catalog
  • AUXILIARY: The user requires the SYSBACKUP administrative privilege to connect to the AUXILIARY database
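
For example, a minimal sketch of connecting to a target database with the new privilege; the user bkpadm and the net service name prod are illustrative (note the quoting most shells require):

$ rman TARGET '"bkpadm@prod AS SYSBACKUP"'

or, from the RMAN prompt:

RMAN> CONNECT TARGET "bkpadm@prod AS SYSBACKUP"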

Some important points about the SYSBACKUP administrative privilege are:

  • It includes permissions for backup and recovery operations
  • It does not include data access privileges such as SELECT ANY TABLE that the SYSDBA privilege has
  • It can be granted to the SYSBACKUP user that is created during the database installation process
  • It's the default privilege used when an RMAN connection string does not contain the AS SYSBACKUP clause:

    $ RMAN TARGET /

    Before connecting as the SYSBACKUP user created during the database creation process, you will need to unlock the account and grant the SYSBACKUP privilege to the user. When you use the GRANT command to give the SYSBACKUP privilege to a user, the username and privilege information will be automatically added to the database password file.

The v$pwfile_users view contains all the information regarding users within the database password file and indicates whether a user has been granted any administrative privilege. Let's take a closer look at this view:

SQL> DESC v$pwfile_users
 Name                          Null?    Type
 ----------------------------- -------- -----------------
 USERNAME                               VARCHAR2(30)
 SYSDBA                                 VARCHAR2(5)
 SYSOPER                                VARCHAR2(5)
 SYSASM                                 VARCHAR2(5)
 SYSBACKUP                              VARCHAR2(5)
 SYSDG                                  VARCHAR2(5)
 SYSKM                                  VARCHAR2(5)
 CON_ID                                 NUMBER

As you can see, this view now contains some new columns, such as:

  • SYSBACKUP: Indicates whether the user can connect using the SYSBACKUP privilege
  • SYSDG: Indicates whether the user can connect using the SYSDG privilege (new, for Data Guard)
  • SYSKM: Indicates whether the user can connect using the SYSKM privilege (new, for Advanced Security)
  • CON_ID: The ID of the current container. A value of 0 indicates that the entry relates to the entire CDB or to an entire traditional database (non-CDB); a value of 1 means the user has access only to root; any other value identifies a specific container ID

To help you clearly understand the use of the SYSBACKUP privilege, let's run a few examples.

Let's connect to our newly created database as SYSDBA and take a closer look at the SYSBACKUP privilege:

$ sqlplus / as sysdba
SQL> SET PAGES 999
SQL> SET LINES 99
SQL> COL USERNAME FORMAT A21
SQL> COL ACCOUNT_STATUS FORMAT A20
SQL> COL LAST_LOGIN FORMAT A41
SQL> SELECT username, account_status, last_login
2 FROM dba_users
3 WHERE username = 'SYSBACKUP';
USERNAME              ACCOUNT_STATUS       LAST_LOGIN
--------------------- -------------------- -----------------------------------------
SYSBACKUP             EXPIRED & LOCKED

As you can see, the SYSBACKUP account created during database creation is currently EXPIRED & LOCKED. You will need to unlock this account and grant the SYSBACKUP privilege to it if you want to use this user for any backup and recovery purposes:

For this demo I will use the original SYSBACKUP account, but in a production environment never use the SYSBACKUP account; instead, grant the SYSBACKUP privilege to the user(s) that will be responsible for the backup and recovery operations.

SQL> ALTER USER sysbackup IDENTIFIED BY "demo" ACCOUNT UNLOCK;

User altered.

SQL> GRANT sysbackup TO sysbackup;

Grant succeeded.

SQL> SELECT username, account_status
  2  FROM dba_users
  3  WHERE account_status NOT LIKE '%LOCKED';

USERNAME              ACCOUNT_STATUS
--------------------- --------------------
SYS                   OPEN
SYSTEM                OPEN
SYSBACKUP             OPEN

We can also easily identify which system privileges and roles are assigned to SYSBACKUP by executing the following SQL statements:

SQL> COL grantee FORMAT A20
SQL> SELECT *
  2  FROM dba_sys_privs
  3  WHERE grantee = 'SYSBACKUP';

GRANTEE       PRIVILEGE                           ADM COM
------------- ----------------------------------- --- ---
SYSBACKUP     ALTER SYSTEM                        NO  YES
SYSBACKUP     AUDIT ANY                           NO  YES
SYSBACKUP     SELECT ANY TRANSACTION              NO  YES
SYSBACKUP     SELECT ANY DICTIONARY               NO  YES
SYSBACKUP     RESUMABLE                           NO  YES
SYSBACKUP     CREATE ANY DIRECTORY                NO  YES
SYSBACKUP     UNLIMITED TABLESPACE                NO  YES
SYSBACKUP     ALTER TABLESPACE                    NO  YES
SYSBACKUP     ALTER SESSION                       NO  YES
SYSBACKUP     ALTER DATABASE                      NO  YES
SYSBACKUP     CREATE ANY TABLE                    NO  YES
SYSBACKUP     DROP TABLESPACE                     NO  YES
SYSBACKUP     CREATE ANY CLUSTER                  NO  YES

13 rows selected.

SQL> COL granted_role FORMAT A30
SQL> SELECT *
  2  FROM dba_role_privs
  3  WHERE grantee = 'SYSBACKUP';

GRANTEE        GRANTED_ROLE                   ADM DEF COM
-------------- ------------------------------ --- --- ---
SYSBACKUP      SELECT_CATALOG_ROLE            NO  YES YES

Here the ADM column (ADMIN_OPTION) indicates whether the user holds the privilege with the ADMIN option, the DEF column (DEFAULT_ROLE) indicates whether the role is designated as a default role for the user, and the COM column (COMMON) indicates whether the grant is common to all the containers and pluggable databases available.

SQL and DESCRIBE

Starting with Oracle 12.1, you can execute SQL commands and PL/SQL procedures directly from the RMAN command line, without needing the SQL prefix or quotes for most SQL commands in RMAN.

You can now run some simple SQL commands in RMAN such as:

RMAN> SELECT TO_CHAR(sysdate,'dd/mm/yy - hh24:mi:ss')
2> FROM dual;

TO_CHAR(SYSDATE,'DD)
-------------------
17/09/12 - 02:58:40

RMAN> DESC v$datafile
 Name                        Null?    Type
 --------------------------- -------- -------------------
 FILE#                                NUMBER
 CREATION_CHANGE#                     NUMBER
 CREATION_TIME                        DATE
 TS#                                  NUMBER
 RFILE#                               NUMBER
 STATUS                               VARCHAR2(7)
 ENABLED                              VARCHAR2(10)
 CHECKPOINT_CHANGE#                   NUMBER
 CHECKPOINT_TIME                      DATE
 UNRECOVERABLE_CHANGE#                NUMBER
 UNRECOVERABLE_TIME                   DATE
 LAST_CHANGE#                         NUMBER
 LAST_TIME                            DATE
 OFFLINE_CHANGE#                      NUMBER
 ONLINE_CHANGE#                       NUMBER
 ONLINE_TIME                          DATE
 BYTES                                NUMBER
 BLOCKS                               NUMBER
 CREATE_BYTES                         NUMBER
 BLOCK_SIZE                           NUMBER
 NAME                                 VARCHAR2(513)
 PLUGGED_IN                           NUMBER
 BLOCK1_OFFSET                        NUMBER
 AUX_NAME                             VARCHAR2(513)
 FIRST_NONLOGGED_SCN                  NUMBER
 FIRST_NONLOGGED_TIME                 DATE
 FOREIGN_DBID                         NUMBER
 FOREIGN_CREATION_CHANGE#             NUMBER
 FOREIGN_CREATION_TIME                DATE
 PLUGGED_READONLY                     VARCHAR2(3)
 PLUGIN_CHANGE#                       NUMBER
 PLUGIN_RESETLOGS_CHANGE#             NUMBER
 PLUGIN_RESETLOGS_TIME                DATE
 CON_ID                               NUMBER

RMAN> ALTER TABLESPACE users
2> ADD DATAFILE '/u01/app/oracle/oradata/cdb1/pdb1/user02.dbf' SIZE 50M;

Statement processed

Remember that the SYSBACKUP privilege does not grant access to the user tables or views, but the SYSDBA privilege does.

Multi-section backups for incremental backups

Oracle Database 11g introduced multi-section backups to allow us to back up and restore very large files using backup sets (remember that Oracle datafiles can be up to 128 TB in size). Now, with Oracle Database 12c, we can also make use of image copies when creating multi-section backups, complementing the previous backup set functionality.

This helps us reduce image copy creation time for backups, transporting tablespaces, cloning, and doing a TSPITR (tablespace point-in-time recovery); it also improves backups when using Exadata.

The main restrictions to make use of this enhancement are:

  • The COMPATIBLE initialization parameter needs to be set to 12.0 or higher to make use of the new image copy multi-section backup feature
  • It is only available for datafiles and cannot be used to back up control files or password files
  • Avoid a high degree of parallelism when a file resides on a small number of disks, so that the processes do not compete with each other when accessing the same device

Another new feature introduced with multi-section backups is the ability to create multi-section incremental backups. This allows RMAN to back up only the data that has changed since the last backup, further enhancing the performance of multi-section backups because the sections are processed independently, either serially or in parallel.
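
As a minimal sketch (the section sizes and the incremental level are illustrative):

RMAN> BACKUP AS COPY SECTION SIZE 1G DATABASE;                 (multi-section image copy)
RMAN> BACKUP INCREMENTAL LEVEL 1 SECTION SIZE 512M DATABASE;   (multi-section incremental backup)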

Network-based recovery

Restoring and recovering files over the network is supported starting with Oracle Database 12c. We can now recover a standby database and synchronize it with its primary database over the network, without the need to ship the archived log files.

When the RECOVER command is executed, an incremental backup is created on the primary database, transferred over the network to the physical standby database, and applied to the standby database to synchronize it with the primary database. RMAN uses the SCN from the standby datafile header and creates the incremental backup starting from this SCN on the primary database; in other words, it brings over only the information necessary for the synchronization process. If block change tracking is enabled for the primary database, it will be used while creating the incremental backup, making it faster.

A network-based recovery can also be used to replace any missing datafiles, control files, SPFILE, or tablespaces on the primary database, using the corresponding files from the physical standby for the recovery operation. You can also use multi-section backup sets, encryption, or even compression within a network-based recovery.
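
For illustration, a hedged sketch run on the standby; prod_primary is an assumed Oracle Net service name that points to the primary database:

RMAN> RESTORE DATAFILE 4 FROM SERVICE prod_primary;
RMAN> RECOVER DATABASE FROM SERVICE prod_primary USING COMPRESSED BACKUPSET;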

Active Duplicate

The Active Duplicate feature generates an online backup of the TARGET database and directly transmits it via an inter-instance network connection to the AUXILIARY database for duplication (nothing is written to disk on the source server). This reduces the impact on the TARGET database by offloading the data transfer operation to the AUXILIARY database, also reducing the duplication time. This very useful feature has now received some important enhancements. In Oracle 11g, when this feature was initially introduced, it only allowed us to use a push process based on image copies. Now we can use the already known push process, or the newly introduced pull process from the AUXILIARY database, which is based on backup sets (the pull process is the new default and automatically copies across all datafiles, control files, the SPFILE, and archived log files). It then restores all the files and uses a memory script to complete the recovery operation and open the AUXILIARY database. RMAN dynamically determines, based on your DUPLICATE clauses, which process will be used (push or pull).

It is quite possible that Oracle will end up deprecating the push process in a future release of the database.

You can now specify your choice of compression, section size, and encryption to be used during the Active Duplication process. For example, if you specify the SET ENCRYPTION option before the DUPLICATE command, all the backups sent from the target to the auxiliary database will be encrypted.

For an effective use of parallelism, allocate more AUXILIARY channels instead of TARGET channels as in the earlier releases.

Finally, another important enhancement is the possibility to finish the duplication process with the AUXILIARY database in a not-open state (by default, the AUXILIARY database is opened after the duplication completes); a sketch of this follows the next list.

This option is very useful when you are required to:

  • Modify the block change tracking
  • Configure fast incremental backups or flashback database settings
  • Move the location of the database, for example, to ASM
  • Upgrade the AUXILIARY database (because the database must not be opened with RESETLOGS prior to applying the upgrade scripts)
  • Handle cases where you know that the attempt to open the database would produce errors
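
A minimal sketch of such a duplication, assuming an auxiliary database named cdb2; the NOOPEN clause leaves the duplicate in the MOUNT state instead of opening it with RESETLOGS:

RMAN> DUPLICATE TARGET DATABASE TO cdb2 FROM ACTIVE DATABASE NOOPEN;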

To make it clearer, let's take a closer look at what operations RMAN will perform when a DUPLICATE command is used:

  1. Create an SPFILE for the AUXILIARY instance.
  2. Mount the backup control file.
  3. Restore the TARGET datafiles on the AUXILIARY database.
  4. Perform incomplete recovery using all the available incremental backups and archived redo log files.
  5. Shut down and restart the AUXILIARY instance in the NOMOUNT mode.
  6. Create a new control file, and create and store the new database ID in the datafiles (this does not happen if the FOR STANDBY clause is in use).
  7. Mount and open the duplicate database using the RESETLOGS option, and create the online redo log files by default. If the NOOPEN option is used, the duplicated database will not be opened with RESETLOGS and will remain in the MOUNT state.

Here are some examples of how to use the DUPLICATE command with PDBs:

RMAN> DUPLICATE TARGET DATABASE TO <CDB1>;
RMAN> DUPLICATE TARGET DATABASE TO <CDB1> PLUGGABLE DATABASE <PDB1>, <PDB2>, <PDB3>;

Support for third-party snapshots

In the past, when using a third-party snapshot technology to make a backup or clone of a database, you were forced to change the database to backup mode (BEGIN BACKUP) before executing the storage snapshot. This requirement is no longer necessary if the following conditions are met:

  • The database is crash-consistent at the point of the snapshot
  • Write ordering is preserved for each file within the snapshot
  • The snapshot stores the time at which it was completed

    If a storage vendor cannot guarantee compliance with these conditions, then you must place your database in backup mode before taking the snapshot.

The RECOVER command now has a newly introduced option called SNAPSHOT TIME that allows RMAN to recover a snapshot that was taken without being in backup mode to a consistent point-in-time.

Some examples of how to use this new option are:

RMAN> RECOVER DATABASE UNTIL TIME
'10/12/2012 10:30:00' SNAPSHOT TIME '10/12/2012 10:00:00';
RMAN> RECOVER DATABASE UNTIL CANCEL SNAPSHOT TIME '10/12/2012 10:00:00';

Only trust your backups after you ensure that they are usable for recovery. In other words, always test your backup methodology first, ensuring that it can be used in the future in case of a disaster.

Cross-platform data transport

Starting with Oracle 12c, transporting data across platforms can be done using backup sets, and you can also create cross-platform inconsistent tablespace backups (taken while the tablespace is not in read-only mode) using image copies and backup sets.

When using backup sets, you can make use of the compression and multi-section options, reducing downtime for tablespace and database platform migrations.

RMAN does not catalog backup sets created for cross-platform transport in the control file, and always takes into consideration the endian format of the platforms and the database open mode.

Before creating a backup set that will be used for a cross-platform data transport, the following prerequisites should be met:

  • The COMPATIBLE parameter in the SPFILE should be set to 12.0 or greater
  • The source database must be open in read-only mode when transporting an entire database, because the SYS and SYSAUX tablespaces participate in the transport process
  • If using Data Pump, the database must be open in read-write mode

You can easily check the current compatible value and open_mode of your database by running the following SQL commands:

SQL> SHOW PARAMETER compatible

NAME                   TYPE        VALUE
---------------------- ----------- ----------------------
compatible             string      12.0.0.0.0

SQL> SELECT open_mode FROM v$database;

OPEN_MODE
--------------------
READ WRITE

When making use of the FOR TRANSPORT or TO PLATFORM clauses in the BACKUP command (a usage sketch follows this list), you cannot make use of the following clauses:

  • CUMULATIVE
  • forRecoveryOfSpec
  • INCREMENTAL LEVEL n
  • keepOption
  • notBackedUpSpec
  • PROXY
  • SECTION SIZE
  • TAG
  • VALIDATE
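
For illustration, a hedged sketch of a cross-platform tablespace backup; the platform name, file locations, and tablespace are assumptions:

RMAN> BACKUP TO PLATFORM 'Linux x86 64-bit'
      FORMAT '/tmp/xplat_example.bck'
      DATAPUMP FORMAT '/tmp/xplat_example_dmp.bck'
      TABLESPACE example;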

Table recovery

In previous versions of Oracle Database, the process to recover a table to a specific point-in-time was never easy. Oracle has now solved this major issue by introducing the possibility to do a point-in-time recovery of a table, a group of tables, or even table partitions, without affecting the remaining database objects, using RMAN. This makes the process easier and faster than ever before. Remember that Oracle has previously introduced features such as database point-in-time recovery (DBPITR), tablespace point-in-time recovery (TSPITR), and Flashback Database; this is an evolution of the same technology and principles.

The recovery of tables and table partitions is useful in the following situations:

  • To recover a very small set of tables to a particular point-in-time
  • To recover a tablespace that is not self-contained to a particular point-in-time; remember that TSPITR can only be used if the tablespace is self-contained
  • To recover tables that are corrupted or dropped with the PURGE option, so that the FLASHBACK DROP functionality cannot be used
  • When logging for Flashback Table is enabled but the flashback target time or SCN is beyond the available undo
  • To recover data that was lost after a data definition language (DDL) operation that changed the structure of a table

To recover tables and table partitions from an RMAN backup, the TARGET database must meet the following prerequisites:

  • It must be in READ/WRITE mode
  • It must be in ARCHIVELOG mode
  • The COMPATIBLE parameter must be set to 12.0 or higher

    You cannot recover tables or table partitions from the SYS, SYSTEM and SYSAUX schemas, or even from a standby database.

Now let's take a closer look at the steps to recover a table or table partitions using RMAN:

  1. First, check that all the prerequisites for a table recovery are met.
  2. Start an RMAN session with the CONNECT TARGET command.
  3. Use the RECOVER TABLE command with all the required clauses.
  4. RMAN determines which backup contains the data that needs to be recovered, based on the point-in-time specified.
  5. RMAN creates an AUXILIARY instance; you can also specify the location of the AUXILIARY instance files using the AUXILIARY DESTINATION or SET NEWNAME clause.
  6. RMAN recovers the specified objects into the AUXILIARY instance.
  7. RMAN creates a Data Pump export dump file that contains the objects.
  8. RMAN imports the recovered objects from the dump file previously created into the TARGET database. If you want to import the objects into the TARGET database manually, you can use the NOTABLEIMPORT clause of the RECOVER command to achieve this goal.
  9. RMAN optionally offers the possibility to rename the recovered objects in the TARGET database using the REMAP TABLE clause, or to import the recovered objects to a different tablespace using the REMAP TABLESPACE clause.

An example of how to use the new RECOVER TABLE command is:

RMAN> RECOVER TABLE SCOTT.test
      UNTIL SEQUENCE 5481 THREAD 2
      AUXILIARY DESTINATION '/tmp/recover'
      REMAP TABLE SCOTT.test:my_test;


Data Pump's new features and enhancements

Now it is time to take a closer look at some of the new features and enhancements introduced for Data Pump in this database version.

Disabling LOGGING on Data Pump Import

A new feature of Data Pump introduced with Oracle 12c is the possibility to disable logging generation during an import operation, giving us faster imports because the redo log information is not written to disk or archived. This is particularly useful for large data loads such as database migrations.

When making use of a NOLOGGING option, always perform a full RMAN backup after the NOLOGGING operation is completed.

This new feature is possible due to the introduction of a new metadata TRANSFORM parameter called DISABLE_ARCHIVE_LOGGING; it can be used on the impdp command line or with the DBMS_DATAPUMP.METADATA_TRANSFORM PL/SQL procedure.

Not all operations are affected when using DISABLE_ARCHIVE_LOGGING; the following operations during an import process will still be logged:

  • The CREATE and ALTER statements (the only exception is the CREATE INDEX statement)
  • All operations against the master table that is being used by Data Pump to coordinate its activities

    If the database is in the FORCE LOGGING mode, logging will not be disabled during an import operation that is making use of the DISABLE_ARCHIVE_LOGGING metadata TRANSFORM parameter.

Here is one example of how this new feature can be used:

$ impdp test/test TABLES=test.test1 DIRECTORY=datapump DUMPFILE=test_test1.dmp TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

Full transportable Export/Import

Full transportable Export/Import combines the usability of Data Pump with the speed of transportable tablespaces, moving all the system, user, and application metadata necessary for a database migration without the complex set of steps required by a traditional transportable tablespace operation. As a result, you achieve a very fast migration, even for very large volumes of data.

A traditional transportable tablespace operation can require you to manually move user and application metadata (such as synonyms, triggers, packages, and so on) before you can make use of the tablespace datafiles migrated to the destination database using this technique.

Full transportable Export/Import takes care of all this, taking advantage of many options available in Data Pump, such as the ability to move metadata over a database link, and combining them with the transportable tablespace mechanism to move user and application data. As a result, you can accomplish a full database migration with fewer commands, making it all easier, faster, and cleaner.

Full transportable Export has existed since database version 11.2.0.3, but full transportable Import was newly introduced with 12.1.0.1.

Full transportable Export/Import allows cross-platform, cross-endian migration with the use of RMAN CONVERT on the datafiles, and also allows you to transfer dump files over a NETWORK_LINK.
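
A minimal sketch of a full transportable export; the credentials, directory, and dump file name are illustrative, and VERSION=12 is only required when exporting from an 11.2.0.3 database for import into 12c:

$ expdp system/password FULL=y TRANSPORTABLE=ALWAYS VERSION=12 DIRECTORY=datapump DUMPFILE=full_tts.dmp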

Exporting views as tables

Another new feature introduced in Data Pump is the option to export a view as a table. In this case, Data Pump includes in the dump file the corresponding table definition and all the data that was visible in the view, instead of writing only the view definition. This allows the Data Pump import to create it as a table, with the same columns and data as the original view, during the import process. Objects depending on the view are also exported as if they were defined on the table, and grants and constraints associated with the view are recorded as grants and constraints on the corresponding table in the dump file.

This new ability to export views as tables can be used for:

  • Exporting data subsets for development and testing
  • Data Replication
  • Exporting a denormalized dataset spanning multiple related tables (for example, when moving data from an online transaction processing (OLTP) system to a data warehouse system)
  • Offline archival purposes

    Here is one simple example of how to export a view as a table:

    $ expdp test/test DIRECTORY=datapump DUMPFILE=testvwasatable.dmp VIEWS_AS_TABLES=employees_v

Extended character data types

This new version of the Oracle database extends the maximum size of the VARCHAR2 and NVARCHAR2 data types from 4000 bytes to 32,767 bytes, and of the RAW data type from 2000 bytes to 32,767 bytes. Columns with a data value of 4000 bytes or less are stored inline; only when they exceed 4000 bytes (becoming extended character data type columns) are they moved out-of-line, leveraging Oracle Large Object (LOB) technology. The Data Pump utilities and the related PL/SQL packages DBMS_DATAPUMP and DBMS_METADATA have been modified to support the extended data types.

Your COMPATIBLE parameter in the SPFILE should be 12.0 or greater, and the MAX_STRING_SIZE parameter should be set to EXTENDED to allow Data Pump to support the extended character data types.
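
As a hedged sketch, enabling extended data types on a non-CDB requires a restart in UPGRADE mode and the utl32k.sql script (the steps differ for PDBs and RAC; consult the documentation before trying this on a real system):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> ALTER SYSTEM SET max_string_size = EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP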

Encryption password

Previously, Data Pump allowed us to supply a password-based encryption key on the command line when generating dump files, making it easy for the password to be exposed. Oracle has now enhanced this option by adding to Data Pump the ability to prompt the user for the encryption password without the value being echoed as it is entered. With the new command-line parameter ENCRYPTION_PWD_PROMPT=Y, the password is supplied silently at run time, so it is not visible to commands such as ps and is not stored in scripts.

The concurrent use of the ENCRYPTION_PASSWORD and ENCRYPTION_PWD_PROMPT parameters is prohibited and will generate an error.
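
A minimal sketch (the directory and dump file name are illustrative); the utility prompts for the password without echoing it:

$ expdp test/test DIRECTORY=datapump DUMPFILE=test_enc.dmp ENCRYPTION_PWD_PROMPT=Y

Encryption password: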

Compressing tables on Import

In earlier versions, Data Pump always performed an import operation using the same compression settings that were present during the export. Data Pump is now enhanced to let you specify a compression method at import time, regardless of the compression method used when the data was exported; or, if you prefer, you can now also decompress tables during an import.

This new feature is possible due to the introduction of a new metadata TRANSFORM parameter called TABLE_COMPRESSION_CLAUSE. If set to NONE, the table compression clause is ignored and the table is imported using the default compression mode of the tablespace; alternatively, you can use any valid table compression clause, such as NOCOMPRESS, COMPRESS BASIC, COMPRESS FOR OLTP, COMPRESS FOR QUERY, or COMPRESS FOR ARCHIVE.

If the table compression clause has more than one word, then you must use single or double quotation marks.

Here you can see an example of using compression on import:

$ impdp test/test DIRECTORY=datapump DUMPFILE=test.dmp TRANSFORM=TABLE_COMPRESSION_CLAUSE:'COMPRESS FOR OLTP'

Exporting data from Database Vault

If you are using Data Pump Export without encryption on a database where Database Vault is enabled, Data Pump will now generate a warning message to ensure that users are not exporting sensitive data without encryption by mistake. This is only a warning message, and you can choose between continuing with the export process or stopping it and restarting it with encryption enabled.

Creating SecureFile LOBs on Import

You can now specify the LOB storage method to be used during a Data Pump Import, regardless of the settings in the source database when the Data Pump Export was performed. This provides a straightforward way for users to migrate from BasicFile LOBs to SecureFile LOBs.

Once again, this new feature is only possible due to the introduction of a new metadata TRANSFORM parameter, called LOB_STORAGE (as you can see, Oracle has introduced many enhancements to the TRANSFORM parameter in Data Pump).

Now let's take a look at an example of how to use this new TRANSFORM parameter in real life:

$ impdp test/test DIRECTORY=datapump DUMPFILE=test.dmp TRANSFORM=LOB_STORAGE:SECUREFILE

When setting the parameter to SECUREFILE or BASICFILE, you are specifying the LOB storage method that will be applied to all LOBs during the import session. If DEFAULT is set, the LOB clause is omitted and all imported tables use the default LOB storage mode associated with the destination tablespace. Finally, if you set the parameter to NO_CHANGE, the import process uses the LOB storage clauses that were recorded in the export dump file.

Auditing Data Pump commands

Oracle Database 12c introduced a unified and extensible audit framework that unifies various audit trails from different database components and is extensible to accommodate additional audit types and audit data fields. Data Pump utilities were extended to leverage the new audit framework, providing a comprehensive audit trail of all the invocations of Data Pump.

Here we can see some examples of the use of this new audit feature:

SQL> CREATE AUDIT POLICY datapump
  2  ACTIONS COMPONENT=DATAPUMP EXPORT;

SQL> AUDIT POLICY datapump;

SQL> ALTER AUDIT POLICY datapump
  2  ADD ACTIONS COMPONENT=DATAPUMP IMPORT;

SQL> DROP AUDIT POLICY datapump;
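
Once a policy is enabled, the recorded Data Pump activity can be inspected in the unified audit trail; a hedged sketch, assuming the 12c UNIFIED_AUDIT_TRAIL view and its Data Pump columns:

SQL> SELECT dbusername, action_name, dp_text_parameters1
  2  FROM unified_audit_trail
  3  WHERE audit_type = 'Datapump';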

Summary

This article provided a sneak peek at some of the great new features and enhancements introduced with Oracle 12c, with a very high-level overview of each feature.


About the Authors


Aman Sharma

Aman Sharma is an Oracle Database consultant and instructor. He holds a Master's degree in Computer Applications and has been working with Oracle Database for over a decade. His main focus is to understand how Oracle Database works internally. Besides the core database, he has a very strong knowledge of Linux, Solaris, Oracle RAC, Data Guard, RMAN, Oracle Exadata and Oracle Enterprise Manager.

He is an Oracle Certified Professional and also an Oracle Certified Expert in Oracle RAC, SQL, and Linux. He is also an Oracle Certified Implementation Specialist and a certified Solaris System Administrator. He loves to share his learning with the Oracle community, so when he is not delivering an Oracle-related session, he spends a lot of his time participating in OTN (Oracle Technology Network) discussion forums. He also maintains an Oracle blog (http://blog.aristadba.com) and is on Twitter (@amansharma81); you can reach him either way. He is a strong supporter of user groups, is a member of the India Oracle Users Group (http://www.aioug.org), and has been a speaker at various Oracle conferences organized by AIOUG. He is also an associate member of IOUG (http://www.ioug.org). In 2010, he was awarded the prestigious Oracle ACE award by Oracle Corporation.

Francisco Munoz Alvarez

Francisco Munoz Alvarez has over two decades of experience in consulting, analysis, support, implementation, and migration of Oracle products. He is also an expert in most phases of a database life cycle, for example, development, stabilization, security, backup and recovery, tuning, installations, and data warehouse (ETL) with excellent implementation and support methodologies. He is a popular speaker at many Oracle conferences around the world.

He is also the President of CLOUG (Chilean Oracle Users Group), LAOUC (Latin American Oracle Users Group Community, which is the umbrella organization for all of Latin America), and NZOUG (New Zealand Oracle Users Group). He worked as an Associate Technologist for Oracle in Brazil and Chile. He was an Oracle instructor for the New Horizons Centre (Chile) and for Oracle Education (Brazil and Chile). He also worked in the first team to introduce Oracle to South America (Oracle 6 and the beta version of Oracle 7). He was also the first Master Oracle 7 Database Administrator in South America, as well as the first Latin American Oracle professional to be awarded a double ACE (ACE in 2008 and ACE Director in 2009) by Oracle HQ. In 2010, he had the privilege to receive a prestigious Oracle Magazine Editor's Choice Award as the Oracle Evangelist of the Year—a huge recognition for his outstanding achievements in the Oracle world that includes the creation and organization of the already famous OTN Tours that are the biggest Oracle evangelist events in the world.

Francisco has worked for Revera Limited, a leading provider of utility computing infrastructure and enterprise data management in New Zealand, as its Oracle Professional Services Manager since June 2011. He also maintains an Oracle blog (http://www.oraclenz.org), and you can always contact him through it or on Twitter (@fcomunoz) regarding any questions about this book.


