Oracle Database 11gR2 Performance Tuning Cookbook

Product type Book
Published in Jan 2012
Publisher Packt
ISBN-13 9781849682602
Pages 542
Edition 1st Edition
Author Ciro Fiorillo

Table of Contents (21) Chapters

Oracle Database 11gR2 Performance Tuning Cookbook
Credits
About the Author
Acknowledgement
About the Reviewers
www.PacktPub.com
Preface
Starting with Performance Tuning
Optimizing Application Design
Optimizing Storage Structures
Optimizing SQL Code
Optimizing Sort Operations
Optimizing PL/SQL Code
Improving the Oracle Optimizer
Other Optimizations
Tuning Memory
Tuning I/O
Tuning Contention
Dynamic Performance Views
A Summary of Oracle Packages Used for Performance Tuning
Index

Chapter 8. Other Optimizations

In this chapter, we will cover:

  • Caching results with the client-side result cache

  • Enabling parallel SQL

  • Direct path inserting

  • Using create table as select

  • Inspecting indexes and triggers overhead

  • Loading data with SQL*Loader and Data Pump

Introduction


In this chapter we will look at optimizations related to both queries and DML operations.

The first two recipes will show some features that can speed up a query; the following four recipes focus on different techniques for loading data into the database, both from external sources and from other tables within the database.

We will also focus on the overhead introduced by indexes and triggers. We have seen in past recipes that using indexes can speed up our queries, resulting in faster execution. However, in the recipes of this chapter, we will see how over-indexing a table can lead to poor DML performance.

Regarding data loading, we will see how to use SQL*Loader and Data Pump to load our data faster. Direct-path inserting and CREATE TABLE AS SELECT will help us populate tables using data already available in the database.

Caching results with the client-side result cache


In this recipe we will see how to enable and use the client-side result cache to achieve significant improvements when repeatedly executing complex queries.

Getting ready

To enable the client result cache feature in a database, we need to alter the CLIENT_RESULT_CACHE_SIZE parameter, setting a cache size greater than zero (the default value). This parameter specifies the maximum size, in bytes, of the per-process result set cache used by each client process.

To do so we need to execute the following commands, querying for the actual value of the parameter:

CONNECT / AS SYSDBA
SHOW PARAMETER CLIENT_RESULT_CACHE_SIZE

If we need to alter the size, either because it is set to zero or because we want to change the current value, we can use the following commands. Once we have set the client result cache size to 5 MB, we restart the instance to apply the change:

ALTER SYSTEM SET CLIENT_RESULT_CACHE_SIZE=5M SCOPE=SPFILE;
SHUTDOWN...
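Once the instance is restarted, a query can ask to be cached on the client with the RESULT_CACHE hint. The following is a minimal sketch; the SH schema connection and the SALES table follow the conventions of the other recipes in this chapter, and the specific query is illustrative:

```sql
-- Connect as a regular user and request client-side result caching
-- with the RESULT_CACHE hint (query chosen only for illustration):
CONNECT sh@TESTDB/sh

SELECT /*+ RESULT_CACHE */
       prod_id, SUM(amount_sold)
  FROM sales
 GROUP BY prod_id;

-- Repeated executions of the same statement can then be answered from
-- the client cache, avoiding a round-trip to the server.
```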

Enabling parallel SQL


In the recent past, microprocessor design has moved from the megahertz era to the multi-core era. Even laptops and small devices now have multi-core CPUs, which applications can exploit by performing work in parallel.

In this recipe we will see how to enable parallel execution of queries to speed them up.

Getting ready

To observe a performance gain in this recipe, we obviously need a machine with at least two cores. On a single-core machine, parallel SQL execution performs worse than normal serial execution.

How to do it...

In this recipe, we will use a query that returns about 1 million records. To avoid displaying this huge amount of data in our terminal, we copy the code into a SQL script file, naming it TEST.SQL, for example, and then execute it using the @ command from SQL*Plus:

@TEST.SQL

For clarity, the content of the script is split into the following steps:

  1. Connect to the database...
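The truncated steps above can be sketched as follows. This is a hedged outline, not the book's exact script: the degree of parallelism of 4 and the COUNT(*) query are assumptions made for illustration, while the SH schema and SALES table follow the other recipes:

```sql
-- Illustrative sketch of a parallel query session:
CONNECT sh@TESTDB/sh

-- Make sure parallel query is enabled for the session:
ALTER SESSION ENABLE PARALLEL QUERY;

-- Request a degree of parallelism of 4 with the PARALLEL hint
-- (the alias "s" in the hint must match the one in the FROM clause):
SET TIMING ON
SELECT /*+ PARALLEL(s, 4) */ COUNT(*)
  FROM sales s;
SET TIMING OFF
```

On a multi-core machine, comparing the timing with and without the hint should show the parallel version completing faster; on a single core it will not.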

Direct path inserting


In this recipe we will see how to insert many rows in a table using a particular INSERT statement to boost performance.

How to do it...

The following steps demonstrate inserting many rows with a single INSERT statement:

  1. Connect to the SH schema:

    CONNECT sh@TESTDB/sh
    
  2. Create an empty table MY_SALES with the SALES table structure:

    CREATE TABLE MY_SALES AS SELECT * FROM SALES WHERE ROWNUM < 1;
    
  3. Insert all the rows from the SALES table into the newly created table:

    SET TIMING ON
    INSERT INTO MY_SALES SELECT * FROM SALES;
    COMMIT;
    SET TIMING OFF
    
  4. Empty the MY_SALES table:

    TRUNCATE TABLE MY_SALES;
    
  5. Insert all of the rows from the SALES table into the newly created table using direct-path inserting:

    SET TIMING ON
    INSERT /*+ APPEND */ INTO MY_SALES SELECT * FROM SALES;
    COMMIT;
    SET TIMING OFF
    
  6. Show the execution plan for the conventional INSERT statement:

    EXPLAIN PLAN FOR INSERT INTO MY_SALES SELECT * FROM SALES;
    SELECT * FROM TABLE(
      DBMS_XPLAN.DISPLAY(null,null,'TYPICAL -BYTES'));
    
  7. Show the...
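The truncated step above presumably displays the plan for the direct-path insert. A plausible sketch, mirroring step 6, would be the following; with the APPEND hint the plan should show a LOAD AS SELECT operation instead of the conventional load:

```sql
-- Sketch: inspect the execution plan for the direct-path INSERT.
EXPLAIN PLAN FOR INSERT /*+ APPEND */ INTO MY_SALES SELECT * FROM SALES;
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY(null,null,'TYPICAL -BYTES'));
```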

Using create table as select


In this recipe we will see how to create a table as the result of a selection from other tables or views in the database.

How to do it...

The following steps demonstrate how to use a selection to create a table:

  1. Connect to the SH schema:

    CONNECT sh@TESTDB/sh
    
  2. Create the empty table MY_SALES, and copy the SALES table structure:

    CREATE TABLE MY_SALES AS SELECT * FROM SALES WHERE ROWNUM < 1;
    
  3. Insert all the rows from the SALES table into MY_SALES using direct-path inserting:

    SET TIMING ON
    INSERT /*+ APPEND */ INTO MY_SALES SELECT * FROM SALES;
    SET TIMING OFF
    
  4. Drop the MY_SALES table:

    DROP TABLE MY_SALES;
    
  5. Create table MY_SALES as a selection from SALES table:

    SET TIMING ON
    CREATE TABLE MY_SALES AS SELECT * FROM SALES;
    SET TIMING OFF
    
  6. Inspect the execution plan for the INSERT statement in step 3:

    EXPLAIN PLAN FOR
     INSERT /*+ APPEND */ INTO MY_SALES
     SELECT * FROM SALES;
    SELECT * FROM TABLE(
     DBMS_XPLAN.DISPLAY(null,null,'TYPICAL -BYTES'));
    
  7. Drop the MY_SALES table:

    ...
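As a variation on step 5, CREATE TABLE AS SELECT can be combined with NOLOGGING and PARALLEL to reduce redo generation and elapsed time further. This is a hedged sketch rather than part of the recipe; the degree of parallelism is an arbitrary example, and whether NOLOGGING is acceptable depends on your backup and recovery strategy, since the operation is not fully recoverable from the redo log:

```sql
-- Illustrative CTAS with minimal logging and parallel execution:
CREATE TABLE MY_SALES NOLOGGING PARALLEL 4
  AS SELECT * FROM SALES;
```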

Inspecting indexes and triggers overhead


In this recipe we will see the overhead introduced by indexes and triggers on DML operations. We will explore alternative ways to implement calculated fields using virtual columns instead of triggers.

How to do it...

The following steps will demonstrate the index and trigger overheads:

  1. Connect to the SH schema:

    CONNECT sh@TESTDB/sh
    
  2. Create an empty table MY_CUSTOMERS, copying the CUSTOMERS table structure:

    CREATE TABLE MY_CUSTOMERS AS
      SELECT * FROM CUSTOMERS WHERE ROWNUM < 1;
    
  3. Insert all of the records from CUSTOMERS to MY_CUSTOMERS, measuring time:

    SET TIMING ON
    INSERT INTO MY_CUSTOMERS SELECT * FROM CUSTOMERS;
    SET TIMING OFF
    
  4. Truncate the MY_CUSTOMERS table:

    TRUNCATE TABLE MY_CUSTOMERS;
    
  5. Add a unique index and three B-tree indexes on the MY_CUSTOMERS table:

    CREATE UNIQUE INDEX IX1_MY_CUSTOMERS
     ON MY_CUSTOMERS (CUST_ID);
    CREATE INDEX IX2_MY_CUSTOMERS
     ON MY_CUSTOMERS (CUST_LAST_NAME, CUST_FIRST_NAME);
    CREATE INDEX IX3_MY_CUSTOMERS
     ON MY_CUSTOMERS...
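The introduction to this recipe mentions replacing triggers with virtual columns for calculated fields. The following is a hedged sketch of that idea; the table and column names are illustrative, not part of the SH schema. In Oracle 11g a virtual column is computed on the fly from its defining expression, so no trigger has to fire on every DML operation to keep a derived value current:

```sql
-- Sketch: a derived "full name" as an 11g virtual column instead of
-- a BEFORE INSERT/UPDATE trigger maintaining a regular column:
CREATE TABLE MY_CUSTOMERS_VC (
  CUST_ID         NUMBER,
  CUST_FIRST_NAME VARCHAR2(20),
  CUST_LAST_NAME  VARCHAR2(40),
  CUST_FULL_NAME  VARCHAR2(62) GENERATED ALWAYS AS
    (CUST_FIRST_NAME || ' ' || CUST_LAST_NAME) VIRTUAL
);
```

The virtual column can be indexed and queried like an ordinary column, while inserts pay none of the per-row overhead a trigger would add.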

Loading data with SQL*Loader and Data Pump


In this recipe we will see how to load data from text files into an Oracle database using two different tools: SQL*Loader and external tables. We will also see how to use Data Pump to transfer data between different Oracle databases.

Getting ready

In order to test the functionality provided by SQL*Loader, we need some text files containing data. The file used in this recipe contains United States census participation-rates data, freely available at the following site:

http://2010.census.gov/cgi-bin/staterates.cgi

The direct link to the file used in the recipe is at:

http://2010.census.gov/2010census/take10map/downloads/participationrates2010.txt

To test the recipe, download the file from the above link and save it to a directory accessible from the database server. In the recipe code we will use /oracle/home/ as the path for the file.

In the same directory, create a text file, using your preferred text editor, naming it loaddata.ldr, and copy the...
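The control file contents are truncated above, so the following is only a hypothetical sketch of what a loaddata.ldr control file for a comma-separated file could look like. The target table name CENSUS_RATES and the field list are assumptions for illustration; the real field list must match the actual layout of participationrates2010.txt:

```sql
-- loaddata.ldr: hypothetical SQL*Loader control file.
LOAD DATA
INFILE '/oracle/home/participationrates2010.txt'
INTO TABLE CENSUS_RATES
FIELDS TERMINATED BY ','
(STATE, RATE_2010, RATE_2000)
```

It would then be invoked from the command line with something like `sqlldr sh/sh@TESTDB control=loaddata.ldr`, again assuming the connection conventions used throughout this chapter.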
