In this chapter we will look at optimizations related to both queries and DML operations.
The first two recipes will show some features that can speed up a query; the following four recipes focus on different techniques for loading data into the database, both from external sources and from other tables inside the database.
We will also focus on the overhead introduced by indexes and triggers. We have seen in previous recipes that indexes can speed up our queries, resulting in faster execution. However, in the recipes of this chapter, we will see how over-indexing a table can lead to poor DML performance.
Regarding data loading, we will see how to use SQL*Loader and Data Pump to load our data faster. Direct path inserts and creating tables from a SELECT will help us populate tables using data already available in the database.
In this recipe we will see how to enable and use the client-side result cache to achieve significant improvements when repeatedly executing complex queries.
To enable the client result cache feature in a database, we need to alter the CLIENT_RESULT_CACHE_SIZE parameter, setting a cache size greater than zero (the default value). This parameter specifies the maximum size, in bytes, of the result set cache for each client process.
To do so, we execute the following commands, querying the current value of the parameter:
CONNECT / AS SYSDBA
SHOW PARAMETER CLIENT_RESULT_CACHE_SIZE
If we need to alter the size because it is set to zero, or if we want to change the current size, we can use the following commands. Once we have set the client result cache size to 5 MB, we restart the instance for the change to take effect:
ALTER SYSTEM SET CLIENT_RESULT_CACHE_SIZE=5M SCOPE=SPFILE;
SHUTDOWN...
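Once the instance has been restarted with a non-zero cache size, individual queries can be marked as cacheable with the RESULT_CACHE hint, and client-side cache usage can be inspected through the CLIENT_RESULT_CACHE_STATS$ view. A minimal sketch, assuming the SH sample schema used elsewhere in this chapter:

```sql
-- Mark the query result as cacheable; repeated executions from an
-- OCI-based client can then be answered from the client-side cache
-- without a round-trip to the server.
SELECT /*+ RESULT_CACHE */ CUST_ID, COUNT(*)
  FROM SALES
 GROUP BY CUST_ID;

-- Inspect client result cache statistics (as a privileged user)
SELECT STAT_ID, NAME, VALUE
  FROM CLIENT_RESULT_CACHE_STATS$;
```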
In recent years, microprocessor design has shifted from the megahertz era to the multi-core era. Even laptops and small devices now have multi-core CPUs, which applications can exploit by performing work in parallel.
In this recipe we will see how to enable parallel execution of queries to speed them up.
To observe a performance gain in this recipe, we obviously need a machine with at least two cores. On a single-core machine, parallel SQL execution leads to worse performance than normal sequential execution.
In this recipe, we will use a query that returns about 1 million records. To avoid displaying this huge amount of data in our terminal, we copy the code into a SQL script file, named TEST.SQL for example, and then execute it using the @ command from SQL*Plus:
@TEST.SQL
For clarity, the content of the script is split into the following steps:
Connect to the database...
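The query used by the script is truncated above; as an illustration of the kind of statement it might run, a PARALLEL hint requests parallel execution of a full scan (the degree of 4 here is an arbitrary choice, and the SH sample schema is assumed):

```sql
-- Request parallel execution with a degree of parallelism of 4
SELECT /*+ PARALLEL(s, 4) */ *
  FROM SALES s;
```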
In this recipe we will see how to insert many rows into a table using a particular form of the INSERT statement to boost performance.
The following steps demonstrate inserting multiple rows with the same INSERT statement:
Connect to the SH schema:
CONNECT sh@TESTDB/sh
Create an empty table MY_SALES with the SALES table structure:
CREATE TABLE MY_SALES AS SELECT * FROM SALES WHERE ROWNUM < 1;
Insert all the rows from the SALES table into the newly-created table:
SET TIMING ON
INSERT INTO MY_SALES SELECT * FROM SALES;
COMMIT;
SET TIMING OFF
Empty the MY_SALES table:
TRUNCATE TABLE MY_SALES;
Insert all of the rows from the SALES table into the newly-created table using direct path insertion:
SET TIMING ON
INSERT /*+ APPEND */ INTO MY_SALES SELECT * FROM SALES;
COMMIT;
SET TIMING OFF
Show the execution plan for the classical INSERT statement:
EXPLAIN PLAN FOR INSERT INTO MY_SALES SELECT * FROM SALES;
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY(null, null, 'TYPICAL -BYTES'));
Show the...
In this recipe we will see how to create a table as the result of a selection from other tables or views in the database.
The following steps demonstrate how to use a selection to create a table:
Connect to the SH schema:
CONNECT sh@TESTDB/sh
Create the empty table MY_SALES, copying the SALES table structure:
CREATE TABLE MY_SALES AS SELECT * FROM SALES WHERE ROWNUM < 1;
Insert all the rows from the SALES table into MY_SALES using direct path insertion:
SET TIMING ON
INSERT /*+ APPEND */ INTO MY_SALES SELECT * FROM SALES;
SET TIMING OFF
Drop the MY_SALES table:
DROP TABLE MY_SALES;
Create the table MY_SALES as a selection from the SALES table:
SET TIMING ON
CREATE TABLE MY_SALES AS SELECT * FROM SALES;
SET TIMING OFF
Inspect the execution plan for the INSERT statement in step 3:
EXPLAIN PLAN FOR INSERT /*+ APPEND */ INTO MY_SALES SELECT * FROM SALES;
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY(null, null, 'TYPICAL -BYTES'));
Drop the MY_SALES table:
...
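Both the direct path INSERT and the CREATE TABLE AS SELECT above generate redo by default. When the data can be reloaded from its source in case of a media failure, a common variation is to combine CTAS with the NOLOGGING clause to reduce redo generation; a sketch, not part of the recipe's timed steps:

```sql
-- Minimal redo is generated for the data load; a backup should be
-- taken afterwards, since the loaded data cannot be recovered from redo.
CREATE TABLE MY_SALES NOLOGGING AS SELECT * FROM SALES;
```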
In this recipe we will see the overhead introduced by indexes and triggers on DML operations. We will explore alternative ways to implement calculated fields using virtual columns instead of triggers.
The following steps will demonstrate the index and trigger overheads:
Connect to the SH schema:
CONNECT sh@TESTDB/sh
Create an empty table MY_CUSTOMERS, copying the CUSTOMERS table structure:
CREATE TABLE MY_CUSTOMERS AS SELECT * FROM CUSTOMERS WHERE ROWNUM < 1;
Insert all of the records from CUSTOMERS into MY_CUSTOMERS, measuring time:
SET TIMING ON
INSERT INTO MY_CUSTOMERS SELECT * FROM CUSTOMERS;
SET TIMING OFF
Truncate the MY_CUSTOMERS table:
TRUNCATE TABLE MY_CUSTOMERS;
Add a unique index and three B-tree indexes on the MY_CUSTOMERS table:
CREATE UNIQUE INDEX IX1_MY_CUSTOMERS ON MY_CUSTOMERS (CUST_ID);
CREATE INDEX IX2_MY_CUSTOMERS ON MY_CUSTOMERS
  (CUST_LAST_NAME, CUST_FIRST_NAME);
CREATE INDEX IX3_MY_CUSTOMERS ON MY_CUSTOMERS...
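The recipe's introduction mentions virtual columns as an alternative to triggers for calculated fields. As a sketch of that idea (the column name and expression below are illustrative assumptions, not part of the recipe), a derived value can be declared directly in the table instead of being maintained by a BEFORE INSERT OR UPDATE trigger:

```sql
-- Hypothetical calculated field implemented as a virtual column;
-- the value is computed on the fly, with no trigger overhead on DML.
ALTER TABLE MY_CUSTOMERS ADD (
  CUST_FULL_NAME GENERATED ALWAYS AS
    (CUST_FIRST_NAME || ' ' || CUST_LAST_NAME) VIRTUAL
);
```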
In this recipe we will see how to load data from text files into an Oracle database using two different tools: SQL*Loader and external tables. We will also see how to use Data Pump to transfer data between different Oracle databases.
In order to test the functionality provided by SQL*Loader, we need some text files containing data. The text files used in these recipes contain the United States census participation rates data, freely available at the following site:
http://2010.census.gov/cgi-bin/staterates.cgi
The direct link to the file used in the recipe is at:
http://2010.census.gov/2010census/take10map/downloads/participationrates2010.txt
To test the recipe, download the file from the above link and save it to a directory accessible from the database server. In the recipe code, we will use /oracle/home/ as the path for the file.
In the same directory, create a text file using your preferred text editor, naming it loaddata.ldr, and copy the...