Recovery in PostgreSQL 9

by Simon Riggs | October 2010 | Open Source

The previous article, Backup in PostgreSQL 9, showed us various backup options.

In this article by Simon Riggs, author of PostgreSQL 9 Administration Cookbook, we will cover the following:

  • Recovery of all databases
  • Recovery to a point in time
  • Recovery of a dropped/damaged table
  • Recovery of a dropped/damaged database
  • Recovery of a dropped/damaged tablespace
  • Improving performance of backup/recovery
  • Incremental/Differential backup and restore

 


Recovery of all databases

Recovery of a complete database server, including all of its databases, is an important feature. This recipe covers how to do that in the simplest way possible.

Getting ready

Find a suitable server on which to perform the restore.

Before you recover onto a live server, always take another backup. Whatever problem you thought you had could be just about to get worse.

How to do it...

LOGICAL (from custom dump -F c):

  • Restore of all databases means simply restoring each individual database from each dump you took. Confirm you have the correct backup before you restore:
    pg_restore --schema-only -v dumpfile | head | grep Started
  • Reload globals from script file as follows:
    psql -f myglobals.sql
  • Reload all databases. Create the databases using parallel tasks to speed things along. This can be executed remotely without needing to transfer the dumpfile between systems. Note that there is a separate dumpfile for each database; a hedged scripting sketch follows this list.
    pg_restore -d postgres -j 4 dumpfile
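A minimal sketch of scripting those reloads, not from the book: it assumes one custom-format dumpfile per database, named dbname.dmp, in /backups/pg, and that the globals have already been reloaded as shown above.

# Hedged sketch: restore each per-database custom dump in turn
for f in /backups/pg/*.dmp
do
    db=$(basename "$f" .dmp)        # derive the database name from the file name
    createdb "$db"                  # create an empty database to restore into
    pg_restore -d "$db" -j 4 "$f"   # restore it using four parallel jobs
done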

LOGICAL (from script dump created by pg_dump -F p):

As above, though this time we use psql to execute the script. This can be executed remotely without needing to transfer the dumpfile between systems.

  • Confirm you have the correct backup before you restore. If the following command returns nothing, then the file is not timestamped, and you'll have to identify it in a different way:
    head myscriptdump.sql | grep Started
  • Reload globals from script file as follows:
    psql -f myglobals.sql
  • Reload all scripts like the following:
    psql -f myscriptdump.sql

LOGICAL (from script dump created by pg_dumpall):

We need to follow the procedure shown next.

  • Confirm you have the correct backup before you restore. If the following command returns nothing, then the file is not timestamped, and you'll have to identify it in a different way:
    head myscriptdump.sql | grep Started
  • Find a suitable server, or create a new virtual server.
  • Reload the script in full, as follows:
    psql -f myscriptdump.sql

PHYSICAL:

  • Restore the backup file onto the target server.
  • Extract the backup file into the new data directory.
  • Confirm that you have the correct backup before you restore.
    $ cat backup_label
    START WAL LOCATION: 0/12000020 (file 000000010000000000000012)
    CHECKPOINT LOCATION: 0/12000058
    START TIME: 2010-06-03 19:53:23 BST
    LABEL: standalone
  • Check all file permissions and ownerships are correct and links are valid. That should already be the case if you are using the postgres userid everywhere, which is recommended.
  • Start the server.

That procedure is straightforward. It also helps us understand that we need both a base backup and the appropriate WAL files.

If you used other techniques, then we need to step through the tasks to make sure we cover everything required as follows:

  • Shut down any server running in the data directory.
  • Restore the backup so that any files in the data directory that have matching names are replaced with the version from the backup. (The manual says to delete all files and then restore the backup; that can be a lot slower than running an rsync between your backup and the destination without the --update option, as sketched after this list.) Remember that this step can be performed in parallel to speed things up, though it is up to you to script that.
  • Check that all file permissions and ownerships are correct and links are valid. That should already be the case if you are using the postgres userid everywhere, which is recommended.
  • Remove any files in pg_xlog/.
  • Copy in the latest WAL files from the running server, if there are any.
  • Add a recovery.conf file and set its permissions and ownership correctly as well.
  • Start the server.
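Here is the rsync step mentioned in the list above as a minimal sketch; the backup location and data directory path are assumptions, not requirements:

# Hedged example: copy the base backup over the data directory, replacing
# any files that differ (no --update option, so the backup's versions win)
rsync -av /backups/pg/servername/base/ /var/lib/pgsql/data/

# then clear out old WAL files, as in the step above
rm -f /var/lib/pgsql/data/pg_xlog/*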

The only part that requires some thought and checking is which parameters you select for the recovery.conf. There's only one that matters here, and that is the restore_command.

restore_command tells us how to restore archived WAL files. It needs to be the command that will be executed to bring back WAL files from the archive.

If you are forward-thinking, there'll be a README.backup file for you to read to find out how to set the restore_command. If not, then presumably you've got the location of the WAL files you've been saving written down somewhere.

Say, for example, that your files are being saved to a directory named /backups/pg/servername/archive, owned by the postgres user.

If the archive is held on a remote server named backup1, we would then write the following, all on one line, in the recovery.conf:

restore_command = 'scp backup1:/backups/pg/servername/archive/%f %p'
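If the archive directory is mounted locally rather than held on backup1, a hedged alternative is a plain cp; the path is illustrative:

restore_command = 'cp /backups/pg/servername/archive/%f %p'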

How it works...

PostgreSQL is designed to require very minimal information to perform a recovery. We try hard to wrap all the details up for you.

  • Logical recovery: Logical recovery executes SQL to re-create the database objects. If performance is an issue, look at the recipe on recovery performance.
  • Physical recovery: Physical recovery re-applies data changes at the block level so tends to be much faster than logical recovery. Physical recovery requires both a base backup and a set of archived WAL files.

There is a file named backup_label in the data directory of the base backup. This tells us to retrieve a .backup file from the archive that contains the start and stop WAL locations of the base backup. Recovery then starts to apply changes from the starting WAL location, and must proceed as far as the stop address for the backup to be valid.

After recovery completes, the recovery.conf file is renamed to recovery.done to prevent the server from re-entering recovery.

The server log records each WAL file restored from the archive, so you can check progress and rate of recovery. You can query the archive to find out the name of the latest archived WAL file to allow you to calculate how many files to go.

The restore_command should return 0 if a file has been restored and non-zero for failure cases. Recovery will proceed until there is no next WAL file, so there will eventually be an error recorded in the logs.
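To make that exit-status contract concrete, here is a hedged wrapper script; the script name and archive path are illustrative, not part of the recipe:

#!/bin/sh
# restore_wal.sh: referenced as restore_command = '/usr/local/bin/restore_wal.sh %f %p'
ARCHIVE=/backups/pg/servername/archive     # assumed archive location
[ -f "$ARCHIVE/$1" ] || exit 1             # non-zero exit: no such WAL file, recovery ends here
cp "$ARCHIVE/$1" "$2"                      # zero exit: file restored successfully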

If you have lost some of the WAL files, or they are damaged, then recovery will stop at that point. No further changes after that will be applied, and you will likely lose those changes; that would be the time to call your support vendor.

There's more...

You can start and stop the server once recovery has started without any problem. It will not interfere with the recovery.

You can connect to the database server while it is recovering and run queries, if that is useful. That is known as Hot Standby mode.

Recovery to a point in time

If your database suffers a problem at 15:22 and yet your backup was taken at 04:00, you're probably hoping there is a way to recover the changes made between those two times. What you need is known as "point-in-time recovery".

Regrettably, if you've made a backup with pg_dump at 04:00, then you won't be able to recover to any time other than 04:00. As a result, the term point-in-time recovery (PITR) has become synonymous with the physical backup and restore technique in PostgreSQL.

Getting ready

If you have a backup made with pg_dump, then give up all hope of using that as a starting point for a point in time recovery. It's a frequently asked question, but the answer is still "no"; the reason it gets asked is exactly why I'm pleading with you to plan your backups ahead of time.

First, you need to decide the point in time to which you would like to recover. If the answer is "as late as possible", then you don't need to do a PITR at all; just recover until the end of the logs.

How to do it...

How do you decide the point at which to recover? The point where we stop recovery is known as the "recovery target". The most straightforward way to define it is with a timestamp.

In the recovery.conf, you can add (or uncomment) a line that says the following:

recovery_target_time = '2010-06-01 16:59:14.27452+01'

or similar. Note that you need to be careful to specify the time zone of the target, so that it matches the time zone of the server that wrote the log. That might differ from the time zone of the current server, so check.

After that, you can check progress during a recovery by running queries in Hot Standby mode.
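For example, on a 9.0 server running in Hot Standby, a query such as the following shows how far replay has progressed (pg_last_xlog_replay_location() is the function available in 9.0):

-- run while connected to the recovering server
SELECT pg_last_xlog_replay_location();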

How it works...

Recovery works by applying individual WAL records. These correspond to individual block changes, so there are many WAL records to each transaction. The final part of any successful transaction is a commit WAL record, though there are abort records as well. Each transaction completion record has a timestamp on it that allows us to decide whether to stop at that point or not.

You can also define a recovery target using a transaction id (xid), though finding out which xid to use is somewhat difficult, and you may need to refer to external records if they exist.
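A hedged example of the equivalent recovery.conf line; the xid shown is purely illustrative:

recovery_target_xid = '1234567'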

The recovery target is specified in the recovery.conf and cannot be changed while the server is running. If you want to change the recovery target, you can shut down the server, edit the recovery.conf, and then restart the server. Be careful, though: if you change the recovery target after recovery has already passed that point, it can lead to errors. If you define a recovery_target_time that has already passed, then recovery will stop almost immediately, though this will be later than the correct stopping point. If you define a recovery_target_xid that has already passed, then recovery will just continue to the end of the logs. Restarting recovery from the beginning, using a fresh restore of the base backup, is always safe.

Once a server completes recovery, it will assign a new "timeline". Once a server is fully available, we can write new changes to the database. Those changes might differ from changes we made in a previous "future history" of the database. So we differentiate between alternate futures using different timelines. If we need to go back and run recovery again, we can create a new server history using the original or subsequent timelines. The best way to think about this is that it is exactly like a Sci-Fi novel—you can't change the past but you can return to an earlier time and take a different action instead. But you'll need to be careful not to confuse yourself.

There's more...

pg_dump cannot be used as a base backup for a PITR. The reason is that WAL replay contains the physical changes to data blocks, not logical changes based upon primary keys. If you reload a pg_dump, the data will likely go back into different data blocks, so the WAL changes wouldn't correctly reference the data.

WAL doesn't contain enough information to fully reconstruct the SQL that produced those changes. Later feature additions to PostgreSQL may add the required information to WAL.

See also

Planned for 9.1 is the ability to pause, resume, and stop recovery, and to set recovery targets dynamically while the server is up. This will allow you to use the Hot Standby facility to locate the correct stopping point more easily.

You can trick Hot Standby into stopping recovery, which may help.

Recovery of a dropped/damaged table

You may drop or even damage a table in some way. Tables could be damaged for physical reasons, such as disk corruption, or they could also be damaged by running poorly specified UPDATEs/DELETEs, which update too many rows or overwrite critical data.

It's a common request to recover from this situation from a backup.

How to do it...

The methods differ, depending upon the type of backup you have available. If you have multiple types of backup, you have a choice.

LOGICAL (from custom dump -F c):

If you've taken a logical backup using pg_dump into a custom file, then you can simply extract the table you want from the dumpfile like the following:

pg_restore -t mydroppedtable dumpfile | psql

or connect directly to the database using -d.

The preceding command tries to re-create the table and then load data into it. Note that the pg_restore -t option does not restore any of the indexes on the selected table. That means we need a slightly more complex procedure than it would first appear, and the procedure needs to vary depending upon whether we are repairing a damaged table or putting back a dropped table.

To repair a damaged table we want to replace the data in the table in a single transaction. There isn't a specific option to do this, so we need to do the following:

  • Dump the table to a script file as follows:
    pg_restore -t mydroppedtable dumpfile > mydroppedtable.sql
  • Create a script named restore_mydroppedtable.sql with the following code:
    BEGIN;
    TRUNCATE mydroppedtable;
    \i mydroppedtable.sql
    COMMIT;
  • Then, run it using the following:
    psql -f restore_mydroppedtable.sql
  • If you've dropped a table, then you need to:
    • Create a new database in which to work, named restorework, as follows:
      CREATE DATABASE restorework;
    • Restore the complete schema to the new database as follows:
      pg_restore --schema-only -d restorework dumpfile
  • Now, dump just the definitions of the dropped table into a new file, which will contain the CREATE TABLE, indexes, other constraints, and grants. Note that this database has no data in it, so specifying --schema-only is optional, as follows:
    pg_dump -t mydroppedtable --schema-only restorework > mydroppedtable.sql
  • Now, recreate the table on the main database as follows:
    psql -f mydroppedtable.sql
  • Now, reload just the data into database maindb as follows:
    pg_restore -t mydroppedtable --data-only -d maindb dumpfile

If you've got a very large table, then the fourth step can be a problem, because it builds the indexes as well. If you want, you can manually split the script into two pieces: one to run before the load ("pre-load") and one after the load ("post-load"). There are some ideas for that at the end of the recipe.

LOGICAL (from script dump):

The easy way to restore a single table from a script is as follows:

  • Find a suitable server, or create a new virtual server.
  • Reload the script in full, as follows:
    psql -f myscriptdump.sql
  • From the recovered database server, dump the table, its data, and all of its definitions into a new file as follows:
    pg_dump -t mydroppedtable -F c mydatabase > dumpfile
  • Now, recreate the table into the original server and database, using parallel tasks to speed things along. This can be executed remotely without needing to transfer dumpfile between systems.
    pg_restore -d mydatabase -j 2 dumpfile

The only way to extract a single table from a script dump without doing all of the preceding is to write a custom Perl script to read and extract just the parts of the file you want. That can be complicated, because you may need certain SET commands at the top of the file, the table, and data in the middle of the file, and the indexes and constraints on the table are near the end of the file. It's complex; the safe route is the one already mentioned.

PHYSICAL:

To recover a single table from a physical backup, we need to:

  • Find a suitable server, or create a new virtual server.
  • Recover the database server in full, as described in previous recipes on physical recovery, including all databases and all tables. You may wish to stop at a useful point in time.
  • From the recovered database server, dump the table, its data, and all of its definitions into a new file as follows:
    pg_dump -t mydroppedtable -F c mydatabase > dumpfile
  • Now, recreate the table into the original server and database using parallel tasks to speed things along. This can be executed remotely without needing to transfer dumpfile between systems as follows:
    pg_restore -d mydatabase -j 2 dumpfile

How it works...

At present, there's no way to restore a single table from a physical restore in just a single step.

See also

Splitting a pg_dump into multiple sections, "pre" and "post", was proposed by me for an earlier release of PostgreSQL, though I haven't had time to complete that yet. It's also possible to do that using an external utility; the best script I've seen for splitting a dump file into two pieces is available at the following website:

http://bucardo.org/wiki/split_postgres_dump


Recovery of a dropped/damaged tablespace

Recovering a complete tablespace is also sometimes required. It's actually a lot easier than recovering a single table.

How to do it...

The methods differ depending upon the type of backup you have available. If you have multiple types of backup, you have a choice.

LOGICAL (from custom dump -F c):

If you've taken a logical backup using pg_dump into a custom file, then you can simply extract the tables you want from the dumpfile, like the following:

pg_restore -t mytab1 -t mytab2 … dumpfile | psql

or connect directly to the database using -d.

Of course, you may have difficulty remembering exactly which tables were there. So, you may need to proceed as follows:

  • Find a suitable server, or create a new virtual server.
  • Restore the dump in full, using four parallel tasks, as follows:
    pg_restore -d mydatabase -j 4 dumpfile
  • Once the restore is complete, you can then dump the tables in the tablespace.
  • Now, recreate the tables into the original server and database, using parallel tasks to speed things along. This can be executed remotely without needing to transfer dumpfile between systems as follows:
    pg_restore -d mydatabase -j 2 dumpfile

LOGICAL (from script dump):

There's no easy way to extract the required tables from a script dump.

We need to follow this procedure:

  • Find a suitable server, or create a new virtual server.
  • Reload the script in full, as follows:
    psql -f myscriptdump.sql
  • Once the restore is complete, you can then dump the tables in the tablespace.
  • Now, recreate the tables into the original server and database, using parallel tasks to speed things along. This can be executed remotely without needing to transfer dumpfile between systems like the following:
    pg_restore -d mydatabase -j 2 dumpfile

PHYSICAL:

To recover a single tablespace from a physical backup, we need to:

  • Find a suitable server, or create a new virtual server.
  • Recover the database server in full, including all databases and all tables.
  • Once the restore is complete, you can then dump the tables in the tablespace.
  • Now, recreate the tables into the original server and database, using parallel tasks to speed things along. This can be executed remotely without needing to transfer dumpfile between systems like the following:
    pg_restore -d mydatabase -j 2 dumpfile

There's more...

When recovering from a custom backup file (-F c), you can also use the -l option to list the contents of the archive. You can then edit that file to remove, comment out, or reorder the actions. pg_restore can then reuse the list file as input, using the -L option.
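A short sketch of that list-file workflow, with illustrative file names:

pg_restore -l dumpfile > restore.list          # write out the table of contents
# edit restore.list: delete, comment out (lines starting with ;), or reorder entries
pg_restore -L restore.list -d mydatabase dumpfile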

Recovery of a dropped/damaged database

Recovering a complete database is also sometimes required. It's actually a lot easier than recovering a single table. Many users choose to place all their tables in a single database; in that case this recipe isn't relevant.

How to do it...

The methods differ depending upon the type of backup you have available. If you have multiple types of backup, you have a choice.

LOGICAL (from custom dump -F c):

Recreate the database into the original server using parallel tasks to speed things along. This can be executed remotely without needing to transfer dumpfile between systems like the following:

pg_restore -d myfreshdb -j 4 dumpfile

LOGICAL (from script dump created by pg_dump):

Recreate the database into the original server. This can be executed remotely without needing to transfer dumpfile between systems like the following:

psql -f myscriptdump.sql myfreshdb

LOGICAL (from script dump created by pg_dumpall):

There's no easy way to extract a single database from a script dump made by pg_dumpall.

We need to follow this procedure:

  • Find a suitable server, or create a new virtual server.
  • Reload the script in full, as follows:
    psql -f myscriptdump.sql
  • Once the restore is complete, you can then dump the database you need.
  • Now recreate the database as described for logical dumps earlier in this recipe.

PHYSICAL:

To recover a single database from a physical backup we need to:

  • Find a suitable server, or create a new virtual server.
  • Recover the database server in full, including all databases and all tables.
  • Once the restore is complete, you can then dump the database you need.
  • Now, recreate the database as described for logical dumps, earlier in this recipe.

Improving performance of backup/restore

Performance is often a concern in any medium or large database.

Backup performance is often a delicate issue, because the resource usage may need to be limited to within certain boundaries. There may also be a restriction on the maximum run-time for the backup, for example, if the backup runs each Sunday.

Again, restore performance may be more important than backup performance, even if backup is the more obvious concern.

Getting ready

If performance is a concern or is likely to be one, then you should read the recipe about planning first.

How to do it...

  • Physical backup: Improving performance of a physical backup can be done by taking the backup in parallel, that is, copying away the files using more than one task. The more tasks you use, the more the backup will impact the current system. When backing up, you can skip certain files. You won't need the following:
    • any files placed there by the DBA that shouldn't actually be there
    • any files in pg_xlog
    • any server log files in pg_log (even the current one)

    Remember, it's safer not to try to exclude files at all, because if you miss something critical you may suffer data loss. Also remember that your backup speed may be bottlenecked by your disks or your network. Some larger systems have dedicated networks in place purely for backups.

  • Logical backup: As explained in a previous recipe, if you want to back up all databases in a database server, then you should use multiple pg_dump tasks running in parallel. If you want to speed up the dump speed of a single pg_dump task, there really isn't an easy way of doing that right now. If you're using compression, look at the notes at the bottom of this recipe.
  • Physical restore: Just as with physical backup, it's possible for us to put everything back quicker if we use parallel restore.
  • Logical restore: Whether you use psql or pg_restore, you can speed up the program by setting maintenance_work_mem = 128MB or more, either in postgresql.conf or for the user that will run the restore. If neither of those is easily possible, you can specify the option using the PGOPTIONS environment variable, as follows:
    • export PGOPTIONS="-c maintenance_work_mem=128MB"

    This will then be used to set that option value for subsequent connections.

If you are running archiving or streaming replication, then transaction log writes may become a problem. Set wal_buffers between 2,000 and 10,000, and set checkpoint_segments to 1024, so the server has room to breathe.

If you aren't running archiving or streaming replication, or you can turn it off during the restore, then you'll be able to minimize the amount of transaction log written. In that case, you may wish to use the --single-transaction option, as that will also act to improve performance.
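Pulling those settings together, a hedged postgresql.conf sketch for the duration of a large restore might look like the following; the values simply follow the guidance above and should be reverted afterwards:

maintenance_work_mem = 128MB    # faster index builds during the restore
wal_buffers = 2000              # in 8kB pages; the low end of the 2,000-10,000 range
checkpoint_segments = 1024      # room to breathe between checkpoints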

If a pg_dump was made using -F c (custom format), then we can restore in parallel, as follows:

pg_restore -d mydatabase -j NumJobs dumpfile

You'll have to be careful about how you select what degree of parallelism to use. A good starting point is the number of CPUs. Be very careful that you don't overflow available memory when using parallel restore: each job will use up to maintenance_work_mem, so the whole restore could begin swapping when it hits larger indexes later in the restore. Plan out the size of shared_buffers and maintenance_work_mem according to the number of jobs specified.

Whatever you do, make sure you run ANALYZE afterwards on every object created. This will happen automatically if autovacuum is enabled. It often helps to disable autovacuum completely while running a large restore, so double-check that you have it switched on again following the restore. The consequences of skipping this step will be extremely poor performance when you start your application again, which can easily set everybody off in a panic.
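For example, a hedged one-liner to analyze everything in a freshly restored database (the database name is illustrative):

psql -d mydatabase -c 'ANALYZE;'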

How it works...

Physical backup and restore is completely up to you. Copy those files away as fast as you like, any way you like. Put them back the same or a different way.

Logical backup and restore involves moving data into and out of the database. That's typically going to be slower than physical backup and restore. Particularly with a restore, rebuilding indexes and constraints takes time, even when run in parallel. Plan ahead, measure the performance of your backup and restore techniques, so you have a chance when you need your database back in a hurry.

There's more...

Compressing backups is often considered a way to reduce the size of the backup for storage. Even mild compression can use large amounts of CPU. In some cases, this might offset network transfer costs, so there isn't any hard rule as to whether compression is always good.

Compression for WAL files from physical backups was discussed earlier: pg_lesslog, available at http://pgfoundry.org/frs/?group_id=1000310. Physical backups can be compressed in various ways, depending upon the exact backup mechanism used. By default, the custom dump format for logical backups will be compressed. Even when compressed, the objects can be accessed individually if required.

Using --compress with script dumps will result in a compressed text file, just as if you had dumped the file and then compressed it. Access to individual tables is not possible.

PostgreSQL utilities do have a compress/decompress option, though this isn't always that efficient. Put another way:

pg_dump --compress=9

will typically be slower than:

pg_dump | gzip

Of course, feel free to use your favorite fast compression tool instead, which is likely to vary, depending upon the type of data in use.

Using multiple processes is known as pipeline parallelism. If you're using physical backup, then you can copy the data away in multiple streams, which also allows you to take advantage of parallel compression/decompression.

See also

If taking a backup is an expensive operation, then one way around that is to take the backup from a replica instead, which offloads the cost of the backup operation from the master.

Incremental/Differential backup and restore

If you have performance problems with backup of a large PostgreSQL database, then you may ask about incremental or differential backup.

An incremental backup is a backup of all files that have changed since the last backup, whether that was a full backup or another incremental one. In order to restore, you must restore the full backup and then each incremental backup in sequence.

A differential backup is a backup of all changes since the last full backup. Again, restoration requires you to restore the full backup and then apply only the most recent differential backup.

How to do it...

To perform a differential physical backup, you can use rsync to compare the existing files against the previous full backup, and then overwrite just the changed data blocks (a hedged rsync sketch follows the table below). It's a bad plan to overwrite your last backup, so keep two or more copies. An example backup schedule would be as follows:

Day of week     Backup Set 1                Backup Set 2
Sunday          New full backup to Set 1    New full backup to Set 2
Monday          Differential to Set 1       Differential to Set 2
Tuesday         Differential to Set 1       Differential to Set 2
Wednesday       Differential to Set 1       Differential to Set 2
Thursday        Differential to Set 1       Differential to Set 2
Friday          Differential to Set 1       Differential to Set 2
Saturday        Differential to Set 1       Differential to Set 2

 

You should keep at least two full backup sets.
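The rsync step referred to above might look like the following hedged sketch; the paths are illustrative, and you should combine it with the usual base backup procedure so the copy is consistent:

# Refresh backup set 1 with only the data that has changed since the last copy.
# --inplace updates changed portions of existing files rather than rewriting them.
rsync -av --inplace /var/lib/pgsql/data/ /backups/pg/set1/data/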

Many large databases have tables that are insert-only. In that case, it's easy to store away parts of those tables. If the tables are partitioned by insertion/creation date or a similar field, it makes doing that much simpler. Either way, you're still going to need a good way of recording what data is where in your backup.

In the general case, there's no easy way to run a differential backup using pg_dump.

How it works...

PostgreSQL doesn't explicitly keep track of the last changed date or similar information for a file or table. PostgreSQL tables are held as files, so you should be able to rely on the modification time (mtime) of the files on the filesystem. If, for some reason, you don't trust that, or mtime tracking has been disabled on your filesystem, then incremental backup is not for you.

pg_dump doesn't allow a WHERE clause to be specified, so even if you add your own columns to track a last_changed_date, you'll still need to extract the changed rows manually somehow.
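As a hedged illustration of doing that manually (the table and column names are made up), you could extract just the changed rows from within psql using COPY on a query:

-- export rows changed since the last backup to a CSV file
\copy (SELECT * FROM mytable WHERE last_changed_date > '2010-06-01') TO 'mytable_incr.csv' WITH CSV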

There's more...

http://en.wikipedia.org/wiki/Backup_rotation_scheme gives further useful information.

While thinking about incremental backup, you should note that replication techniques work by continually applying changes onto a full backup. This could be considered a technique for an incremental updated backup, also known as an "incremental forever" backup strategy. The changes are applied ahead of time, so that you can restore easily and quickly. You should still take a backup, but you can take the backup from the replication standby instead.

It's possible to write a utility that makes a differential backup of data blocks. You can read each data block and check the block's Log Sequence Number (LSN) to see if it has changed since a previous copy.

pg_rman is an interesting project, and you can get more information at the following website:

http://code.google.com/p/pg-rman/

pg_rman reads changed data blocks and compresses them, using detailed knowledge of the internals of PostgreSQL data blocks. Any bugs that exist there could cause data loss in your backups. Issues aren't resolved by the main PostgreSQL project, so I personally wouldn't advise using this utility without a formal support contract. Various companies support this; ask them.

pg_rman 1.1.2 will certainly produce smaller backups, though creating those backups is not yet a parallel process. As a result, it can be much faster to use a full or incremental backup with parallel streams.

Summary

In this article we took a look at the following recipes:

  • Recovery of all databases
  • Recovery to a point in time
  • Recovery of a dropped/damaged table
  • Recovery of a dropped/damaged database
  • Recovery of a dropped/damaged tablespace
  • Improving performance of backup/recovery
  • Incremental/Differential backup and restore


About the Author


Simon Riggs

Simon Riggs is one of the few Major Developers and Committers on the PostgreSQL database project, and is also CTO of 2ndQuadrant, providing 24x7 support and services to PostgreSQL users worldwide. Simon has worked with enterprise-class database applications for more than 20 years, with prior certifications on Oracle, Teradata and DB2. Simon is responsible for much of the database recovery and replication code in PostgreSQL, and designed or wrote many of the latest performance enhancements. He uses his operational experience to contribute to many aspects of both internal design and usability.
