PostgreSQL Developer's Guide

By Ibrar Ahmed, Asif Fayyaz, and Amjad Shahzad

About this book

PostgreSQL is an enterprise-level database that competes with the offerings of proprietary database vendors, owing to its remarkable feature set and reliability, without the expensive licensing overhead.

This book is a comprehensive and pragmatic guide to developing databases in PostgreSQL. Beginning with a refresher of basic SQL skills, you will gradually be exposed to advanced concepts, such as learning how to program in native PostgreSQL procedural languages, implementing triggers, custom functions, and views. You will learn database optimization techniques such as query optimization and indexing while exploring extensive code examples. Lastly, you will explore foreign data wrappers, implementing extensibility, and improving maintainability.

Publication date: February 2015
Publisher: Packt
Pages: 270
ISBN: 9781783989027

 

Chapter 1. Getting Started with PostgreSQL

Before starting our journey with SQL, allow me to quickly go through the history of PostgreSQL. It all started at the University of California, Berkeley, in the 1970s with the development of the relational database Ingres. Later, in the mid-1980s, a team of core developers led by Michael Stonebraker at Berkeley began a post-Ingres project. The team added core object-oriented features to the relational model and named the new system POSTGRES.

This team worked on POSTGRES for around eight years. During this time, they introduced object-oriented concepts, procedures, rules, indexes, and types. In 1994, Andrew Yu and Jolly Chen replaced the POSTQUEL query language with SQL, and after this change, in 1995, the project was renamed Postgres95. In 1996, Postgres95 entered the open source world and was renamed PostgreSQL. Over the following releases, with the devoted work of its developers, new features such as Multi-Version Concurrency Control (MVCC) and additional built-in types were added, and the code achieved consistency and uniformity.

PostgreSQL is widely considered to be one of the most stable database servers available today, with multiple features that include:

  • A wide range of built-in types

  • MVCC

  • New SQL enhancements, including foreign keys, primary keys, and constraints

  • Open source code, maintained by a team of developers

  • Trigger and procedure support with multiple procedural languages

  • Extensibility, in the sense of adding new data types, functions, and procedural languages

Since the early releases of PostgreSQL (from version 6.0 onward), many changes have been made, with each new major version adding new and more advanced features. The current version is PostgreSQL 9.4, which is available from several sources and in various binary formats.

Note

We will be using PostgreSQL 9.4 throughout this book. So, let's download and install it so that we can start our remarkable journey in this book.

In this chapter, you will learn about the following topics:

  • Writing queries using psql

  • Creating, altering, and truncating a table

  • Inserting, updating, and deleting data

  • PostgreSQL-supported data types

  • PostgreSQL-supported operators and usage

  • Constraints and privileges in PostgreSQL

 

Writing queries using psql


Before proceeding, allow me to explain to you that throughout this book, we will use a warehouse database called warehouse_db. In this section, I will show you how you can create such a database, providing you with sample code for assistance. You will need to do the following:

  1. We assume here that you have successfully installed PostgreSQL without any issues. Now, you need to connect to the default database created by the PostgreSQL installer. To do this, navigate to the default installation path, /opt/PostgreSQL/9.4/bin, from your command line and execute the following command, which will prompt for the password of the postgres user that you provided during installation:

    /opt/PostgreSQL/9.4/bin$./psql -U postgres
    Password for user postgres:
    
  2. Once the password is accepted, you are logged in to the default database as the user postgres, and you will see the following on your command line:

    psql (9.4beta1)
    Type "help" for help
    postgres=#
    
  3. You can then create a new database called warehouse_db using the following statement in the terminal:

    postgres=# CREATE DATABASE warehouse_db;
  4. You can then connect with the warehouse_db database using the following command:

    postgres=# \c warehouse_db
  5. You are now connected to the warehouse_db database as the user postgres, and you will have the following warehouse_db shell:

    warehouse_db=#

Let's summarize what we have achieved so far. We connected to the default postgres database and successfully created the warehouse_db database. It's now time to actually write queries using psql and perform some Data Definition Language (DDL) and Data Manipulation Language (DML) operations, which we will cover in the following sections.

In PostgreSQL, we can have multiple databases. Inside the databases, we can have multiple extensions and schemas. Inside each schema, we can have database objects such as tables, views, sequences, procedures, and functions.

We are first going to create a schema named record and then we will create some tables in this schema. To create a schema named record in the warehouse_db database, use the following statement:

warehouse_db=# CREATE SCHEMA record;
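You can place a table in this schema by qualifying the table name with the schema name. Here is a minimal sketch; the item_tbl table and its columns are purely illustrative and are not used elsewhere in this chapter:

warehouse_db=# CREATE TABLE record.item_tbl
  (
  item_id INTEGER PRIMARY KEY,
  item_name TEXT
  );

You can list the schemas in the current database with the \dn meta-command. Note that the tables created in the rest of this chapter use unqualified names and therefore live in the default public schema.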
 

Creating, altering, and truncating a table


In this section, we will learn about creating a table, altering the table definition, and truncating the table.

Creating tables

Now, let's perform some DDL operations starting with creating tables. To create a table named warehouse_tbl, execute the following statements:

warehouse_db=# CREATE TABLE warehouse_tbl
  (
  warehouse_id INTEGER NOT NULL,
  warehouse_name TEXT NOT NULL,
  year_created INTEGER,
  street_address TEXT,
  city CHARACTER VARYING(100),
  state CHARACTER VARYING(2),
  zip CHARACTER VARYING(10),
  CONSTRAINT "PRIM_KEY" PRIMARY KEY (warehouse_id)
  );

The preceding statements created the table warehouse_tbl that has the primary key warehouse_id. Now, as you are familiar with the table creation syntax, let's create a sequence and use that in a table. You can create the hist_id_seq sequence using the following statement:

warehouse_db=# CREATE SEQUENCE hist_id_seq;

The preceding CREATE SEQUENCE command creates a new sequence number generator. This involves creating and initializing a new special single-row table with the name hist_id_seq. The user issuing the command will own the generator. You can now create the table that implements the hist_id_seq sequence using the following statement:

warehouse_db=# CREATE TABLE history
  (
  history_id INTEGER NOT NULL DEFAULT nextval('hist_id_seq'),
  date TIMESTAMP WITHOUT TIME ZONE,
  amount INTEGER,
  data TEXT,
  customer_id INTEGER,
  warehouse_id INTEGER,
  CONSTRAINT "PRM_KEY" PRIMARY KEY (history_id),
  CONSTRAINT "FORN_KEY"
  FOREIGN KEY (warehouse_id)
  REFERENCES warehouse_tbl(warehouse_id)
  );

The preceding query will create a history table in the warehouse_db database, and the history_id column uses the sequence as the default input value.
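To see the sequence in action, here is a quick sketch of an insert that omits history_id so that nextval('hist_id_seq') supplies it; the sample values are illustrative:

warehouse_db=# INSERT INTO history (date, amount, data, customer_id, warehouse_id)
  VALUES (now(), 100, 'sample entry', 1, NULL);
warehouse_db=# SELECT history_id, amount, data FROM history;

The SELECT should show a history_id of 1, the first value generated by hist_id_seq; each subsequent insert receives the next value in the sequence.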

In this section, we successfully learned how to create a table and also learned how to use a sequence inside the table creation syntax.

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Altering tables

Now that we have learned how to create tables, we can practice some ALTER TABLE commands in this section. With the ALTER TABLE command, we can add, remove, or rename table columns; the add and drop operations are shown next, and a rename sketch follows them.

Firstly, with the help of the following example, we will be able to add the phone_no column in the previously created table warehouse_tbl:

warehouse_db=# ALTER TABLE warehouse_tbl
  ADD COLUMN phone_no INTEGER;

We can then verify that the column has been added by describing the table as follows:

warehouse_db=# \d warehouse_tbl
            Table "public.warehouse_tbl"             
     Column     |          Type          | Modifiers 
----------------+------------------------+-----------
warehouse_id    | integer                | not null
warehouse_name  | text                   | not null
year_created    | integer                |
street_address  | text                   |
city            | character varying(100) |
state           | character varying(2)   |
zip             | character varying(10)  |
phone_no        | integer                |
Indexes:
  "PRIM_KEY" PRIMARY KEY, btree (warehouse_id)
Referenced by:
  TABLE "history" CONSTRAINT "FORN_KEY"FOREIGN KEY (warehouse_id) REFERENCES warehouse_tbl(warehouse_id) TABLE "history" CONSTRAINT "FORN_KEY" FOREIGN KEY (warehouse_id) REFERENCES warehouse_tbl(warehouse_id)

To drop a column from a table, we can use the following statement:

warehouse_db=# ALTER TABLE warehouse_tbl
  DROP COLUMN phone_no;

We can then finally verify that the column has been removed from the table by describing the table again as follows:

warehouse_db=# \d warehouse_tbl
            Table "public.warehouse_tbl"             
     Column     |          Type          | Modifiers 
----------------+------------------------+-----------
warehouse_id    | integer                | not null
warehouse_name  | text                   | not null
year_created    | integer                |
street_address  | text                   |
city            | character varying(100) |
state           | character varying(2)   |
zip             | character varying(10)  |
Indexes:
  "PRIM_KEY" PRIMARY KEY, btree (warehouse_id)
Referenced by:
  TABLE "history" CONSTRAINT "FORN_KEY" FOREIGN KEY (warehouse_id) REFERENCES warehouse_tbl(warehouse_id) TABLE "history" CONSTRAINT "FORN_KEY" FOREIGN KEY (warehouse_id) REFERENCES warehouse_tbl(warehouse_id)

Truncating tables

The TRUNCATE command is used to remove all rows from a table without providing any criteria, whereas with the DELETE command, the user typically provides the criteria using a WHERE clause (a DELETE without a WHERE clause also removes all rows, but it does so row by row and is slower than TRUNCATE). To truncate data from the table, we can use the following statement:

warehouse_db=# TRUNCATE TABLE warehouse_tbl;

We can then verify that the warehouse_tbl table has been truncated by performing a SELECT COUNT(*) query on it using the following statement:

warehouse_db=# SELECT COUNT(*) FROM warehouse_tbl;
 count 
-------
     0 
(1 row)
 

Inserting, updating, and deleting data from tables


In this section, we will play around with data and learn how to insert, update, and delete data from a table.

Inserting data

So far, we have learned how to create and alter a table. Now it's time to play around with some data. Let's start by inserting records in the warehouse_tbl table using the following command snippet:

warehouse_db=# INSERT INTO warehouse_tbl
  (
  warehouse_id,
  warehouse_name,
  year_created,
  street_address,
  city,
  state,
  zip
  )
VALUES
  (
  1,
  'Mark Corp',
  2009,
  '207-F Main Service Road East',
  'New London',
  'CT',
  '4321'
  );

We can then verify that the record has been inserted by performing a SELECT query on the warehouse_tbl table as follows:

warehouse_db=# SELECT warehouse_id, warehouse_name, street_address FROM warehouse_tbl;
 warehouse_id  | warehouse_name |        street_address         
---------------+----------------+-------------------------------
             1 | Mark Corp      | 207-F Main Service Road  East 
(1 row)

Updating data

Once we have inserted data in our table, we should know how to update it. This can be done using the following statement:

warehouse_db=# UPDATE warehouse_tbl
  SET year_created=2010
  WHERE year_created=2009;

To verify that a record is updated, let's perform a SELECT query on the warehouse_tbl table as follows:

warehouse_db=# SELECT warehouse_id, year_created FROM warehouse_tbl;
 warehouse_id | year_created 
--------------+--------------
            1 |         2010 
(1 row)

Deleting data

To delete data from a table, we can use the DELETE command. Let's add a few records to the table and then later on delete data on the basis of certain conditions:

warehouse_db=# INSERT INTO warehouse_tbl
  (
  warehouse_id,
  warehouse_name,
  year_created,
  street_address,
  city,
  state,
  zip
  )
VALUES
  (
  2,
  'Bill & Co',
  2014,
  'Lilly Road',
  'New London',
  'CT',
  '4321'
  );
warehouse_db=# INSERT INTO warehouse_tbl
  (
  warehouse_id,
  warehouse_name,
  year_created,
  street_address,
  city,
  state,
  zip
  )
VALUES
  (
  3,
  'West point',
  2013,
  'Down Town',
  'New London',
  'CT',
  '4321'
  );

We can then delete data from the warehouse_tbl table, where warehouse_name is Bill & Co, by executing the following statement:

warehouse_db=# DELETE FROM warehouse_tbl
  WHERE warehouse_name='Bill & Co';

To verify that a record has been deleted, we will execute the following SELECT query:

warehouse_db=# SELECT warehouse_id, warehouse_name
  FROM warehouse_tbl
  WHERE warehouse_name='Bill & Co';
 warehouse_id | warehouse_name 
--------------+----------------
(0 rows)

Note

The DELETE command is used to remove rows from a table, whereas the DROP command is used to drop a complete table. The TRUNCATE command is used to empty the whole table.

 

PostgreSQL's supported data types


PostgreSQL supports a very wide range of data types. Apart from the built-in data types, a user can add new data types to PostgreSQL using the CREATE TYPE command, as sketched below. The following list shows a number of built-in data types that are supported by PostgreSQL.
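For example, here is a minimal sketch of a user-defined enumerated type; the type name warehouse_status and its labels are illustrative and are not used elsewhere in this chapter:

warehouse_db=# CREATE TYPE warehouse_status AS ENUM
  ('open', 'closed', 'under_maintenance');

Once created, warehouse_status can be used as a column type just like any built-in type.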

Let's go through all the supported data types one by one:

  • bigint: This is a signed 8-byte integer, and it is represented as int8.

  • bigserial: This is an auto incrementing 8-byte integer represented by the serial8 keyword.

  • bit[(n)]: This is a fixed length bit string and is represented by the bit keyword.

  • bit varying [(n)]: This is a variable length bit string and is represented by the varbit keyword.

  • boolean: This is a logical Boolean expression and has the true or false value. It is represented by the bool keyword.

  • box: This is a rectangular box on a plane and is represented by the box keyword.

  • bytea: This is binary data stored in the form of a byte array and is represented by the bytea keyword.

  • character [(n)]: This is a fixed length character string, and it is represented by the char[(n)] keyword, where n represents the length of the string.

  • character varying[(n)]: This is a variable length character string, which is represented by the varchar[(n)] keyword.

  • cidr: This is used to store the IPv4 and IPv6 network addresses. It is represented by the cidr keyword.

  • circle: This is a circle on a plane, and it is represented by the circle keyword.

  • date: This is a calendar date, which includes the year, month, and day.

  • double precision: This is an 8-byte floating-point number. It is represented by the float8 keyword.

  • inet: This is used to store the IPv4 or IPv6 host addresses. The essential difference between the inet and cidr data types is that inet accepts values with nonzero bits to the right of the net mask, whereas cidr does not.

  • integer: This is a signed 4-byte integer. It is represented by the int and int4 keywords.

  • interval [fields] [(p)]: This is used to represent the time span.

  • json: This is used to store the textual JSON data. Such data can also be stored as text, but the JSON data types have the advantage of enforcing that each stored value is valid according to the JSON rules. There are also assorted JSON-specific functions and operators available for data stored in these data types.

  • jsonb: This is used to store binary JSON data.

  • line: This is used to represent the infinite line in a plane.

  • lseg: This is used to store data of a line segment on a plane.

  • macaddr: This is used to store MAC addresses.

  • money: This data type is used to store the currency amount.

  • numeric[(p,s)]: This is used to store a numeric value of selectable precision. It is represented by the decimal[(p,s)] keyword. The s parameter represents the scale: the scale of a numeric is the count of decimal digits in the fractional part, to the right of the decimal point. The p parameter represents the precision: the precision of a numeric is the total count of significant digits in the whole number, that is, the number of digits on both sides of the decimal point. So, the number 23.5141 has a precision of 6 and a scale of 4 (see the sketch after this list).

  • path: This is used to store a geometric path on a plane.

  • pg_lsn: This is used to store a PostgreSQL log sequence number.

  • point: This is used to store a geometric point on a plane.

  • polygon: This is used to store a closed geometric path on a plane.

  • real: This is used to store a single precision, floating-point number of 4 bytes. It is represented by the float4 keyword.

  • smallint: This is used to store a signed 2-byte integer. It is represented by the int2 keyword.

  • smallserial: This is used to store an auto incrementing 2-byte integer. It is represented by the serial2 keyword.

  • serial: This is used to store an auto incrementing 4-byte integer. It is represented by the serial4 keyword.

  • text: This data type is used to store a variable length character string.

  • time[(p)][without time zone]: This is used to store the time of the day without a time zone.

  • time[(p)] with time zone: This is used to store the time of the day with a time zone.

  • timestamp [(p)][without time zone]: This is used to store the date and time without a time zone.

  • timestamp [(p)] with time zone: This is used to store date and time with a time zone.

  • time, timestamp, and interval: These data types accept an optional precision value p that specifies the number of fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The allowed range of p is from 0 to 6 for the timestamp and interval types.

  • tsquery: This is used to store a text search query.

  • tsvector: This is used to store a text search document.

  • txid_snapshot: This is used to store the user-level transaction ID snapshots.

  • uuid: This is used to store universally unique identifiers.

  • xml: This data type is used to store XML data.
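As a quick illustration of precision and scale for the numeric type described in the preceding list, consider the following sketch; the measurements table is illustrative:

warehouse_db=# CREATE TABLE measurements
  (
  reading NUMERIC(6,4)
  );
warehouse_db=# INSERT INTO measurements VALUES (23.5141);

The value 23.5141 fits because NUMERIC(6,4) allows six significant digits, four of them after the decimal point, leaving two for the integer part. A value such as 235.141 would be rejected with a numeric field overflow because it needs three digits before the decimal point.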

 

PostgreSQL's operators and usage


PostgreSQL provides a rich set of operators; in this section, we will cover the most commonly used ones in detail.

Logical operators

Logical operators are available in PostgreSQL, and these are:

  • AND

  • OR

  • NOT

In PostgreSQL, the values true, false, and null form a three-valued logic system. For more detail, see the following truth table, which shows how two values a and b combine under the AND and OR logical operators:

 a     | b     | a AND b | a OR b
-------+-------+---------+--------
 TRUE  | TRUE  | TRUE    | TRUE
 TRUE  | FALSE | FALSE   | TRUE
 TRUE  | NULL  | NULL    | TRUE
 FALSE | FALSE | FALSE   | FALSE
 FALSE | NULL  | FALSE   | NULL
 NULL  | NULL  | NULL    | NULL

The following truth table shows how a value a behaves with the NOT logical operator:

 a     | NOT a
-------+-------
 TRUE  | FALSE
 FALSE | TRUE
 NULL  | NULL
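A quick way to see three-valued logic in action is to evaluate a few expressions directly in psql; the following combinations should return null, false, and true respectively, matching the truth tables above:

warehouse_db=# SELECT (TRUE AND NULL) AS t_and_null,
  (FALSE AND NULL) AS f_and_null,
  (TRUE OR NULL) AS t_or_null;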

Comparison operators

In PostgreSQL, we have the comparison operators shown in the following table:

 Operator | Description
----------+--------------------------
 <        | Less than
 >        | Greater than
 <=       | Less than or equal to
 >=       | Greater than or equal to
 =        | Equal
 <> or != | Not equal
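Comparison operators are most often used in a WHERE clause. As a quick sketch, assuming the warehouse_tbl rows inserted earlier in this chapter are still present:

warehouse_db=# SELECT warehouse_id, warehouse_name, year_created
  FROM warehouse_tbl
  WHERE year_created >= 2010 AND warehouse_id <> 2;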

Mathematical operators

PostgreSQL also provides the mathematical operators shown in the following table:

 Operator | Description
----------+-----------------------------
 +        | Addition
 -        | Subtraction
 *        | Multiplication
 /        | Division
 %        | Modulo (remainder)
 ^        | Exponentiation
 |/       | Square root
 ||/      | Cube root
 !        | Factorial
 !!       | Factorial (prefix operator)
 @        | Absolute value
 &        | Bitwise AND
 |        | Bitwise OR
 #        | Bitwise XOR
 ~        | Bitwise NOT
 <<       | Bitwise shift left
 >>       | Bitwise shift right
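A few of these operators in a quick psql sketch; no table is needed because the expressions are evaluated directly:

warehouse_db=# SELECT 7 + 3 AS sum,
  7 % 3 AS remainder,
  2 ^ 10 AS power,
  |/ 25.0 AS square_root,
  @ -7 AS absolute_value;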

Apart from the logical, comparison, and mathematical operators, PostgreSQL also has operators for strings, binary strings, bit strings, date/time, geometric, network address, and text search. Details of these operators are beyond the scope of this book and can be studied in more detail in the PostgreSQL documentation available at http://www.postgresql.org/docs/9.4/static/functions-string.html.

 

Constraints in PostgreSQL


PostgreSQL offers support for constraints at both the column and the table level. Constraints are used to enforce rules on data insertion in tables. Only data that complies with the constraint rules is allowed to be added to the table. The constraints present in PostgreSQL are:

  • Unique constraints

  • Not-null constraints

  • Exclusion constraints

  • Primary key constraints

  • Foreign key constraints

  • Check constraints

We will explain all of these constraints one by one with supportive examples. Let's start with the unique constraints.

Unique constraints

A unique constraint ensures, at the time of insertion, that the data in a column (or a group of columns) is unique with respect to all the rows already present in the table. Let's create a few tables using unique constraints in the following manner:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER UNIQUE,
  tool_name TEXT,
  tool_class NUMERIC
  );

Alternatively, the same constraint can be declared at the end of all columns. For instance, this can look like the following:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER,
  tool_name TEXT,
  tool_class NUMERIC,
  UNIQUE (tool_id)
  );

When defining a unique constraint for a group of columns, all the columns must be listed, separated by commas. Consider the following example:

warehouse_db=# CREATE TABLE cards
  (
  card_id INTEGER,
  owner_number INTEGER,
  owner_name TEXT,
  UNIQUE (card_id, owner_number)
  );

The preceding query will create the cards table with a unique constraint on the card_id and owner_number columns. Note that the unique constraint does not apply to null values. This means that in the cards table, two rows can contain identical data if both have card_id and owner_number set to null, because null values are never considered equal to each other.
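A minimal sketch of that behavior; the inserted values are illustrative:

warehouse_db=# INSERT INTO cards VALUES (NULL, NULL, 'John');
warehouse_db=# INSERT INTO cards VALUES (NULL, NULL, 'John');

Both inserts succeed because null values never count as equal, whereas inserting two rows that share the same non-null card_id and owner_number would be rejected with a unique violation.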

Not-null constraints

A not-null constraint ensures that a column cannot contain a null value. Drop the previously created tools table and create it again with this constraint, as in the following example:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER NOT NULL,
  tool_name TEXT,
  tool_class NUMERIC
  );

The preceding query will create a table with a not-null constraint on the tool_id column. We can apply the not-null constraint to as many columns as we need. Consider the following example:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER NOT NULL,
  tool_name TEXT NOT NULL,
  tool_class NUMERIC
  );

The preceding query will create the tools table with not-null constraints on tool_id and tool_name.

Exclusion constraints

An exclusion constraint guarantees that if any two rows are compared on the specified columns or expressions using the specified operators, at least one of these operator comparisons will return false or null. Consider the following example, in which the conditions for a conflicting tuple are ANDed together:

warehouse_db=# CREATE TABLE movies
  (
  Title TEXT,
  Copies INTEGER
  );

Then, add the exclusion constraint using the ALTER TABLE command:

warehouse_db=# ALTER TABLE movies
  ADD EXCLUDE (title WITH =, copies WITH =);

The preceding ALTER TABLE command creates the exclusion constraint. The conditions for a conflicting tuple are ANDed together, so for two rows to conflict, the following must hold:

record1.title = record2.title AND record1.copies = record2.copies.

Primary key constraints

PostgreSQL supports primary key constraints, which are effectively a combination of a not-null constraint and a unique constraint. This means that for a column to satisfy a primary key constraint, it must be both unique and not null. Let's create a few tables using primary key constraints:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER PRIMARY KEY,
  tool_name TEXT,
  tool_class NUMERIC
  );

You can also create a primary key constraint based on two columns. Consider the following example:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER,
  tool_name TEXT,
  tool_class NUMERIC,
  PRIMARY KEY (tool_id, tool_name)
  );

Foreign key constraints

A foreign key constraint states that the value in a column must match a value present in a row of another table. This maintains referential integrity between two interlinked tables. Consider the following example to understand the concept of foreign key constraints: we will create two tables and use a column of the first table as a foreign key in the second table:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER PRIMARY KEY,
  tool_name TEXT,
  tool_class NUMERIC
  );

This creates a table with a primary key constraint. Next, create a table that references it:

warehouse_db=# CREATE TABLE tools_list
  (
  list_id INTEGER PRIMARY KEY,
  tool_id INTEGER REFERENCES tools (tool_id),
  list_name TEXT
  );

In the preceding query, we created a table named tools_list that has a foreign key on its tool_id column referencing the tool_id column of the tools table.
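A quick sketch of the referential check in action, assuming the tools and tools_list tables defined just above; the values are illustrative:

warehouse_db=# INSERT INTO tools VALUES (1, 'hammer', 1);
warehouse_db=# INSERT INTO tools_list VALUES (1, 1, 'basic kit');
warehouse_db=# INSERT INTO tools_list VALUES (2, 99, 'ghost kit');

The first two inserts succeed, while the third is rejected with a foreign key violation because no row in tools has tool_id equal to 99.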

Tip

A table can have multiple parent tables, which means that we can have more than one foreign key in a single table.

Check constraints

A check constraint lets you specify a Boolean expression that the values in a column must satisfy. Let's understand this with some examples:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER PRIMARY KEY,
  tool_name TEXT,
  tool_class NUMERIC,
  tool_quantity NUMERIC CHECK (tool_quantity > 0)
  );

In the preceding query, we have created a table with a check constraint on tool_quantity to make sure that it is always greater than 0.

You can also give your constraints a more user-friendly name. See the following example, in which we name the constraint positive_quantity:

warehouse_db=# CREATE TABLE tools
  (
  tool_id INTEGER PRIMARY KEY,
  tool_name TEXT,
  tool_class NUMERIC,
  tool_quantity NUMERIC
  CONSTRAINT positive_quantity CHECK (tool_quantity>0)
  );
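A quick sketch of this constraint doing its job; the values are illustrative:

warehouse_db=# INSERT INTO tools VALUES (1, 'hammer', 1, 10);
warehouse_db=# INSERT INTO tools VALUES (2, 'saw', 1, 0);

The first insert succeeds, while the second is rejected because positive_quantity requires tool_quantity to be greater than 0.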
 

Privileges in PostgreSQL


In PostgreSQL, multiple privileges exist for every object that is created. By default, the owner (or a superuser) of an object has all the privileges on it. PostgreSQL provides the following types of privileges:

  • SELECT

  • INSERT

  • UPDATE

  • DELETE

  • TRUNCATE

  • REFERENCES

  • TRIGGER

  • CREATE

  • CONNECT

  • TEMPORARY

  • EXECUTE

  • USAGE

Different privileges are associated with different objects. For instance, the EXECUTE privilege applies to functions. The GRANT command is used to grant a privilege to a user, and the REVOKE command is used to take privileges back.
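Here is a minimal sketch of granting and revoking table privileges; the role name app_user is hypothetical:

warehouse_db=# CREATE ROLE app_user LOGIN;
warehouse_db=# GRANT SELECT, INSERT ON warehouse_tbl TO app_user;
warehouse_db=# REVOKE INSERT ON warehouse_tbl FROM app_user;

After these statements, app_user can read from warehouse_tbl but can no longer insert into it.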

 

Summary


In this chapter, we learned how to use the SQL language for a collection of everyday Database Management System (DBMS) tasks in an easy, practical way. We built a complete warehouse_db database using DDL (CREATE, ALTER, and TRUNCATE) and DML (INSERT, UPDATE, and DELETE) operations, explored the supported data types, and applied constraints. The knowledge gained in this chapter will allow you to easily manipulate data across different tables and will help you successfully design a database.

In the next chapter, you will learn about the procedural languages in more detail.

About the Authors

  • Ibrar Ahmed

    Ibrar Ahmed is a senior database architect at an enterprise PostgreSQL company. He started his development career in 1999. He has contributed to the PostgreSQL community, the company's PostgreSQL fork, and other open source communities such as Google Chrome. He also has experience in MySQL, Oracle, MongoDB, and Hadoop (Hive, HBase, Spark). He is a major contributor to integrating other databases with PostgreSQL. He also authored PostgreSQL Developer's Guide. He holds a master's degree in computer science (1999) and an MS in computer engineering (2015), and is finalizing his PhD.

  • Asif Fayyaz

    Asif Fayyaz is an open source technology lover and practitioner. He is currently working as a senior software quality assurance engineer at a leading PostgreSQL based company. He has been learning and using PostgreSQL for the last 7 years. His areas of interest are exploring PostgreSQL replication solutions and software test automation solutions for desktop, web, and mobile applications. His future endeavors include joining a sports channel for set-top box automation in Python.

    Apart from his professional activities, he, along with his dedicated friends, is keen to find ways that can make life easier for those who are facing the worst living conditions.

    His other passions include, but are not limited to, traveling to different places, trying different cuisines, and reading books if somehow permitted by his loving family.

  • Amjad Shahzad

    Amjad Shahzad has been working in the open source software industry for the last 10 years. He is currently working as a senior quality assurance engineer at a leading PostgreSQL-based company, which is the only worldwide provider of enterprise-class products and services based on PostgreSQL. Amjad's core expertise lies in the areas of pg_upgrade, slony and streaming replication, Cloud database, and database partitioning. His future endeavors include exploring PostgreSQL replication solutions.

    Apart from his professional activities, he is also involved in doing social activities that involve helping people stand on their feet. In his free time, he likes to explore nature by doing outdoor activities, including hiking, trekking, and nature photography.

