postgresql when it's not your job

3 April 2018

13:00

Change These: archive_mode and archive_command

(An intermittent series on PostgreSQL parameters whose default settings you should change.)

By default archive_mode is set to off, and thus archive_command is ignored. Even if you are not going to be using WAL archiving, you should change these. You can use settings such as:

archive_mode = on
archive_command = '/bin/true'

Turning on archive_mode requires a server restart, but changing archive_command requires only a reload. By getting these set in advance, you’re ready to start WAL archiving at any time without a restart.
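
When you later want to archive for real, it’s just an edit and a reload. For example (the archive directory here is hypothetical; the command is the sample from the PostgreSQL documentation):

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

Then run pg_ctl reload (or SELECT pg_reload_conf(); from psql) to make the change take effect.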

30 March 2018

21:28

Change This: wal_compression

(An intermittent series on PostgreSQL parameters whose default settings you should change.)

Introduced in PostgreSQL 9.5, wal_compression is off by default, but you should probably turn it on.

First, what does it do? The documentation helpfully explains:

When this parameter is on, the PostgreSQL server compresses a full page image written to WAL when full_page_writes is on or during a base backup.

By default (that is, when full_page_writes = on, which is the default setting), the first time a data page is changed after a checkpoint, the entire data page is written into the write-ahead log. (Tomas Vondra has a good explanation of this and why it is done.)

This results in a significant burst of extra WAL traffic after a checkpoint. (This is one of the reasons to increase the time period between checkpoints.) When wal_compression is on, those full-page images are compressed so that they don’t take up quite so much WAL space, reducing both the disk required to hold the WAL and the amount of network traffic required to send the WAL information down to a streaming replica.

It’s almost always a benefit to turn it on. The only time it might hurt performance is if the database is heavily CPU-limited, so that the (small) extra CPU required to compress the WAL information is significant. That’s unlikely, so turn it on.
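
Since no restart is required, enabling it is low-ceremony. A minimal sketch, assuming a superuser session:

ALTER SYSTEM SET wal_compression = on;
SELECT pg_reload_conf();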

27 March 2018

18:12

Mountpoints and the Single PostgreSQL Server

Ultimately, a PostgreSQL database is just files, and those files have to be stored somewhere. Absent tablespaces, they are all stored in a single directory structure, traditionally called PGDATA. While smaller installations can just use the defaults associated with the packaging, larger databases are often installed on their own volume with its own mountpoint.

Two rules to keep in mind when choosing a name for the mountpoint and the directory structure:

  1. Always include the major version number in the directory, but,
  2. Never include the major version in the mountpoint.

For example, /pgsql is a great mountpoint name, but /pgsql/10 isn’t. Instead, call the mountpoint /pgsql and call PGDATA /pgsql/10/data (for example) under it.

Why?

At some point, you’ll probably want to have more than one major version installed on the same server. For example, you may want to use pg_upgrade to upgrade between major versions. In --link mode (which is the most sensible way to use pg_upgrade), you need to create a new cluster for the major version, but that new cluster needs to be on the same file system as the old cluster, so that pg_upgrade can create hard links between the data files.

If the major version is baked into the mountpoint, you start getting things like /pgsql/9.6/10 and similar awful situations.
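
With a generic mountpoint, by contrast, the upgrade stays tidy. Here’s a sketch of the layout and the pg_upgrade invocation (the bindir paths are assumptions; they vary by packaging):

# /pgsql           <- the mountpoint (a single file system)
# /pgsql/9.6/data  <- the old cluster's PGDATA
# /pgsql/10/data   <- the new cluster's PGDATA

pg_upgrade --link \
    --old-datadir=/pgsql/9.6/data \
    --new-datadir=/pgsql/10/data \
    --old-bindir=/usr/lib/postgresql/9.6/bin \
    --new-bindir=/usr/lib/postgresql/10/bin

Because both data directories are on the same file system, --link can hard-link the data files instead of copying them.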

So, make life easy for yourself and make the mountpoint generic, and use a directory path with the major version in it for PGDATA.

20 March 2018

06:28

“Why PostgreSQL is Terrible” at Nordic PGDay

The slides from my talk at Nordic PGDay, Why PostgreSQL is Terrible, are now available.

2 January 2018

23:00

A Replication Cheat-Sheet

So many types of replication, so little time! What kind of PostgreSQL replication should you use?

30 October 2017

14:00

Commitment Issues

One of the essentials of any database system is that a transaction is either in progress, committed, or rolled back. But consider what happens if…

BEGIN;
UPDATE accounts SET money = money + 100000;  -- "accounts" is a stand-in table name
COMMIT;
-- And you get an error that the server has disconnected

Did that transaction commit, or didn’t it? You don’t really know, because you don’t know if the server even received the COMMIT statement; if it didn’t, the transaction might have aborted due to the client disconnection.

This isn’t that important for the vast majority of situations, but if you are (for example) building a two-phase commit architecture, it might be vital to know if the transaction really did commit or not.

In PostgreSQL (version 10 and later, which introduced txid_status()), you can find this out using txid_current() and txid_status():

BEGIN;
UPDATE accounts SET money = money + 100000;
SELECT txid_current();

 txid_current 
--------------
         8168
(1 row)
COMMIT;
-- And you get an error that the server has disconnected

Now, you can use that result to test the status of that transaction (as long as it is still available in the commit log). So, if a disconnection occurs, the client can reconnect and:

SELECT txid_status(8168);

 txid_status 
-------------
 aborted
(1 row)

Now, the client knows it needs to rerun that transaction.

Again, most client applications don’t need this level of detail, but it’s great to have the capability if you do.

27 October 2017

01:29

Streaming replication stopped? One more thing to check.

A client recently asked us to look at a strange replication situation: replication from a primary to a streaming-replication secondary had simply stopped, even though everything looked fine.

After investigation, and some back-and-forth on mailing lists, we discovered the problem was a data ingestion procedure. At the highest level, the stored procedure created a temporary table, filled it, did some data analysis on it, and then inserted the results into a permanent table. Two changes had been made just before the replication stopped:

  1. The number of temporary tables created in each “run” of the stored procedure had been increased from 10 to 20,000.
  2. A bug had been introduced that caused an exception within the loop, but that exception had been caught and discarded.

Logically, what was happening resembled this:

DO $$
DECLARE
  i int := 1;
BEGIN
  FOR i IN 1..20000 LOOP
     BEGIN
         PERFORM f();    -- creates a temporary table
         i := i / 0;     -- the injected bug: raises division_by_zero
     EXCEPTION
        WHEN division_by_zero THEN
           -- swallow the error; this rolls back the implicit savepoint
           -- PL/pgSQL creates at the start of the BEGIN/END block
     END;
  END LOOP;
END;
$$ language plpgsql;

where f() is:

CREATE OR REPLACE FUNCTION f() RETURNS VOID AS $$
BEGIN
 CREATE TEMPORARY TABLE test_table
   ON COMMIT DROP AS SELECT i FROM generate_series(1, 100) i;
END;
$$ language plpgsql;

So, what happened?

Each time PostgreSQL creates a temporary table (or any table, for that matter), it takes an AccessExclusiveLock on it. That lock is held until the end of the transaction. In this case, each time the exception was raised, it rolled back the savepoint that PL/pgSQL implicitly creates at the start of the BEGIN/END block, releasing the lock.

But only on the primary.

The secondary received the lock (AccessExclusiveLocks are the only locks propagated to a secondary), and held it until the entire wrapping transaction committed or rolled back. That’s a lot of locks in this case. PostgreSQL handles this situation, but not entirely gracefully: the time needed to clean up all those locks when the COMMIT arrives at the secondary appears to be O(N^2), N being the number of locks.
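
If you suspect this is happening, you can watch the locks pile up on the secondary. A quick sketch (run it on the secondary while the offending transaction is still open on the primary):

SELECT locktype, database, relation, count(*)
  FROM pg_locks
 WHERE mode = 'AccessExclusiveLock'
 GROUP BY locktype, database, relation
 ORDER BY count(*) DESC;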

Reducing the number of iterations (and fixing the bug) caused the issue to go away.

Interestingly, we’ve not been able to reproduce the lack of error messages in a simple test case; with the test case above, all kinds of errors are emitted on the secondary.

Special thanks to Tom Lane for helping track this down.

01:06

“Humans do not have a primary key” at PGConf.EU

The slides for “Humans do not have a primary key” as presented at PGConf.EU are available here.

7 September 2017

10:19

“Humans do not have a primary key” at PGOpen SV

I’ll be presenting “Humans do not have a primary key” at PGOpen SV today.

5 September 2017

08:51

A brief reminder about casting TIMESTAMPTZ

I spent a few minutes worrying I had lost my mind: Results coming back to my application were different from those obtained by running psql directly on the database server, for identical queries.

As is often the case, the problem was time zones.

In particular, when casting a TIMESTAMPTZ to a DATE, the current client time zone setting is used. So:

xof=# BEGIN;
BEGIN
xof=# set time zone 'UTC';
SET
xof=# select now()::date;
    now     
------------
 2017-09-05
(1 row)

xof=# set time zone 'Australia/Lord_Howe';
SET
xof=# select now()::date;
    now     
------------
 2017-09-06
(1 row)

xof=# COMMIT;
COMMIT

(Extra credit for knowing why I wrapped this example in a transaction block!)

Just a reminder to be very careful when converting items from time zone-aware formats (like TIMESTAMPTZ) to non-aware ones (like TIMESTAMP or DATE). And always store data as a time zone-aware type, unless there is a very compelling reason to do otherwise.
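
If you need a date that doesn’t depend on the session’s time zone setting, make the zone explicit with AT TIME ZONE before casting. A small sketch:

SELECT (now() AT TIME ZONE 'UTC')::date;

This converts the TIMESTAMPTZ to a TIMESTAMP in UTC first, so the cast to DATE no longer consults the client’s time zone.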
