10:23
“Writing a Foreign Data Wrapper” at PGCon 2023
I’ll be speaking about Writing a Foreign Data Wrapper at PGCon 2023 in Ottawa, May 30-June 2, 2023. Do come! It’s the premiere technical/hacker conference for PostgreSQL.
01:14
In a comment on my earlier post on max_wal_size, Lukas Fittl asked a perfectly reasonable question:
Re: “The only thing it costs you is disk space; there’s no other problem with it being too large.”
Doesn’t this omit the fact that a higher max_wal_size leads to longer recovery times after a crash? In my experience that was the reason why you wouldn’t want max_wal_size to e.g. be 100GB, since it means your database might take a while to get back up and running after crashes.
The answer is… as you might expect, tricky.
The reason is that there are two different ways a checkpoint can be started in PostgreSQL (in regular operations, that is; there are a few more, such as manual CHECKPOINT commands and the start of a backup using pg_start_backup). Those are when PostgreSQL thinks it needs to checkpoint to avoid overrunning max_wal_size (by too much), and when checkpoint_timeout is reached. It starts a checkpoint on the first of those that it hits.
The theory behind my recommendations on checkpoint tuning is to increase max_wal_size to the point that you are sure it is always checkpoint_timeout that fires rather than max_wal_size. That in effect caps the checkpoint interval, so larger values of max_wal_size don’t change the checkpoint behavior once it has reached the level at which checkpoint_timeout is always the reason a checkpoint starts.
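One way to confirm which trigger is winning is to compare the checkpoint counters (on PostgreSQL 16 and earlier these are in pg_stat_bgwriter; PostgreSQL 17 moves them to pg_stat_checkpointer):

-- Timed checkpoints vs. checkpoints requested for other reasons
-- (max_wal_size, manual CHECKPOINT, backups, etc.)
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;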
But Lukas does raise a very good point: the time it takes to recover a PostgreSQL system from a crash is proportionate to the amount of WAL that it has to replay, in bytes, and that’s soft-capped by max_wal_size. If crash recovery speed is a concern, it might make sense to not go crazy with max_wal_size, and cap it at a lower level.
Pragmatically, crashes are not common and checkpoints are very common, so I recommend optimizing for checkpoint performance rather than recovery time… but if your system is very sensitive to recovery time, going crazy with max_wal_size is probably not a good idea.
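As a rough gauge of that exposure, you can ask how far the current WAL position is past the last checkpoint’s redo point; that’s approximately how much WAL crash recovery would have to replay if the server died right now (a minimal sketch):

SELECT pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), redo_lsn)
       ) AS wal_to_replay_if_we_crashed_now
FROM pg_control_checkpoint();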
13:21
The reality is that most PostgreSQL configuration parameters don’t have a huge impact on overall system performance. There are, however, a couple that really can make a huge difference when tuned from the defaults. work_mem is one of them, and max_wal_size is another.
max_wal_size controls how large the write-ahead log can get on disk before PostgreSQL does a checkpoint. It’s not a hard limit; PostgreSQL adapts checkpoint frequency to keep the WAL on disk no larger than that, but excursions above it can definitely happen. The only thing it costs you is disk space; there’s no other problem with it being too large.
Having max_wal_size too small can cause checkpoints to happen very frequently. Frequent checkpointing is bad for two reasons: it increases the volume of WAL generated, since each data page modified after a checkpoint gets a full-page image written to the WAL, and it increases I/O, since the dirty buffers in shared_buffers need to be written out.
Here’s a process to set max_wal_size properly:
First, set the general checkpoint parameters. This is a good start:
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
wal_compression = on
log_checkpoints = on
max_wal_size = 16GB
Then, let the system run, and check the logs (or any other tools you may have to determine checkpoint frequency). If the checkpoints are happening more frequently than every 15 minutes, increase max_wal_size until they are being triggered by the timeout.
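If you’d rather not edit postgresql.conf by hand, the same adjustment can be made from SQL; max_wal_size takes effect on a reload, with no restart needed (32GB here is just an illustrative next step up from the 16GB above):

ALTER SYSTEM SET max_wal_size = '32GB';
SELECT pg_reload_conf();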
How about min_wal_size? This controls the amount of reserved WAL that PostgreSQL will retain on disk even if it doesn’t otherwise need it. This can speed up WAL writing slightly, since PostgreSQL can reuse one of those retained files instead of having to create a new one. There’s no harm in bumping it up (again, all it costs is disk space), but in nearly every environment, the performance impact is small.
07:10
The slides from my presentation “Real-World Logical Replication” are now available.
06:48
The slides are now available for my talk “Database Antipatterns, and where to find them” at SCaLE 20x.
13:27
If you google around for how to set work_mem in PostgreSQL, you’ll probably find something like:
To set work_mem, take the number of connections, add 32, divide by your astrological sign expressed as a number (Aquarius is 1), convert it to base 7, and then read that number in decimal megabytes.
So, I am here to tell you that every formula for setting work_mem is wrong. Every. Single. One. They may not be badly wrong, but they are at best first cuts and approximations.
The problem is that of all the parameters you can set in PostgreSQL, work_mem is about the most workload-dependent. You are trying to balance two competing things: giving queries enough memory that sorts and hashes don’t spill to disk as temporary files, and not handing out so much memory that the system runs out of it when many queries use their full allowance at once.
You can prevent the second situation with a formula. For example, you can use something like:
50% of free memory + file system buffers divided by the number of connections.
The chance of running out of memory using that formula is very low. It’s not zero, because a single query can use more than work_mem if there are multiple execution nodes demanding it in a query, but that’s very unlikely. It’s even less likely that every connection will be running a query that has multiple execution nodes that require full work_mem; the system will have almost certainly melted down well before that.
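To make that concrete, here’s a hypothetical query (the orders table is made up) where, depending on the plan, the aggregate and the final sort can each claim their own work_mem-sized budget:

EXPLAIN
SELECT customer_id, count(*) AS order_count
FROM orders
GROUP BY customer_id        -- the aggregate (hash or sort based) gets its own work_mem budget
ORDER BY order_count DESC;  -- the final sort gets another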
The problem with using a formula like that is that you are, to mix metaphors, leaving RAM on the table. For example, on a 48GB server with max_connections = 1000, you end up with a work_mem in the 30MB range. That means that a query that needs 64MB, even if it is the only one on the system that needs that much memory, will be spilled to disk while there’s a ton of memory sitting around available.
So, here’s what you do:
Pick a conservative starting value for work_mem, and then run the system under a realistic production load with log_temp_files = 0 set. That logs every temporary file that queries create, with entries like:
2023-03-13 13:19:03.863 PDT,,,45466,,640f8503.b19a,1,,2023-03-13 13:18:11 PDT,6/28390,0,LOG,00000,"temporary file: path ""base/pgsql_tmp/pgsql_tmp45466.0"", size 399482880",,,,,,"explain analyze select f from t order by f;",,,"psql","parallel worker",44989,0
If temporary files show up, a good starting value for the work_mem needed to get rid of them is 2 times the largest temporary file (temporary files have less overhead than memory operations).
Of course, that might come up with something really absurd, like 2TB. Unless you know for sure that only one query like that might be running at a time (and you really do have enough freeable memory), you might have to make some decisions about performance vs memory usage. It can be very handy to run the logs through an analyzer like pgbadger to see what the high water mark is for temporary file usage at any one time.
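A minimal sketch of that loop in SQL (the 768MB figure is purely illustrative; derive yours from the temporary file sizes you actually see):

ALTER SYSTEM SET log_temp_files = 0;       -- log every temporary file, regardless of size
SELECT pg_reload_conf();

-- Later, try a candidate value in a single session before changing it globally:
SET work_mem = '768MB';                    -- illustrative value only
EXPLAIN ANALYZE SELECT f FROM t ORDER BY f;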
If you absolutely must use a formula (for example, you are deploying a very large fleet of servers with varying workloads and instance sizes and you have to put something in the Terraform script), we’ve had good success with:
(average freeable memory * 4) / max_connections
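As a worked example with purely hypothetical numbers: a fleet whose instances average 16GB of freeable memory and run with max_connections = 200 would get (16GB * 4) / 200 = 64GB / 200 ≈ 320MB.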
But like every formula, that’s at best an approximation. If you want an accurate number that maximizes performance without causing out-of-memory issues, you have to gather data and analyze it.
Sorry for any inconvenience.
08:35
I’m currently scheduled to speak at:
I hope to see you at one of these!
20:07
Over the course of the last few versions, PostgreSQL has introduced all kinds of background worker processes, including workers to do various kinds of things in parallel. There are enough now that it’s getting kind of confusing. Let’s sort them all out.
You can think of each setting as creating a pool of potential workers. Each setting draws its workers from a “parent” pool. We can visualize this as a Venn diagram:
max_worker_processes sets the overall size of the worker process pool. You can never have more than that many background worker processes in the system at once. This only applies to background workers, not the main backend processes that handle connections, or the various background processes (autovacuum daemon, WAL writer, etc.) that PostgreSQL uses for its own operations.
From that pool, you can create up to max_parallel_workers parallel execution worker processes. These come in two types:
Parallel maintenance workers, which handle parallel activities in index creation and vacuuming. max_parallel_maintenance_workers sets the maximum number that can exist at one time.
Parallel query workers. These processes are started automatically to parallelize queries. The maximum number here isn’t set directly; instead, it is governed by max_parallel_workers_per_gather, the maximum number of processes that one gather execution node can start. Usually, there’s only one gather node per query, but complex queries can use multiple sets of parallel workers (much like a query can have multiple nodes that all use work_mem).
So, what shall we set these to?
Background workers that are not parallel workers are not common in PostgreSQL at the moment, with one notable exception: logical replication workers. The maximum number of these is set by the parameter max_logical_replication_workers. What to set that parameter to is a subject for another post.
I recommend starting the tuning with max_parallel_workers, since that’s going to be the majority of worker processes going at any one time. A good starting value is 2-3 times the number of cores in the server running PostgreSQL. If there are a lot of cores (32 to 64 or more), 1.5 times might be more appropriate.
For max_worker_processes, a good place to start is to sum:
max_parallel_workers
max_logical_replication_workers
Then, consider max_parallel_workers_per_gather. If you routinely process large result sets, increasing it from the default of 2 to 4-6 is reasonable. Don’t go crazy here; a query rapidly reaches a point of diminishing returns in spinning up new parallel workers.
For max_parallel_maintenance_workers, 4-6 is also a good value. Go with 6 if you have a lot of cores, 4 if you have more than eight cores, and 2 otherwise.
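Putting those recommendations together, a hypothetical starting point for a 16-core server that keeps the default of four logical replication workers (the numbers are illustrative, not prescriptive):

ALTER SYSTEM SET max_parallel_workers = 32;              -- 2x the 16 cores
ALTER SYSTEM SET max_worker_processes = 36;              -- 32 parallel + 4 logical replication workers (requires a restart)
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
ALTER SYSTEM SET max_parallel_maintenance_workers = 4;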
Remember that every worker in parallel query execution can individually consume up to work_mem in working memory. Set that appropriately for the total number of workers that might be running at any one time. Note that it’s not just work_mem x max_parallel_workers_per_gather! Each individual worker can use more than work_mem if it has multiple operations that require it, and any non-parallel queries can do so as well.
Finally, max_parallel_workers, max_parallel_maintenance_workers, and max_parallel_workers_per_gather can be set for an individual session (or role, etc.), so if you are going to run an operation that will benefit from a large number of parallel workers, you can increase it for just that query. Note that the overall pool is still limited by max_worker_processes, and changing that requires a server restart.
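For example, to give one expensive analytical query a larger share of the pool without touching the cluster-wide settings (the query and table are hypothetical):

SET max_parallel_workers_per_gather = 8;   -- affects only this session
SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
RESET max_parallel_workers_per_gather;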
18:37
Normally, when you drop a column from PostgreSQL, it doesn’t have to do anything to the data in the table. It just marks the column as no longer alive in the system catalogs, and gets on with business.
There is, however, a big exception to this: ALTER TABLE … SET WITHOUT OIDS. This pops up when using pg_upgrade to upgrade a database to a version of PostgreSQL that doesn’t support table OIDs (if you don’t know what OIDs on user tables were, or why they existed, that’s a topic for a different time).
ALTER TABLE … SET WITHOUT OIDS rewrites the whole table, and reindexes the table as well. This can take up quite a bit of secondary storage space:
It needs enough free disk space to hold a complete second copy of the table while it is being rewritten.
The reindexing can also take significant storage in the temporary file area (pgsql_tmp), since it may need to spill the required sorts to disk. This can be mitigated by increasing maintenance_work_mem.
So, plan for some extended table locking if you do this. If you have a very large database to upgrade, and it still has tables with OIDs, this may be an opportunity to upgrade via logical replication rather than pg_upgrade.
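To see how much of this is ahead of you before an upgrade, a sketch you can run on the old cluster (PostgreSQL 11 or earlier, where pg_class still has the relhasoids column; the table name in the ALTER is hypothetical):

-- User tables that still carry OIDs, with their total on-disk size
SELECT c.oid::regclass AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relhasoids
  AND c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');

-- Then, table by table:
ALTER TABLE my_big_table SET WITHOUT OIDS;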
21:34
This topic pops up very frequently: “Should we use UUIDs or bigints as primary keys?”
One of the reasons that the question gets so many conflicting answers is that there are really two different questions being asked: should the keys be random or sequential, and should they be 64 bits or 128 bits? Let’s take them independently.
There are strong reasons for either one. The case for random keys is:
They’re more-or-less self-hashing, if the randomness is truly random. This means that if an outside party sees that you have a customer number 109248310948109, they can’t rely on you having a customer number 109248310948110. This can be handy if keys are exposed in URLs or inside of web pages, for example. You can expose 66ee0ea6-dad8-4b0b-af1c-bdc55ccd45e to the world with a pretty high level of confidence you haven’t given an attacker useful information.
It’s much easier to merge databases or tables together if the keys are random (and highly unlikely to collide) than if the keys are serials starting at 1.
The case for sequential keys is:
Sequential keys are (sometimes much) faster to generate than random keys.
Sequential keys have much better interaction with B-tree indexes than random keys, since inserting a new key doesn’t have to consult as many pages as it does with a random key. Different tests have come up with different results on how big the performance difference is, but random keys are always going to be slower than sequential ones in this case. (Note, however, that the tests almost always compare bigint to UUID, and that’s conflating both the sequential vs random and 64-bit vs 128-bit properties.)
As we note below, “sequential” doesn’t automatically mean bigint! There are implementations of UUIDs (or, at least, 128-bit UUID-like values) that have high order sequential bits but low order random bits. This avoids the index locality problems of purely random keys, while preserving (to an extent) the self-hashing behavior of random keys.
It’s often just taken for granted that when we say “random” keys, we mean UUIDs, but there’s nothing intrinsic about bigint keys that means they have to be sequential, or (as we noted above) about UUID keys that requires they be purely random.
bigint values will be more performant in PostgreSQL than 128 bit values. Of course, one reason is just that PostgreSQL has to move twice as much data (and store twice as much data on disk). A more subtle reason is the internal storage model PostgreSQL uses for values. The Datum type that represents a single value is the “natural” word length of the processor (64 bits on a 64 bit processor). If the value fits in 64 bits, the Datum is just the value. If it’s larger than 64 bits, the Datum is a pointer to the value. Since UUIDs are 128 bits, this adds a level of indirection and memory management to handling one internally. How big is this performance issue? Not large, but it’s not zero, either.
So, if you don’t think you need 128 bits of randomness (really, 124 bits plus a type field) that a UUID provides, consider using a 64 bit value even if it is random, or if it is (for example) 16 bits of sequence plus 48 bits of randomness.
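As a sketch of that last layout (the sequence, function, and table names are made up, and this is only one of several reasonable designs): a bigint key whose high bits come from a sequence and whose low 48 bits are random. Using 15 rather than a full 16 sequence bits keeps the result in the positive half of the bigint range.

CREATE SEQUENCE id_hi_seq;

CREATE FUNCTION gen_semi_sequential_id() RETURNS bigint AS $$
  SELECT ((nextval('id_hi_seq') % 32768) << 48)           -- 15 sequential high bits
         | (floor(random() * 281474976710656))::bigint;   -- 48 random low bits (2^48)
$$ LANGUAGE sql;

-- Hypothetical usage as a primary key default:
CREATE TABLE account (
    id   bigint PRIMARY KEY DEFAULT gen_semi_sequential_id(),
    name text
);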
If you are particularly concerned about exposing information, one consideration is that keys that have sequential properties, even just in the high bits, can expose the rate of growth of a table and the total size of it. This may be something you don’t want to run the risk of leaking; a new social media network probably doesn’t want the outside world keeping close track of the size of the user table. Purely random keys avoid this, and may be a good choice if the key is exposed to the public in an API or URL. Limiting the number of high-order sequential bits can also mitigate this, at a (probably small) cost in locality for B-tree indexes.