Alphabetical order delivers our first casualty. archive_cleanup_command is a standby-server knob that exists entirely to tidy up after archive_command, which the alphabet insists on deferring until the next post. So we will describe how to sweep up a party we have not yet thrown.
The briefest-possible backfill: a PostgreSQL primary can archive its WAL segments to some location outside the cluster by running archive_command for each completed segment. A standby that restores from that archive can set archive_cleanup_command to a command the server runs at every restartpoint, deleting the archived segments the standby no longer needs.
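To make the sweep concrete before the party: a minimal sketch, assuming a standby whose restore_command pulls from a shared archive directory (the /mnt/wal_archive path is illustrative):

```ini
# postgresql.conf on the standby -- paths illustrative
restore_command = 'cp /mnt/wal_archive/%f %p'

# At each restartpoint, delete archived segments older than %r, the
# oldest WAL file the standby still needs. pg_archivecleanup ships
# with PostgreSQL and exists for exactly this job.
archive_cleanup_command = 'pg_archivecleanup /mnt/wal_archive %r'
```

The %r placeholder is the whole trick: the server substitutes the last restartpoint's oldest-required segment name, so the cleanup command never races ahead of recovery.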
The previous post on the Linux 7.0 pgbench regression ended with the same instruction every other Postgres performance post ends with: set huge pages. This post is the long version. If you have read the Postgres docs on huge_pages and you’re still not completely sure what /proc/meminfo is telling you, or what the relationship is between vm.nr_hugepages and Transparent Huge Pages, this post is for you.
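The short version, for reference, is two config fragments. A sketch, with an illustrative page count; since PostgreSQL 15 you can ask the server itself for the right number via the read-only shared_memory_size_in_huge_pages parameter (postgres -C shared_memory_size_in_huge_pages against a stopped cluster):

```ini
# /etc/sysctl.d/99-postgres.conf -- 2200 is illustrative; size it from
# shared_memory_size_in_huge_pages, plus a little headroom
vm.nr_hugepages = 2200
```

```ini
# postgresql.conf
# 'try' is the default and falls back silently when the kernel pool is
# too small; 'on' fails to start instead, which is what you want.
huge_pages = on
```

After a restart, HugePages_Total and HugePages_Free in /proc/meminfo tell you whether the reservation happened and whether Postgres actually took it.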
Last year’s acquisitions have now shipped products, and for the first time it is possible to compare the Snowflake and Databricks “Postgres-in-the-lakehouse” strategies as real things rather than as marketing decks.
The acquisitions: Snowflake bought Crunchy Data in June 2025 (around $250M) and released pg_lake as open source under Snowflake-Labs in November. Databricks bought Neon in May 2025 (reportedly around $1B).
Most GUCs in this series will be operationally irrelevant to most readers. This one is not. application_name is the single cheapest piece of observability infrastructure PostgreSQL ships, and an astonishing number of production databases are running with it unset or stuck at a client library’s default (psql, PostgreSQL JDBC Driver, or — my favorite — the empty string).
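The whole feature fits in a sketch. The name checkout-worker is illustrative; anything that identifies the service will do:

```sql
-- Set it per connection: in a URI...
--   postgresql://app@db.example/prod?application_name=checkout-worker
-- ...or from the session:
SET application_name = 'checkout-worker';

-- It now shows up in pg_stat_activity, and in the server log wherever
-- log_line_prefix includes %a:
SELECT pid, application_name, state, query
FROM pg_stat_activity
WHERE application_name = 'checkout-worker';
```

One line in the connection string, and "which service is holding that lock" becomes a WHERE clause instead of an archaeology project.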
A benchmark came out of AWS earlier this month showing PostgreSQL throughput on Linux 7.0 dropping to 0.51x what the same workload produced on Linux 6.x. The Phoronix headline wrote itself. Hacker News did what Hacker News does. By the end of the week, I had been asked by three separate clients whether they needed to hold their kernel upgrades.
PostgreSQL 18 shipped asynchronous I/O. On Linux the headline io_method was io_uring; the shipped default, and the fallback everywhere else, was a worker pool controlled by io_method=worker. Early benchmarks from pganalyze, Aiven, and Better Stack showed real wins on read-heavy workloads with large sequential scans. They also showed that the worker fallback needed careful tuning — the default worker count did not keep up on larger machines.
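The tuning in question is two lines of postgresql.conf. A sketch; the worker count here is illustrative, not a recommendation:

```ini
# postgresql.conf (PostgreSQL 18+)
io_method = worker   # the portable default; 'io_uring' where the build supports it
io_workers = 8       # default is 3, sized for small machines; scale with the host
```

Both take effect at server start, so benchmarking alternatives means a restart per configuration, not a reload.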
Here is a GUC that ships with a warning label. The docs, which are normally restrained to the point of parody, state plainly that setting this parameter wrong can cause “irretrievable data loss or seriously corrupt the database system.” When the PostgreSQL docs raise their voice, listen.
allow_system_table_mods is off by default. Turning it on lets a superuser perform structural DDL against the system catalogs themselves: renaming, altering, or dropping the tables in pg_catalog that the server depends on to function.
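For the avoidance of doubt, a sketch of the category of statement this unlocks, to be typed only into a throwaway cluster you intend to destroy — this is precisely the "irretrievable data loss" territory the docs warn about:

```sql
-- Superuser, disposable cluster only.
SET allow_system_table_mods = on;

-- DDL against pg_catalog is now accepted, e.g.:
ALTER TABLE pg_catalog.pg_largeobject RENAME TO pg_largeobject_broken;
-- Large-object operations in this database now fail, and nothing
-- will offer to rename it back for you.
```

There is no supported production use case here; the legitimate audience is extension developers and people doing catalog surgery under guidance from a hacker who already knows the risk.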
If you’re going to hire a PostgreSQL consultant, hire one. That means access to the database.
I’m writing this because the “we hired you but you can’t touch the thing” conversation happens at the start of roughly one in four PGX engagements, and I would like to have something to point at instead of having the same conversation over and over.
allow_in_place_tablespaces exists so the PostgreSQL test suite can test replication. That’s it. If you’re reading this as an operator, you will never touch it. But it’s in the alphabet, so here we are.
When off (the default), CREATE TABLESPACE requires a LOCATION that points to an existing, empty, absolute directory path. The server creates a symbolic link in $PGDATA/pg_tblspc/ pointing at that directory. When on, an empty LOCATION is also accepted, and the tablespace is created as a real directory inside pg_tblspc/ itself: an “in-place” tablespace, which is what lets the test suite run a primary and standby on one machine without their tablespace paths colliding.
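For completeness, the whole developer-only dance, with an illustrative tablespace name:

```sql
-- Developer/testing use only; not meant for production clusters.
SET allow_in_place_tablespaces = on;

-- An empty LOCATION now creates the tablespace as a plain directory
-- under $PGDATA/pg_tblspc/<oid>, rather than a symlink to an
-- absolute path elsewhere on the filesystem:
CREATE TABLESPACE regress_ts LOCATION '';
```

Because the directory lives inside the data directory, a standby initialized from this cluster gets its own private copy instead of fighting the primary for the same absolute path.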
Robert Haas’s pg_plan_advice patch set, proposed for PostgreSQL 19, is where the twenty-year argument from Part 2 has landed — or is trying to. It is not pg_hint_plan brought into core. It is a different thing, with different mechanics, a different scope, and a different answer to the “why is this different from Oracle-style hints” question.