The last entry in the autovacuum cluster, and the newest. PostgreSQL 18 introduced autovacuum_worker_slots to solve a long-standing operational annoyance: changing autovacuum_max_workers used to require a server restart, which made it impossible to respond to an evolving vacuum workload without a maintenance window. PG 18 fixes that by splitting the parameter in two.

The split

autovacuum_worker_slots reserves shared-memory slots for autovacuum workers at server startup. The default is 16, possibly fewer if kernel limits force initdb to pick a smaller number. Its context is postmaster: changing it still requires a restart, because that is when the shared memory is allocated.
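
You can confirm the split on a running PG 18 instance; a quick look at pg_settings shows both parameters and their change contexts:

    -- 'postmaster' = restart required; 'sighup' = reloadable
    SELECT name, setting, context
    FROM pg_settings
    WHERE name IN ('autovacuum_worker_slots', 'autovacuum_max_workers');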

autovacuum_max_workers, in PG 18, is now sighup. You can raise or lower the number of active workers on a running server, as long as you stay within the slots reserved at startup. If you set max_workers higher than worker_slots, the server caps the effective value at worker_slots and logs a warning that you should remember to read.
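
A live adjustment is two statements; a minimal sketch, with 6 standing in for whatever your workload actually needs:

    -- Raise active autovacuum workers without a restart
    ALTER SYSTEM SET autovacuum_max_workers = 6;
    SELECT pg_reload_conf();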

The pattern: reserve generously at startup, tune live. It is the same design pattern as max_connections versus actual concurrent backends, or max_wal_senders versus active streams — pre-allocate the upper bound, run the system at whatever fraction of that bound is currently right. PostgreSQL has been moving toward this design across several subsystems for years, and autovacuum was overdue.
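
In postgresql.conf terms, the pattern reads like this (values illustrative, not prescriptive):

    autovacuum_worker_slots = 16   # the ceiling: fixed until the next restart
    autovacuum_max_workers = 3     # the current target: reloadable, capped by the ceiling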

Why this matters

Pre-PG 18, raising autovacuum_max_workers from 3 to 6 in response to a multi-tenant database growing to 800 schemas was a maintenance window. You either set it high enough at the start and lived with the headroom, or you scheduled downtime to change it.

Post-PG 18, you set autovacuum_worker_slots = 16 at install time (or higher if you anticipate a really large fleet), and you can adjust autovacuum_max_workers from 3 to 6 to 12 at any time with ALTER SYSTEM plus pg_reload_conf(), observing the impact on vacuum throughput and disk I/O before committing to the change permanently. The cost of being wrong is a SIGHUP, not an outage.
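
For the observation step, two standard views cover it; a sketch of the sort of check I mean:

    -- How many autovacuum workers are busy right now?
    SELECT count(*) AS active_workers
    FROM pg_stat_activity
    WHERE backend_type = 'autovacuum worker';

    -- What are they working on, and how far along?
    SELECT pid, relid::regclass AS table_name, phase
    FROM pg_stat_progress_vacuum;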

Tuning

  • Leave it at the default of 16. That is enough headroom for almost every workload, and the only memory cost is a handful of shared-memory entries you will never miss.
  • Raise it only if you have a genuinely massive multi-tenant or multi-database deployment where you anticipate needing more than 16 concurrent autovacuum workers. The number is a hard ceiling on autovacuum_max_workers, so make sure it covers your worst-case scenario; a quick headroom check is sketched after this list.
  • Lower it essentially never. Even on small servers, reserving 16 slots that you might use 3 of costs nothing meaningful.
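
If you do touch either value, here is a quick sanity check that the current target still fits inside the reserved ceiling (a minimal sketch):

    SELECT
      current_setting('autovacuum_max_workers')::int  AS max_workers,
      current_setting('autovacuum_worker_slots')::int AS worker_slots;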

Recommendation: Leave autovacuum_worker_slots = 16 at the default on PG 18. Use the new sighup-able autovacuum_max_workers to actually tune autovacuum concurrency live. This is the parameter that makes the autovacuum_max_workers post’s advice — “raise it when your monitoring shows tables waiting for workers” — finally executable without a maintenance window. After fifteen years.

That closes the autovacuum cluster. Sixteen posts on autovacuum-related parameters, one of which (autovacuum) governs whether any of the rest of them matter. Next up, B.