PostgreSQL 19 ships with parallel autovacuum. The new GUC autovacuum_max_parallel_workers caps the cluster-wide pool, and the per-table storage parameter autovacuum_parallel_workers lets you tune individual tables. Workers come out of the existing max_parallel_workers budget. Off by default. Good.
This is a real improvement, and a lot of people are going to turn it on for the wrong reasons.
What actually parallelizes
VACUUM has supported parallelism since PostgreSQL 13, but only for explicit VACUUM (PARALLEL n) invocations. PG19 finally lets autovacuum use the same machinery. The parallelism is over index cleanup: the heap scan that builds the dead-tuple list is still single-threaded, and so is the heap truncation at the end. What changes is the index pass — with N indexes on a table, autovacuum can hand out indexes to as many as N − 1 worker processes while the leader takes one itself, cleaning them concurrently.
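The machinery itself is older than 19. A manual invocation, with a stand-in table name, looks like this; the requested degree is a ceiling, further capped by the number of eligible indexes and by max_parallel_maintenance_workers:

-- Manual parallel vacuum, available since PostgreSQL 13.
-- 'orders' is a hypothetical table name.
VACUUM (PARALLEL 3, VERBOSE) orders;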
That word “indexes” is doing a lot of work in this post. Read it again.
If your table has one B-tree on the primary key and nothing else, parallel autovacuum does literally nothing for it. The leader handles the index, there are no other indexes to hand out, and the rest of VACUUM was never parallel in the first place. You will not see a speedup. You will not see any change at all.
If your table has eight indexes — a couple of B-trees, a GIN on a jsonb column, a couple of partials, a BRIN on a timestamp — now you have something parallelism can chew on. The GIN cleanup alone is often the dominant cost of vacuuming a wide table, and pushing it to a worker so the leader can clean three smaller B-trees in the meantime is an actual win.
So: the feature helps tables with many (or expensive) indexes. The feature does nothing for tables with one or two cheap indexes. Most tables in most databases are the second kind.
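If you are not sure which kind you have, the catalogs will tell you. A sketch that skips system schemas and ranks tables by total index size:

-- Tables with many or large indexes are the candidates.
SELECT c.relname AS table_name,
       count(*) AS index_count,
       pg_size_pretty(sum(pg_relation_size(i.indexrelid))) AS total_index_size
FROM pg_class c
JOIN pg_index i ON i.indrelid = c.oid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
GROUP BY c.relname
ORDER BY sum(pg_relation_size(i.indexrelid)) DESC
LIMIT 10;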
The CPU isn’t your problem
Here is where people get into trouble. They read “parallel” and think “faster,” and reach for autovacuum_max_parallel_workers = 4 cluster-wide.
VACUUM is overwhelmingly I/O-bound. The heap scan reads pages. The index cleanup reads index pages. The dead-tuple TID store (since PG17) is in memory, but the index pages themselves are on disk, and they have to be brought in. If your storage subsystem is already pinned during a single autovacuum on a single large table, adding three more workers does not unpin it. They are now all queued behind the same disk.
This is the situation on a great many cloud Postgres instances. A managed RDS or Cloud SQL instance with provisioned IOPS in the low thousands, vacuuming a moderately busy 200 GB table, is already at the storage ceiling. Parallelism here gives you the same total throughput, more context switches, and fewer max_parallel_workers slots left for the queries that were trying to use them.
The corollary: parallel autovacuum is a feature for environments with lots of indexes and storage that can keep up — provisioned NVMe, EBS gp3 sized for the workload, local SSD. Both conditions. Either alone is not enough.
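You can check the storage half before flipping anything on. pg_stat_io (PG16 and later) breaks cumulative I/O out by context; if vacuum's read_time is already dominating, more workers will queue rather than speed anything up. The timing columns stay at zero unless track_io_timing = on:

-- Cumulative vacuum-context I/O per backend type (PG16+).
SELECT backend_type, reads, read_time, writes, write_time
FROM pg_stat_io
WHERE context = 'vacuum'
  AND (reads > 0 OR writes > 0);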
Where the worker slots come from
autovacuum_max_parallel_workers is a cap on the autovacuum side, but the workers are drawn from max_parallel_workers, which is also the pool that per-query parallelism draws from. Set this without thinking and an autovacuum on a heavily indexed table can consume four workers, and those are four slots query parallelism does not have for the duration.
The interaction with autovacuum_max_workers compounds this. If you allow three autovacuum workers to run concurrently, each with up to four parallel workers of its own, that is twelve worker slots in use just for vacuum, plus the three autovacuum leaders, plus any active query parallelism. Make sure max_parallel_workers is sized for the worst case before you turn anything on. (max_worker_processes too, while you’re in there.)
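Here is that worst case in executable form, assuming the new GUC is reloadable like most autovacuum settings. The numbers are illustrative, not a recommendation:

-- 3 concurrent autovacuums x 4 parallel workers each.
ALTER SYSTEM SET autovacuum_max_workers = 3;
ALTER SYSTEM SET autovacuum_max_parallel_workers = 12;  -- 3 leaders x 4 helpers
ALTER SYSTEM SET max_parallel_workers = 20;             -- 12 for vacuum, 8 left for queries
ALTER SYSTEM SET max_worker_processes = 24;             -- needs a restart to take effect
SELECT pg_reload_conf();                                -- applies the reloadable ones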
A reasonable configuration
For most clusters, leave it off. The default is zero parallel workers per autovacuum and that is the right answer for the table you were not going to think about anyway.
For specific large tables with many or expensive indexes — wide jsonb tables with GIN, large partitioned tables with multiple indexes per partition, the kind of table whose autovacuum durations are already a problem — set the per-table storage parameter:
ALTER TABLE huge_jsonb_thing
  SET (autovacuum_parallel_workers = 2);
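Check that it stuck; storage parameters land in pg_class.reloptions:

SELECT relname, reloptions
FROM pg_class
WHERE relname = 'huge_jsonb_thing';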
Two is usually plenty. Four is aggressive and you should benchmark before settling there. Anything higher and you are almost certainly adding to I/O queueing rather than shaving wall-clock vacuum time.
Then set autovacuum_max_parallel_workers cluster-wide to a value at least equal to the largest per-table setting you have, and verify max_parallel_workers is large enough to accommodate that on top of your normal query workload.
How to know if it’s working
Check pg_stat_progress_vacuum during an autovacuum on the target table. The leader and the parallel workers will show up there during the index-cleanup phase. If the workers appear and disappear quickly while the leader stays in the heap-scan phase for most of the run, you are looking at a table where parallelism wasn’t going to help anyway. Take it back off.
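Something like this does it, joined to pg_stat_activity so you can tell the rows apart: leader_pid is null for the leader and set for its workers. The indexes_total and indexes_processed columns arrived in PG17:

SELECT p.pid,
       a.leader_pid,
       p.relid::regclass AS table_name,
       p.phase,
       p.indexes_total,
       p.indexes_processed
FROM pg_stat_progress_vacuum p
JOIN pg_stat_activity a USING (pid);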
The other diagnostic is total autovacuum duration on that table over time, from pg_stat_user_tables or your monitoring of choice. If the durations don’t move after enabling it, the feature isn’t doing what you wanted, and the slots could be put to better use elsewhere.
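PG18 added cumulative per-table vacuum timing, which makes the before/after comparison trivial; on older versions, set log_autovacuum_min_duration on the table and read the logs instead:

SELECT relname,
       autovacuum_count,
       total_autovacuum_time,  -- cumulative milliseconds, added in PG18
       last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'huge_jsonb_thing';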