autovacuum_max_workers sets the maximum number of autovacuum worker processes that may run simultaneously. Default is 3. Context is postmaster, so changing it requires a restart. The launcher process is separate and not counted against this number.
This is the parameter that gets raised from 3 to 10 by someone who has decided autovacuum is too slow, after which they discover that vacuum is, somehow, not actually any faster. There is a reason for that, and it is the most important thing to understand about this GUC.
The reason is autovacuum_vacuum_cost_limit. That parameter (which gets its own post in due course) sets a total I/O budget that is divided across all currently running autovacuum workers. If the cost limit is 200 and one worker is running, that worker gets 200. If three workers are running, each gets ~66. If ten workers are running, each gets 20. Adding workers without also raising the cost limit doesn’t increase total vacuum throughput; it just slices the same pie into more pieces. Each individual table now takes longer to vacuum. The launcher gets to start more vacuums in parallel; the actual rate at which dead tuples get reclaimed is approximately unchanged.
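The division is worth making concrete. A toy model of it (the function name is mine; the real behavior also involves workers sleeping for autovacuum_vacuum_cost_delay each time they exhaust their budget, which this ignores):

```python
def per_worker_budget(cost_limit: int, active_workers: int) -> float:
    """Autovacuum splits the total cost budget across all active
    workers, so each worker's share shrinks as more workers run."""
    return cost_limit / active_workers

# Same total budget, sliced thinner as workers are added.
for workers in (1, 3, 10):
    print(workers, per_worker_budget(200, workers))
```

The total across all workers is 200 in every case; only the per-table pace changes.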
So when does raising autovacuum_max_workers actually help?
- Many tables needing attention at once. If you have hundreds of partitions, or a multi-tenant schema with thousands of tables, and your monitoring shows large tables waiting for a worker slot to free up, you have a parallelism problem. Add workers.
- Mixed table sizes. With three workers, three large tables can monopolize all of them for hours, leaving small tables that need quick attention queued behind them. More workers means small tables can slip in. (autovacuum_naptime interacts here too.)
In both cases, raise the cost limit to match. A reasonable starting heuristic: keep the per-worker budget roughly constant. If you're going from 3 workers to 6, double the budget from 200 to 400. (Strictly, autovacuum_vacuum_cost_limit defaults to -1, which means it inherits vacuum_cost_limit, and that is what defaults to 200.) The exact numbers depend on your storage's I/O headroom; modern NVMe will tolerate aggressive vacuum settings that would have melted a 2010 SAN.
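Putting the two knobs together, the 3-to-6 change above would look something like this (values illustrative, not a recommendation):

```
# postgresql.conf -- doubling workers, so double the shared budget
autovacuum_max_workers = 6           # postmaster context: requires restart
autovacuum_vacuum_cost_limit = 400   # reload is enough; keeps ~66 per worker
```

Note the asymmetry: the cost limit takes effect on reload, but the worker count does not apply until the next restart.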
One operational note: autovacuum workers do not count against max_connections, but they do consume shared memory and proc array (PGPROC) slots that are allocated at startup based on this parameter, which is why changing it requires a restart. Raising it costs a small amount of memory you will not miss.
Recommendation: Leave it at 3 unless you have evidence — concretely, autovacuum log entries showing tables waiting unreasonably long for a worker — that your workload needs more. When you do raise it, raise autovacuum_vacuum_cost_limit proportionally, or you have done nothing useful.
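One quick way to gather that evidence on a live server (backend_type has existed in pg_stat_activity since PostgreSQL 10):

```sql
-- If this count sits pinned at autovacuum_max_workers for long
-- stretches, workers are saturated and tables are queueing behind them.
SELECT count(*) AS busy_workers
FROM pg_stat_activity
WHERE backend_type = 'autovacuum worker';
```

Pair it with log_autovacuum_min_duration = 0 for a while to see how long individual vacuums are actually taking.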