autovacuum_work_mem sets the maximum memory each autovacuum worker may use for tracking dead tuple identifiers (TIDs) during a vacuum. Default is -1, which means “inherit from maintenance_work_mem.” Context is sighup. The parameter exists so that autovacuum’s memory consumption can be tuned independently of the memory used by manual VACUUM, CREATE INDEX, REINDEX, and other one-off maintenance operations.
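
A minimal sketch of setting it, with an illustrative value; because the context is sighup, a configuration reload is enough to apply the change:

    -- Cap each autovacuum worker's TID storage at 256MB (illustrative value),
    -- independent of maintenance_work_mem. Sighup context: reload, no restart.
    ALTER SYSTEM SET autovacuum_work_mem = '256MB';
    SELECT pg_reload_conf();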

The motivation: each autovacuum worker allocates this memory independently. With autovacuum_max_workers = 3 (the default) and maintenance_work_mem = 1GB, a fully busy autovacuum can consume 3GB before any manual maintenance starts. People who sized maintenance_work_mem upward for fast manual REINDEX operations were unintentionally giving each autovacuum worker the same large allocation. autovacuum_work_mem lets you set a lower per-worker limit for the background process while keeping the higher value for foreground operations.
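
That worst case is easy to compute from live settings; a sketch, assuming autovacuum_work_mem is still -1 so each worker inherits maintenance_work_mem:

    -- Worst-case TID-storage memory if every autovacuum worker is busy at once.
    SELECT pg_size_pretty(
             current_setting('autovacuum_max_workers')::int
             * pg_size_bytes(current_setting('maintenance_work_mem'))
           ) AS worst_case_autovacuum_memory;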

The PostgreSQL 17 sea change

This is one of the parameters where the right operational advice fundamentally changed in PostgreSQL 17. To understand why, consider the failure mode it controls:

When vacuum scans a table, it builds a list of TIDs of dead tuples. To clean indexes, it then walks each index and deletes entries pointing at any TID in the list. If the TID list does not fit in memory, vacuum must stop, do a complete pass over every index, discard the list, restart the table scan from where it stopped, build a new list, do another complete pass over every index, and so on. On a heavily indexed table, multi-pass index cleanup is catastrophic: twelve indexes and ten passes means 120 full index scans, and a vacuum that should take minutes takes hours.

Before PG 17, vacuum stored TIDs in a flat array, capped at 1GB regardless of how much memory you gave it; the cap came from the maximum size of a single palloc allocation (MaxAllocSize), and at 6 bytes per ItemPointerData that works out to about 179M dead tuples (1GB / 6 bytes ≈ 178.9 million). If your largest table could accumulate more than 179M dead tuples between vacuums, you were guaranteed multi-pass index scans, and the only fix was per-table scale factor tuning to keep dead-tuple counts down.

PG 17 replaced that flat array with a TIDStore backed by an adaptive radix tree. The 1GB cap is gone, the same memory now holds many more TIDs, and on most workloads vacuum uses a fraction of the memory it used to need. Multi-pass index cleanup, while still possible, is now genuinely rare.
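
On PG 17 you can see how much TIDStore memory a running vacuum actually uses; a sketch against pg_stat_progress_vacuum, assuming the PG 17 column names (dead_tuple_bytes and max_dead_tuple_bytes replaced the old tuple-count columns):

    -- PG 17+: actual vs. available TID-storage memory for each running vacuum.
    SELECT pid,
           relid::regclass AS table_name,
           dead_tuple_bytes,
           max_dead_tuple_bytes
    FROM pg_stat_progress_vacuum;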

Tuning

On PG 16 and earlier:

  • The 1GB cap is the operational ceiling regardless of what you set. Anything above that is wasted.
  • Setting autovacuum_work_mem = 1GB explicitly is reasonable on heavily-updated large tables to ensure single-pass index cleanup.
  • The real fix for tables exceeding 179M dead tuples is per-table autovacuum_vacuum_scale_factor, not memory tuning (a sketch follows this list).
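
A minimal sketch of that per-table tuning; the table name big_events is hypothetical and the values are illustrative starting points, not universal answers:

    -- Vacuum big_events after roughly 1% of it is dead, instead of the 20%
    -- default, so the dead-tuple count stays well under the ~179M array limit.
    ALTER TABLE big_events SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold    = 100000
    );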

On PG 17 and later:

  • Leave it at -1 and let it inherit maintenance_work_mem. The radix tree handles most workloads with far less memory.
  • If you have raised autovacuum_max_workers significantly (say, to 6 or more), consider setting autovacuum_work_mem lower than maintenance_work_mem to cap total autovacuum memory consumption — the radix tree does its work in less, and you free memory for foreground operations.
  • Watch pg_stat_progress_vacuum.index_vacuum_count during long autovacuums (a query follows this list). A value greater than 1 means vacuum ran out of memory and entered multi-pass territory. On PG 17 this is uncommon enough to be worth investigating when it happens.
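
A sketch of that check; index_vacuum_count has been in pg_stat_progress_vacuum since 9.6, so the same query works before and after the PG 17 changes:

    -- List running (auto)vacuums; index_vacuum_count > 1 means the TID storage
    -- filled up and vacuum has started another round of index cleanup.
    SELECT p.pid,
           p.relid::regclass AS table_name,
           p.phase,
           p.index_vacuum_count
    FROM pg_stat_progress_vacuum p;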

Recommendation: On PG 17+, leave it at -1. On PG 16 and earlier, set it explicitly to 1GB on production systems with heavily updated large tables, and tune autovacuum_vacuum_scale_factor per-table for anything that can accumulate more than ~150M dead tuples between vacuums. Either way, verify that your setting is adequate by watching for more than one index pass per vacuum: index_vacuum_count > 1 in pg_stat_progress_vacuum, or an "index scans: 2" (or higher) figure in autovacuum log entries (log_autovacuum_min_duration = 0 will log every run). A sketch of the PG 16 settings follows.
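
A sketch of applying the PG 16 recommendation; both parameters are sighup-context, so a reload applies them:

    -- PG 16 and earlier: cap per-worker TID storage at the flat array's real
    -- ceiling, and log every autovacuum run so multi-pass cleanups are visible.
    ALTER SYSTEM SET autovacuum_work_mem = '1GB';
    ALTER SYSTEM SET log_autovacuum_min_duration = 0;
    SELECT pg_reload_conf();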