The classic vacuum-trigger pair, finally. We have spent the last several posts on parameters that modify, cap, or supplement the trigger formula governed by these two; now we get to the originals.

The formula:

vacuum threshold = autovacuum_vacuum_threshold
                 + autovacuum_vacuum_scale_factor × reltuples

When the number of obsoleted tuples (rows updated or deleted) since the last vacuum exceeds this value, autovacuum schedules a VACUUM on the table. On PostgreSQL 18+, the result is also capped by autovacuum_vacuum_max_threshold. Defaults: autovacuum_vacuum_threshold = 50, autovacuum_vacuum_scale_factor = 0.2. Both are sighup, both have per-table storage parameter overrides.
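To see where each table currently sits against this trigger, a query along these lines works. It is a rough sketch: it uses only the global settings, so it ignores per-table reloptions overrides and the PG 18 cap.

-- Approximate per-table vacuum trigger vs. accumulated dead tuples.
-- Global GUCs only; per-table reloptions and the PG 18 cap are ignored.
SELECT s.relname,
       s.n_dead_tup,
       round(current_setting('autovacuum_vacuum_threshold')::numeric
             + current_setting('autovacuum_vacuum_scale_factor')::numeric
               * c.reltuples::numeric) AS vacuum_trigger
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
ORDER BY s.n_dead_tup DESC
LIMIT 20;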

The threshold is the additive floor — small tables get vacuumed at a sensible cadence rather than constantly. The scale factor is the proportional component, and at 0.2 it means “wait until 20% of the table is dead before vacuuming.” For a table of a few thousand rows, fine. For a 100M-row table, waiting for 20M dead tuples is a lot of bloat. For a billion-row table, the math (200M dead tuples) is plainly unreasonable.

If this argument feels familiar, it is — it is the same argument we made for autovacuum_analyze_scale_factor and autovacuum_analyze_threshold, with one important practical difference. ANALYZE is cheap; you can run it as often as you like with no real consequence. VACUUM is not cheap. It reads pages, dirties pages, generates WAL, and contends with workload I/O. Triggering it too aggressively has costs that triggering ANALYZE aggressively does not. The right setting is the one that catches bloat early without producing a vacuum every five minutes on a hot table.

Tuning

The classic per-table override:

ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.02,  -- 2% instead of 20%
  autovacuum_vacuum_threshold = 1000      -- raise the floor
);

For a 100M-row table, this triggers vacuum after roughly 2M dead tuples instead of 20M. For very heavily-updated tables, push the scale factor lower — 0.01 or 0.005. The threshold goes up to keep small/quiet tables from being vacuumed every minute when the scale factor is tiny.
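One way to check which tables already carry overrides like this: per-table storage parameters are stored in pg_class.reloptions (note that the column also includes non-autovacuum settings such as fillfactor).

-- List tables that already have per-table storage parameter overrides.
SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL
  AND relkind = 'r'
ORDER BY relname;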

A few operational considerations:

  • Heavily-indexed tables cost more to vacuum. The vacuum has to scan every index. A table with twelve indexes pays roughly twelve times the per-page cost during the index-cleanup phase. Triggering vacuum more often here is a real tradeoff, not a free win.
  • HOT updates are your friend. If your updates don’t change indexed columns, PostgreSQL can do a HOT (Heap-Only Tuple) update that doesn’t touch indexes at all. Tables with high HOT update ratios tolerate aggressive vacuum tuning much better. pg_stat_user_tables.n_tup_hot_upd tells you how often this is happening; it is the metric to look at before turning the vacuum dial down (a query for the ratio follows this list).
  • PG 18 changes the calculus. With autovacuum_vacuum_max_threshold capping the trigger at 100M by default, you only need per-table scale-factor tuning for tables where 100M dead tuples is itself too much bloat. For tables under 500M rows on PG 18, the default scale factor is now genuinely fine.
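The HOT update ratio mentioned above can be read straight from pg_stat_user_tables:

-- HOT update ratio per table: a high percentage means most updates skip
-- index maintenance, so more frequent vacuums are comparatively cheap.
SELECT relname,
       n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
WHERE n_tup_upd > 0
ORDER BY n_tup_upd DESC
LIMIT 20;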

Versus the new world

If you are on PostgreSQL 17 or earlier, per-table tuning of autovacuum_vacuum_scale_factor for large tables is mandatory. It is the highest-impact piece of vacuum tuning available and is the difference between “the database is healthy” and “we have a bloat problem and don’t know why.”

If you are on PostgreSQL 18, the new max-threshold parameter does most of the same work without per-table configuration. Per-table scale-factor tuning is now reserved for the cases where the 100M cap is also too lenient — heavily-updated OLTP tables where bloat has to be controlled tighter than that.

Recommendation: Leave the globals alone. On PG 17 and earlier, identify any table over a few million rows and set per-table autovacuum_vacuum_scale_factor to 0.02 or smaller, with a generous autovacuum_vacuum_threshold. On PG 18, the same advice applies but only for tables where the 100M default cap is itself too high — which is fewer tables than you’d think. Either way, watch HOT update ratios before tuning aggressively.
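To draw up that candidate list, pg_class.reltuples is a good enough approximation of row count; the 5M cutoff below is illustrative, not a hard rule.

-- Tables large enough that the default 20% scale factor (or, on PG 18,
-- the 100M cap) may allow too much bloat before autovacuum fires.
SELECT n.nspname, c.relname, c.reltuples::bigint AS approx_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND c.reltuples > 5e6
ORDER BY c.reltuples DESC;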