These two parameters close out the bgwriter cluster. Together with bgwriter_delay, they govern how the background writer decides what to write each round, and they are where the actual leverage lives — the previous post ended by saying so explicitly. Here is why.

The bgwriter’s algorithm, in one paragraph

Each cycle, the bgwriter estimates how many buffers backends will need in the near future by averaging the number of new buffers requested over recent cycles. It multiplies that average by bgwriter_lru_multiplier to get a target number of clean buffers to make available. It then scans forward through the buffer pool in clock-sweep order (PostgreSQL's approximation of LRU, starting from the buffers most likely to be evicted next), writing out dirty pages until it has produced that many reusable buffers, or until it has written bgwriter_lru_maxpages pages, whichever comes first. Then it sleeps for bgwriter_delay and repeats.

So: bgwriter_lru_multiplier is the demand-prediction scaling factor, and bgwriter_lru_maxpages is the per-cycle work cap. The first decides how much the bgwriter wants to do; the second decides how much it’s allowed to do.
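
To make that concrete, here is one hypothetical cycle at the defaults, with a made-up demand figure:

  recent average demand: 120 buffers/cycle
  target = 120 × 2.0 = 240 clean buffers    (bgwriter_lru_multiplier)
  cap    = 100 pages                        (bgwriter_lru_maxpages)

The bgwriter stops at 100 writes, well short of its 240-buffer target, and increments maxwritten_clean. At these numbers the cap binds, not the multiplier.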

bgwriter_lru_maxpages

Maximum number of buffers the bgwriter will write in a single cycle. Default 100. Range 0 to 1073741823 (effectively unlimited). Setting it to 0 disables the bgwriter entirely — all dirty-buffer writes then fall to backends or the checkpointer.

The 100-page default, combined with the 200ms cycle delay, gives the ~4MB/sec ceiling mentioned last post (100 pages × 8KB × 5 cycles/sec). On modern hardware this is conservative to the point of absurdity. The diagnostic for whether this cap is your bottleneck is pg_stat_bgwriter.maxwritten_clean, a counter incremented every time the bgwriter stopped a round because it hit the maxpages cap rather than because it had satisfied predicted demand. A consistently rising maxwritten_clean means the bgwriter has more work it wants to do and is being forbidden from doing it.
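
A minimal way to watch it, using nothing beyond the stock view: sample the counter a few minutes apart and compare.

  SELECT maxwritten_clean, now() AS sampled_at
    FROM pg_stat_bgwriter;

If the counter climbs between every pair of samples under steady load, the cap is binding.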

Reasonable starting values on contemporary servers: 500 for a moderately busy OLTP workload, 1000 or higher for write-heavy systems with good storage. The range's upper bound is high enough that the parameter is effectively a knob, not a wall.

bgwriter_lru_multiplier

Multiplier applied to the recent buffer-demand average to set the target. Default 2.0. Range 0 to 10.0.

The default of 2.0 means: “make twice as many clean buffers available as recent rounds have needed.” A setting of 1.0 would be just-in-time — barely enough to satisfy projected demand, no cushion. Higher values build more cushion at the cost of writing pages that may not end up being needed.

The argument for raising it: bursty workloads where short spikes in buffer demand exceed the recent average, and you’d rather the bgwriter pre-clean a larger margin than have backends fall back to doing their own writes during the spike. The argument against: every buffer the bgwriter writes that doesn’t end up evicted is wasted I/O, and on a write-heavy workload that can include the same page written multiple times before the checkpointer would have flushed it once.

Reasonable values: 3.0 or 4.0 for bursty workloads. Higher than 5.0 is hard to justify. Lower than 2.0 is asking for trouble.

How they interact

bgwriter_lru_multiplier controls the demand signal — how much the bgwriter would write if unconstrained. bgwriter_lru_maxpages is the capacity ceiling on that signal. If your maxwritten_clean counter is rising, the multiplier doesn’t matter — you’re hitting the cap and need to raise the cap first. If maxwritten_clean is flat but buffers_backend is high, the cap isn’t the issue; the bgwriter is satisfied with the work it predicted but the prediction is too conservative for your actual demand. Raise the multiplier.

The diagnostic sequence (a query sketch follows the list):

  1. Check pg_stat_bgwriter. Is buffers_backend high relative to buffers_clean?
  2. If yes: is maxwritten_clean rising consistently?
  3. If yes: raise bgwriter_lru_maxpages first. Re-check.
  4. If no (maxwritten_clean is flat but backends are still writing): raise bgwriter_lru_multiplier. Re-check.
  5. If both are tuned and backends are still writing: your shared_buffers is undersized, or your workload’s working set genuinely exceeds the cache, or your checkpointer is the actual culprit.
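
A sketch of step 1 as a single query. Note that on PostgreSQL 17 and later, buffers_backend moved out of pg_stat_bgwriter into pg_stat_io, so this form applies to 16 and earlier:

  SELECT buffers_clean,
         buffers_backend,
         round(buffers_backend::numeric / nullif(buffers_clean, 0), 2)
           AS backend_to_clean_ratio,   -- alias coined here, not a real column
         maxwritten_clean,
         stats_reset
    FROM pg_stat_bgwriter;

A backend_to_clean_ratio well above 1 is the "yes" in step 1; maxwritten_clean then decides between steps 3 and 4.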

What to actually do

For a moderately busy production database in 2026:

bgwriter_delay = 200ms
bgwriter_lru_maxpages = 500
bgwriter_lru_multiplier = 3.0
bgwriter_flush_after = 512kB
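
All four are reload-level settings, so they take effect without a restart. One way to apply them, assuming superuser access:

  ALTER SYSTEM SET bgwriter_lru_maxpages = 500;
  ALTER SYSTEM SET bgwriter_lru_multiplier = 3.0;
  SELECT pg_reload_conf();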

For a write-heavy OLTP workload on modern NVMe:

bgwriter_delay = 100ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 4.0
bgwriter_flush_after = 512kB

These are starting points, not destinations. Measure pg_stat_bgwriter for a week, then adjust.
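
For that week of measurement, one approach is to snapshot the view on a schedule and diff the counters afterwards. A sketch; bgwriter_snapshots is a table name invented for this post:

  -- one-time setup: an empty copy of the view's columns plus a timestamp
  CREATE TABLE IF NOT EXISTS bgwriter_snapshots AS
    SELECT now() AS taken_at, * FROM pg_stat_bgwriter LIMIT 0;

  -- run hourly from cron, pg_cron, or similar
  INSERT INTO bgwriter_snapshots
    SELECT now(), * FROM pg_stat_bgwriter;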

Recommendation: The defaults are too conservative for almost every production workload running on hardware built since the Obama administration. Start at the moderate-workload settings above; raise further if pg_stat_bgwriter shows maxwritten_clean climbing or buffers_backend consistently dwarfing buffers_clean. The bgwriter is one of the few PostgreSQL subsystems where the answer to “should I tune this?” is almost always “yes, a little.”