postgresql when it's not your job

Do not change autovacuum age settings

18:52, 8 February 2019

PostgreSQL has two autovacuum age-related settings: autovacuum_freeze_max_age and vacuum_freeze_table_age.

Both of them are in terms of the transaction “age” of a table: that is, how long it has been since the table was last scanned completely for “old” tuples that can be marked as “frozen” (a “frozen” tuple is one that no open transaction can cause to disappear by a rollback). In short, the “oldest” a table can become in PostgreSQL is 2^31-1 transactions; if a table were ever to reach that, data loss would occur. PostgreSQL takes great pains to prevent you from reaching that point.
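
This “age” can be read directly out of the system catalogs; as a minimal illustrative sketch, here are the ten oldest tables in the current database:

    -- The ten oldest tables in the current database, by transaction age.
    -- age(relfrozenxid) is the number of transactions since the table was
    -- last completely frozen.
    SELECT c.oid::regclass AS table_name,
           age(c.relfrozenxid) AS xid_age
    FROM pg_class c
    WHERE c.relkind IN ('r', 'm', 't')  -- tables, materialized views, TOAST
    ORDER BY age(c.relfrozenxid) DESC
    LIMIT 10;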

The “vacuum freeze” process is the process that scans the table and marks these tuples as frozen.

vacuum_freeze_table_age causes a regular autovacuum run to be an “autovacuum (to prevent xid wraparound)” run, that is, an (auto)vacuum freeze, if the age of the table is higher than vacuum_freeze_table_age.

autovacuum_freeze_max_age will cause PostgreSQL to start an “autovacuum (to prevent xid wraparound)” run even if it has no other reason to vacuum the table, should a table age exceed that setting.

By default, vacuum_freeze_table_age = 150000000 (one hundred fifty million), and autovacuum_freeze_max_age = 200000000 (two hundred million).
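
To see how close your tables are getting to those thresholds, a quick sketch using only the standard catalogs:

    -- Compare each table's age to the two settings. current_setting()
    -- returns text, so it is cast to bigint here.
    SELECT c.oid::regclass AS table_name,
           age(c.relfrozenxid) AS xid_age,
           current_setting('vacuum_freeze_table_age')::bigint AS freeze_table_age,
           current_setting('autovacuum_freeze_max_age')::bigint AS freeze_max_age
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY age(c.relfrozenxid) DESC
    LIMIT 10;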

Do not change them.

In the past, I made a recommendation that I now deeply regret. Because, before 9.6, each autovacuum freeze run scanned the entire table, and (on its first pass) potentially rewrote the entire table, it could generate very heavy I/O, and when it woke up unexpectedly, it could cause performance problems. I thus recommended two things:

  1. Increase autovacuum_freeze_max_age and vacuum_freeze_table_age, and,
  2. Do manual VACUUM FREEZE operations on the “oldest” tables during low-traffic periods.

Unfortunately, far too many installations adopted recommendation #1, but didn’t do #2. The result was that they cranked up autovacuum_freeze_max_age so high that, by the time the mandatory autovacuum freeze operation began, they were so close to the transaction XID wraparound point that they had no choice but to take the system offline and do the operation in single-user mode.

Thus, I am forever rescinding that advice. Especially now that freezing is incremental as of 9.6, the I/O penalty of an autovacuum freeze is greatly reduced, while the consequences of not doing it are severe.

Don’t increase those parameters. Let autovacuum freeze do its job. If you want to stay ahead of it, we have a script to do opportunistic freezing that might be helpful.
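
The idea in miniature (just a sketch here, not that script): generate VACUUM FREEZE statements for the oldest tables and run them during a quiet window, for example with psql’s \gexec:

    -- Emit a VACUUM FREEZE statement for the five oldest tables, to be run
    -- during a low-traffic period (e.g. via \gexec in psql).
    SELECT format('VACUUM FREEZE %s;', c.oid::regclass)
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY age(c.relfrozenxid) DESC
    LIMIT 5;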

Keith at 21:10, 8 February 2019:

As long as you’re monitoring for transaction ID exhaustion, I don’t see any issue with increasing autovacuum_freeze_max_age. I’ve never increased the other one myself. I wouldn’t go setting it above 50% of exhaustion (1 billion), but for databases with really high transaction rates, 200 million is really conservative. Flat out saying to never increase it is a bit too aggressive of a position, I think. I agree that 9.6+ has made it much less of a concern now, but it really depends on the system involved.

I think the better statement would be to never touch these settings unless you’re actually monitoring for exhaustion/wraparound.
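
A minimal sketch of that kind of monitoring is a periodic check of per-database age against the roughly two-billion-transaction limit:

    -- Per-database transaction age and the fraction of the ~2^31-1 budget
    -- consumed; values that keep climbing mean freezing is not keeping up.
    SELECT datname,
           age(datfrozenxid) AS xid_age,
           round(100.0 * age(datfrozenxid) / 2147483647, 1) AS pct_of_limit
    FROM pg_database
    ORDER BY age(datfrozenxid) DESC;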

Xof at 21:13, 8 February 2019:

Well, sure, but: If the installations had followed the original advice of doing preventative off-peak vacuum freezes, they wouldn’t have gotten into trouble, either.

At this point, I don’t see a particular benefit in raising the values.

Walex at 14:39, 10 February 2019:

«Do manual VACUUM FREEZE operations on the “oldest” tables during low-traffic periods.
Unfortunately, far too many installations adopted recommendation #1, but didn’t do #2.»

Fascinatingly, pretty much the same thing was reported by Stonebraker 38 years ago in “Retrospection on a database system” (section 4.5, page 235), where he says that he would not choose static (ISAM-style) indices again because DBAs cannot be relied upon to configure them right or rebuild them periodically, concluding that “database administrators often had trouble with performance decisions and we now believe that they should be relieved of all possible choices”.