PostgreSQL 19 widens MultiXactOffset to 64 bits. The ceiling that has periodically taken down high-concurrency, FK-heavy clusters is gone — not raised, eliminated. If you have ever been paged at 3 a.m. because your monitoring noticed pg_multixact/members/ filling up faster than autovacuum could reclaim it, this is for you.

What was actually broken

PostgreSQL tracks row-level locks taken by multiple transactions on the same tuple using MultiXacts. Each MultiXact gets a 32-bit MultiXactId, and the actual list of member XIDs (plus their lock status flags) is stored separately in the pg_multixact/members/ SLRU. To find a MultiXact’s member list, the system uses a MultiXactOffset — a cursor into the members array, also 32 bits.
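The offsets/members split is easiest to see as two parallel arrays. Here is a toy Python sketch of the lookup path — purely illustrative data and function names; the real structures live in SLRU segment files on disk, and a multixact's member list is delimited by the next multixact's starting offset:

```python
# Toy model of the pg_multixact lookup path (illustrative only).
# offsets[mxid] is the MultiXactOffset where that mxid's member list
# starts; the next mxid's offset delimits the list, which mirrors how
# the server finds a member list from its starting offset.

def members_of(mxid, offsets, members):
    """Return the (xid, lockmode) member list for a MultiXactId."""
    start = offsets[mxid]
    end = offsets[mxid + 1] if mxid + 1 < len(offsets) else len(members)
    return members[start:end]

# Two multixacts: mxid 0 has 2 members, mxid 1 has 3.
offsets = [0, 2]
members = [(101, "keysh"), (102, "keysh"),                # mxid 0
           (103, "keysh"), (104, "sh"), (105, "keysh")]   # mxid 1

assert members_of(0, offsets, members) == [(101, "keysh"), (102, "keysh")]
assert len(members_of(1, offsets, members)) == 3
```

Note that the offset is consumed per member, not per multixact — which is exactly why it was the scarcer resource.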

That offset was the problem. MultiXactId wraparound has always been on the same conceptual footing as XID wraparound — annoying, but well-understood, and autovacuum_multixact_freeze_max_age exists precisely to keep it managed. The members offset was a different beast. A single MultiXact can hold many members. A workload that piles up dozens of lockers per row — most often FK-heavy schemas where many transactions hold a row in KEY SHARE mode against a referenced parent — burns through offset space at a multiple of the rate it burns through MultiXactId space.
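The arithmetic behind "a multiple of the rate" is worth making concrete. Both ID spaces were 2^32 before PG 19, but offset space is consumed per member while MultiXactId space is consumed per multixact, so the average member count per multixact is the burn multiplier (a back-of-the-envelope sketch, not a precise model — real consumption also depends on freezing and reuse):

```python
# Back-of-the-envelope: offset space burns per *member*, mxid space
# per *multixact*, so avg members-per-multixact is the burn multiplier.
MXID_SPACE = 2**32       # 32-bit MultiXactId space (unchanged in PG 19)
OFFSET_SPACE_32 = 2**32  # pre-19 member offset space

def multixacts_until_offset_exhaustion(avg_members_per_mxid):
    """How many multixacts fit before the 32-bit offset cursor runs out."""
    return OFFSET_SPACE_32 // avg_members_per_mxid

# With ~10 lockers per row on average, offsets are exhausted after
# ~429M multixacts -- an order of magnitude before MultiXactId space.
assert multixacts_until_offset_exhaustion(10) == 429_496_729
assert multixacts_until_offset_exhaustion(10) < MXID_SPACE
```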

The first time you hit this in production, the symptom is not subtle:

ERROR:  multixact "members" limit exceeded
DETAIL:  This command would create a multixact with 2 members, but the
remaining space is only enough for 0 members.
HINT:  Execute a database-wide VACUUM in database with OID NNN with reduced
vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age settings.

Translation: the cluster is now refusing to take row locks until you vacuum your way out. On a large cluster with active foreign keys, “vacuum your way out” can mean hours of emergency work while writes pile up behind the lock-acquire path. It is a genuinely bad failure mode, and it is one of the few places where stock PostgreSQL goes from apparently healthy to “you should have noticed this last week” with very little warning.

What PG 19 changes

The MultiXactOffset type is now 64-bit. The on-disk SLRU segment structure changes accordingly, and pg_upgrade rewrites pg_multixact/offsets for you on upgrade — automatically, but not instantly. On a cluster with a long multixact history this is the slowest part of the upgrade. Plan the maintenance window with that in mind, and do not be surprised if your test environment (which has effectively no multixact history) finishes pg_upgrade in seconds while production takes minutes per terabyte. Rehearse the upgrade against a production-shaped clone if you have one. If you don’t, take a fresh pg_basebackup, run pg_upgrade against it, and time it.
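A quick way to size the rewrite before the window is to measure the offsets SLRU on disk — that is the data pg_upgrade has to read and re-emit in the wider format. A small sketch (the example path is hypothetical; it assumes standard build parameters, where each SLRU segment file is 32 pages of 8 kB, i.e. 256 kB):

```python
# Rough sizing input for the pg_upgrade rewrite of pg_multixact/offsets.
# Assumes standard build parameters (8 kB pages, 32 pages per SLRU
# segment => 256 kB per segment file); adjust if your build differs.
import os

def offsets_dir_bytes(path):
    """Total on-disk size of the offsets SLRU directory."""
    return sum(
        os.path.getsize(os.path.join(path, f))
        for f in os.listdir(path)
        if os.path.isfile(os.path.join(path, f))
    )

# Hypothetical path -- point it at your real data directory:
# offsets_dir_bytes("/var/lib/postgresql/18/main/pg_multixact/offsets")
```

Combine this number with the pg_upgrade timing from your rehearsal run to get a bytes-per-second figure you can extrapolate to production.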

The members file (pg_multixact/members/) does not change shape, but its addressable space does. With a 64-bit offset, you will not be the person who exhausts it.

The autovacuum machinery for MultiXactId itself is unchanged. autovacuum_multixact_freeze_max_age, vacuum_multixact_freeze_table_age, and the rest still exist and still do what they did. What is gone is the member-space emergency — the case where you have plenty of headroom on MultiXactId but the offset cursor has run out of room. In PG 19, that scenario does not exist. The aggressive vacuum that ran specifically against member exhaustion no longer needs to.

Tuning implications

Less than you might hope, and this is a feature.

If you previously had aggressive autovacuum_multixact_freeze_max_age settings (100M instead of the 400M default) specifically because you were worried about member-space pressure, you can probably back them off. Probably. Verify against your actual pg_multixact/members/ growth rate first. If your driving constraint was always MultiXactId space rather than member space, you do not get to relax anything.
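One way to verify that growth rate is to sample pg_controldata twice and diff the "Latest checkpoint's NextMultiOffset" line — that counter advances once per member consumed. A sketch (the field label is pg_controldata's real output; the sample values below are invented for illustration):

```python
# Estimate member-offset consumption by diffing two pg_controldata
# snapshots taken a known interval apart.
import re

def next_multi_offset(controldata_text):
    """Extract NextMultiOffset from pg_controldata output."""
    m = re.search(r"Latest checkpoint's NextMultiOffset:\s+(\d+)",
                  controldata_text)
    return int(m.group(1))

def members_per_second(sample_a, sample_b, seconds_apart):
    return (next_multi_offset(sample_b)
            - next_multi_offset(sample_a)) / seconds_apart

# Invented sample values, one hour apart:
a = "Latest checkpoint's NextMultiOffset:  1200000\n"
b = "Latest checkpoint's NextMultiOffset:  1560000\n"
assert members_per_second(a, b, 3600) == 100.0
```

Note the counter only advances at checkpoints, so sample over hours, not minutes, and remember it can wrap on pre-19 clusters.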

If you had alerts that watched pg_multixact/members/ size on disk, keep them — disk consumption is still a real thing, and VACUUM still reclaims member space. What you can drop is the second-derivative alert on members offset consumption rate. That number no longer matters.

The one place to be slightly careful: monitoring queries that compute mxid_age() or read pg_controldata for NextMultiOffset need to handle the wider type. Most won’t notice, but anything that pulls these values into a 32-bit unsigned column in your monitoring database will. Check before you upgrade rather than after.
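The failure mode there is silent truncation, not an error. A 32-bit unsigned column stores the true value modulo 2^32, so a perfectly healthy PG 19 offset can read back as near-zero in your dashboards — a two-line demonstration:

```python
# Why a u32 column silently corrupts a 64-bit NextMultiOffset:
# what gets stored is the true value modulo 2**32.
def as_uint32(value):
    return value & 0xFFFFFFFF   # what a 32-bit unsigned column keeps

true_offset = 2**32 + 5        # a perfectly legal PG 19 offset
assert as_uint32(true_offset) == 5   # dashboard now shows near-zero
```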

Should you care if you’ve never hit this?

If your workload uses lots of foreign keys and a high concurrent insert/update rate against parent tables, yes. You probably have not noticed because you have not yet hit the ceiling. PG 19 means you won’t.

If your workload is OLAP-shaped, single-writer, or otherwise low on row-level lock contention, this changes nothing observable. The ceiling was never close.

For anyone running a large multi-tenant SaaS on PostgreSQL — high write concurrency, heavy FK usage, lots of SELECT ... FOR KEY SHARE either explicitly or implicitly via the foreign-key trigger machinery — this is a much bigger deal than it sounds in the release notes.

The 32-bit member offset was one of the last places where PostgreSQL had a hard, non-negotiable ceiling that could be hit in production rather than in theory. PG 19 takes it off the board.