pgBackRest is now unmaintained. If you were running pgBackRest in production — and a lot of people were running pgBackRest in production — what do you actually do now?
The honest answer has three parts. First: the world has not ended. pgBackRest still works. The git repository still exists, the binaries you have installed still take backups, and your cron jobs do not know that David Steele has stopped maintaining the project. You have time to make a considered choice. Second: there is no drop-in replacement. If there were, this post would be five sentences long. There are several tools that cover overlapping subsets of pgBackRest’s feature set, and the right one for you depends on which subset of those features you actually used. Third: a community fork is plausible but not yet real, and you should not plan around vapor.
What follows is a sober, axis-by-axis comparison of the realistic alternatives. I am going to be blunt about what each one cannot do. The PostgreSQL ecosystem has a long-standing habit of recommending tools by listing their features; in this case, what each tool lacks matters more.
What pgBackRest actually did
Before we talk about replacements, we need to be clear about what we are replacing. pgBackRest’s reputation came from a specific combination of capabilities, and most of the alternatives implement some of these but not all of them.
- Parallel backup across multiple cores and multiple network connections. Not novel; most tools do this.
- Parallel restore, including parallel restore of incremental and differential backups. Substantially less common.
- Block-level incremental backup. pgBackRest tracked which 8 KB pages had changed since the last backup and copied only those, not whole relation files. This is the feature that made nightly incrementals on multi-terabyte clusters tractable.
- Delta restore. When restoring, pgBackRest could compare the existing data directory against the backup manifest and only copy back the pages that had changed. The feature that turned a four-hour disaster into a forty-minute disaster.
- Encrypted repository with AES-256, integrated rather than bolted on with a wrapper script.
- Async WAL archiving that decoupled archive_command from the actual upload to remote storage, with bounded queue and back-pressure semantics. This is the feature that prevented a slow S3 endpoint from stalling your primary.
- Repository on local disk, NFS, S3-compatible object storage, Azure Blob, or GCS, all with the same interface and the same operational semantics.
- Compression with multiple algorithms (gzip, lz4, zstd, bzip2) and per-segment compression levels.
- PITR with --target-time, --target-name, --target-xid, --target-lsn — the full menu, and they all worked.
- A reasonable command-line surface and reasonable documentation. Underrated.
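To make the block-level idea concrete, here is a toy sketch of manifest-driven page diffing. This is not pgBackRest's actual implementation, just the shape of the idea: hash each 8 KB page, keep the hashes in a manifest, and on the next backup copy only the pages whose hashes changed.

```python
import hashlib

PAGE_SIZE = 8192  # PostgreSQL's default block size

def page_hashes(data: bytes) -> list[str]:
    """Split a relation file into 8 KB pages and hash each one."""
    return [
        hashlib.sha1(data[i:i + PAGE_SIZE]).hexdigest()
        for i in range(0, len(data), PAGE_SIZE)
    ]

def changed_pages(manifest: list[str], current: bytes) -> list[int]:
    """Return indexes of pages that differ from the manifest.

    Pages beyond the manifest's length (the file grew) always
    count as changed.
    """
    now = page_hashes(current)
    return [
        i for i, h in enumerate(now)
        if i >= len(manifest) or manifest[i] != h
    ]

# Simulate a 4-page relation file, then mutate page 1.
old = bytes(PAGE_SIZE * 4)
manifest = page_hashes(old)
new = old[:PAGE_SIZE] + b"\x01" * PAGE_SIZE + old[2 * PAGE_SIZE:]
print(changed_pages(manifest, new))  # only page 1 needs copying
```

The same manifest drives delta restore in the other direction: compare the live data directory against it and copy back only the pages that diverge, which is why a restore can finish in minutes instead of hours.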
If you used pgBackRest seriously in production, you were not paying attention to all of these features at once, but you were probably depending on at least four of them. The replacement question is: which ones, and which tool gives you those?
The realistic candidates
There are a lot of PostgreSQL backup tools. There are not a lot of production-quality, actively maintained, capable-of-replacing-pgBackRest PostgreSQL backup tools. The list is short.
Barman
The closest functional analogue. Barman has been around almost as long as pgBackRest, comes from EnterpriseDB, and is the tool most pgBackRest refugees will end up evaluating first.
What it does well: native streaming replication-based backup, native WAL archiving, multi-server management from a single backup host, hook scripts for pre/post phases, S3 / Azure / GCS support via barman-cloud, parallel backup, retention policy management, and a backup catalog you can actually reason about. The operational model — a dedicated Barman host that pulls backups from one or more Postgres servers — is clean and well-understood.
What it does not do well: parallel restore is implemented but historically less performant than pgBackRest’s. Block-level incremental backup is supported via rsync mode, which is functional but not in the same league as pgBackRest’s manifest-driven incremental. Delta restore is similarly available via rsync mode and similarly less aggressive. Async WAL archiving exists but the queue semantics are simpler.
If you ran pgBackRest with full backups and trusted retention, Barman is a fine replacement. If you ran pgBackRest with nightly block-level incrementals on a 40 TB cluster because that’s the only way the math worked, Barman is a downgrade you should benchmark before you commit to.
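For a point of reference, a minimal Barman server definition looks roughly like this. The server name, hosts, and retention window are illustrative; check each option against the Barman documentation for your version.

```ini
# /etc/barman.d/main.conf (illustrative)
[main]
description = "Primary cluster"
conninfo = host=pg1 user=barman dbname=postgres
streaming_conninfo = host=pg1 user=streaming_barman
# Take base backups with pg_basebackup over the streaming connection
backup_method = postgres
# Receive WAL continuously via pg_receivewal into the Barman catalog
streaming_archiver = on
slot_name = barman
retention_policy = RECOVERY WINDOW OF 14 DAYS
```

The rsync-based backup_method is the one that unlocks Barman's incremental and delta behavior; if that is what you are evaluating, benchmark that mode specifically, not the streaming one.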
WAL-G
The cloud-first option. Originally a Yandex project, written in Go, designed around object storage from day one.
What it does well: parallel backup and parallel restore, both genuinely fast. Delta backup against a previous full or delta. S3, GCS, Azure, Swift, SSH, and (with caveats) local filesystem support. Encryption via libsodium or PGP. Compression in lz4, lzma, brotli, or zstd. PITR works. The Go binary deploys cleanly into a container. The codebase is healthier than the documentation suggests.
What it does not do well: the documentation. WAL-G’s documentation has improved over the years and is still the weakest part of the experience. Operationally, you will read the source. Block-level incremental, in the pgBackRest sense, is not the model — WAL-G uses delta backups against prior backups, which is functionally similar but not identical, and the failure modes are different. Backup verification is less developed than pgBackRest’s. The CLI is less polished and the error messages have a Yandex-via-translation quality.
If your repository is going to live in S3 (or compatible) and you value restore speed, WAL-G is the strongest candidate. It is also the option I would pick for a Kubernetes-native deployment, because the operational model assumes ephemeral compute and durable object storage rather than a long-lived backup host.
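For flavor, the basic WAL-G wiring looks like the sketch below. The bucket name and data directory are placeholders, and encryption and compression settings are omitted; treat it as orientation, not a deployment guide.

```shell
# Illustrative WAL-G setup; bucket and paths are placeholders.
export WALG_S3_PREFIX="s3://my-backups/pg/main"

# postgresql.conf wires archiving to WAL-G:
#   archive_command = 'wal-g wal-push %p'

# Push a base backup of the data directory to object storage.
wal-g backup-push /var/lib/postgresql/16/main

# Restore the latest backup onto an empty data directory, then let
# recovery replay WAL via:
#   restore_command = 'wal-g wal-fetch %f %p'
wal-g backup-fetch /var/lib/postgresql/16/main LATEST
```

Note how little ceremony there is: no backup host, no catalog daemon, just a binary, environment variables, and a bucket. That is the Kubernetes appeal in one screenful.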
pg_back
This is where I have to be direct with you. pg_back is a competent tool, and I have nothing bad to say about it. But it is not a pgBackRest replacement. It is a thin, well-written wrapper around pg_dump and pg_dumpall with retention and a few quality-of-life features. If you were using pgBackRest because you needed physical backups, parallel restore, block-level incrementals, or PITR, pg_back does none of those things. If you were using pgBackRest's logical-export features (you weren't; that's not what pgBackRest is for), pg_back is a reasonable replacement for that subset.
I list it here because I have already seen it recommended as a pgBackRest alternative on at least three threads, and the recommendation is wrong. Move on.
EDB BART
The Backup and Recovery Tool from EnterpriseDB. Commercial, EDB-customer-oriented, capable. If you are an EDB customer, BART is a credible option and your account team will be happy to discuss it. If you are not, you are not going to become one to solve a backup problem; the licensing economics do not support it. Skip.
Barman + barman-cloud
The cloud variant of Barman. Same engine, S3/Azure/GCS as the storage backend, designed for the case where you do not want a dedicated backup host with local disk. Worth listing separately because the operational model is genuinely different from on-prem Barman, and because for a lot of cloud-native deployments this is the path of least resistance. Same caveats as Barman: not in the same league as pgBackRest on incremental and delta restore.
pg_basebackup with custom WAL archiving
The “I’ll just write it myself” option. pg_basebackup is in-tree, supported, well-tested, and produces a valid base backup. Combined with archive_command to S3, a retention script, and a tested restore procedure, you have a working backup system. For a single-cluster shop with a small data footprint, this is genuinely a reasonable choice.
It is not a reasonable choice for everyone. The custom-glue path is fine until the day you need to do parallel incremental restore against a sharded multi-terabyte cluster, and on that day you will discover that you have spent three years writing pgBackRest badly. If you can articulate why your scale is small enough that the custom-glue approach will hold up, take this path. If you cannot, do not.
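For scale, here is essentially the entire custom-glue system. The bucket is a placeholder, and the error handling, retention, and restore automation are exactly the parts you would be signing up to write yourself.

```shell
# Illustrative custom-glue backup; bucket and paths are placeholders.
# postgresql.conf:
#   archive_mode = on
#   archive_command = 'aws s3 cp %p s3://my-backups/wal/%f'

# Nightly base backup, streamed as a tar to object storage.
# (-D - writes to stdout; stdout output requires -X fetch or -X none,
# so WAL is included in the base tar rather than streamed separately.)
pg_basebackup -h localhost -U replication -D - -Ft -X fetch \
  | zstd \
  | aws s3 cp - "s3://my-backups/base/$(date +%F).tar.zst"

# Restore: fetch and unpack the tar, then set
#   restore_command = 'aws s3 cp s3://my-backups/wal/%f %p'
# and start Postgres to replay WAL to the target.
```

Everything this omits — failed-upload alerting, retention, verification, parallelism — is the gap between a script and a backup system.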
Crunchy Data backup tooling
Crunchy is now Snowflake. Their backup tooling is part of the Crunchy Postgres distribution and the Crunchy Postgres Operator for Kubernetes, both of which are still maintained. Notably, the Crunchy Operator’s backup support is built on pgBackRest, which puts Crunchy in the position of having a strong incentive to keep pgBackRest viable in some form, archived upstream or not. This is a thing to watch but not a thing to plan around.
The decision
A short, honest matrix. None of these are perfect; all of them are real options.
You run on-prem or in a private cloud, you have a dedicated backup host, your data fits the Barman model. Use Barman. Test parallel restore against your actual data volume before you commit: if restore performance is a hard requirement, run a timed restore of your largest cluster and compare it against your pgBackRest baseline. If the gap is acceptable, you have your answer.
You run on a public cloud, your repository lives in S3-or-equivalent, you can run a Go binary. Use WAL-G. Plan for half a week of reading source code and writing wrapper scripts. The end state is solid.
You run on Kubernetes with a Postgres operator. Use what your operator uses. CloudNativePG, Zalando’s postgres-operator, the Crunchy Operator, and StackGres all have opinions; follow theirs. If you are the operator author, this is your week to start paying attention to backup tooling.
You have a single small cluster and a competent operator. pg_basebackup plus archive_command plus a tested restore procedure is fine. Test the restore procedure. Test it again next quarter.
You believed pgBackRest was non-negotiable infrastructure and there is no acceptable substitute. Two options. First: contribute to a fork. Several conversations are happening; the right thing for the ecosystem is for one of them to coalesce around a maintainer with funding. Second: pay someone to maintain it for you. Crunchy may end up effectively in this role; an enterprising consultancy may step up. The piece that has been missing is sustained funding, not technical capability.
What I would do
For a new project starting today: WAL-G if cloud-resident, Barman if on-prem. Both are real, both are maintained, both will still be around in three years.
For an existing pgBackRest deployment that works: do not panic and do not migrate before you have tested. The tool is not going to stop working tomorrow. Run your existing pgBackRest setup. Stand up Barman or WAL-G in parallel. Verify your restore numbers on the new tool against the same dataset. When you are confident, cut over. When you are not confident, do not cut over.
What I would not do is treat this as an emergency. The emergency was Monday. The work starts now, and the work is restoring backups in test environments and writing down the numbers.
That is, depressingly, the actual job.