In the last twelve months, three of the biggest data-platform companies have shipped a Postgres-flavored database with a custom storage layer and a “scale-out compute, shared storage” architecture. Snowflake Postgres is GA, built on the Crunchy Data team’s work, with pg_lake as the lakehouse hook. Databricks Lakebase is GA on AWS and in public preview on Azure, built on the Neon engine plus the Mooncake integration work. Azure HorizonDB is in invite-only preview and is the most aggressive of the three architecturally: Microsoft built their own engine, claims up to 3,072 vCores and 128 TB databases, and benchmarks at 3× the throughput of stock Postgres on OLTP workloads.

All three are wire-compatible with Postgres. None of them is Postgres in the sense that matters to you. The question isn’t which one is best — they’re targeting overlapping but distinct workloads, and the meaningful answer depends on questions about your environment, not theirs.

The actual decision question

The honest first question is: what data platform are you already standardized on? If your analytics warehouse is Snowflake, the answer is Snowflake Postgres or no managed cloud-native PG. If your analytics platform is Databricks, the answer is Lakebase or no managed cloud-native PG. If you’re an Azure shop running on VMs and getting tired of it, the answer is HorizonDB or one of the others over private link with an asterisk attached.

The marketing materials will tell you each is the future of operational and analytical convergence. They are correct, in the sense that any of them will achieve that convergence — within the platform you’ve already paid for. The cross-platform story for all three is identical: it’s a cross-cloud egress bill with extra steps.

That’s the framework. The rest of this is the technical color you need to make the call.

What each one actually is

Snowflake Postgres is the most “Postgres-like” of the three. The engine is recognizably PG, the extension story is reasonable, and the lakehouse integration via pg_lake is genuinely well-designed. pg_lake is open source and works against any Postgres, which means the in-Snowflake version isn’t a captive feature — you can prototype on stock PG and migrate. The pitch is “your operational data lives next to your analytical data, and the operational side is a real Postgres.” That pitch holds up. The cost is that you are now buying Postgres from Snowflake, and Snowflake’s pricing is Snowflake’s pricing.
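Because pg_lake is open source, that prototype path is cheap to sanity-check before you buy anything. A minimal probe on stock Postgres, using the standard extension catalog (the exact extension names pg_lake registers are an assumption here — confirm against its repo):

```sql
-- List what this server can install. pg_lake may register more than one
-- extension, so match on the prefix (the 'pg_lake%' name is an assumption;
-- check the pg_lake repo for the exact extensions it ships).
SELECT name, default_version, comment
FROM pg_available_extensions
WHERE name LIKE 'pg_lake%';

-- If present, install and prototype against stock Postgres first,
-- before committing to the in-Snowflake version.
CREATE EXTENSION IF NOT EXISTS pg_lake;
```

The same catalog query, run against any of the three managed services, is also the fastest way to find out which extensions each one actually supports.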

Lakebase is the most interesting of the three for developers. The Neon-derived branching model is a real feature: instant database branches for CI/CD, point-in-time recovery as a normal operation rather than a disaster procedure, separation of compute from storage in a way that makes scale-to-zero cheap. The pitch is “Postgres for the AI era,” which is marketing for “Postgres next to your Databricks workspace.” It is a good product if you live in Databricks. It is a strange product if you don’t.

Azure HorizonDB is the most architecturally ambitious. Microsoft did not buy a Postgres company; they built a from-scratch storage engine that speaks the Postgres wire protocol and SQL surface. The performance numbers, if they hold up under independent testing, are credible — shared-storage / scale-out-compute architectures genuinely do beat single-primary Postgres at the largest scales. The cost is that “wire compatible” and “actually Postgres” are different things, and the gap between them matters in proportion to how much extension and tooling surface you depend on.

What you actually lose

This is the part that gets glossed over in vendor materials. With each of these, you lose some combination of:

  • Extensions. Each fork supports a subset. PostGIS support is generally good. Less common extensions are coin-flips. Anything that ships its own background worker is a coin-flip with a thumb on the scale.
  • Logical replication. All three handle this differently. Snowflake Postgres comes closest to stock behavior; Lakebase’s branching model and HorizonDB’s shared-storage architecture both have implications for logical decoding that are not fully documented yet. If you’re running logical replication out today, that’s the first thing to test.
  • Operational tooling. pg_basebackup doesn’t apply. pgBackRest doesn’t apply. Patroni doesn’t apply. Your existing operational muscle memory is mostly transferable for queries and largely useless for everything else.
  • Predictable upgrade paths. Each vendor controls when you move PG versions. You don’t get to test PG 19 on your schedule.
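Testing logical replication out of a candidate service doesn’t require standing up a subscriber. A smoke test using only built-in Postgres functions will tell you immediately whether logical decoding works at all (requires `wal_level = logical` and a role with replication privileges; the slot and table names below are arbitrary):

```sql
-- Probe: can this service create a logical replication slot at all?
-- On a service that restricts logical decoding, this fails immediately.
SELECT pg_create_logical_replication_slot('compat_probe', 'test_decoding');

-- Generate a change and peek at the decoded stream without consuming it.
CREATE TABLE IF NOT EXISTS probe_t (id int PRIMARY KEY);
INSERT INTO probe_t VALUES (1);
SELECT data FROM pg_logical_slot_peek_changes('compat_probe', NULL, NULL);

-- Clean up.
SELECT pg_drop_replication_slot('compat_probe');
DROP TABLE probe_t;
```

If the slot creates and the peek shows decoded BEGIN/INSERT/COMMIT rows, the basics work; whether slots survive the vendor’s failover, branching, or storage machinery is the deeper question, and the one the docs don’t yet answer.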

What you actually gain

Operational scale you would not get from running Postgres yourself, and none of the individual claims here is small. Multi-zone commit at the latency Microsoft is quoting for HorizonDB is genuinely hard to replicate. Lakebase’s branching is genuinely useful. Snowflake’s lakehouse-OLTP integration is genuinely tighter than the alternative of “ETL between Postgres and Snowflake.”

You also gain a vendor relationship, with everything that implies in both directions.

The recommendation

Pick the one whose adjacent data platform you already use, and do not pretend you have a choice you don’t have. If you have no adjacent data platform, run actual Postgres on actual instances or use one of the conventional managed services (Aurora, Cloud SQL, Azure Database for PostgreSQL, Crunchy Bridge, EDB, pgEdge). The cloud-native scale-out story is real, but it is a real story for a small fraction of workloads. Most production Postgres still fits comfortably on a single beefy primary with a couple of replicas, and “I might need 3,000 vCores someday” is not a reason to buy a database today.

The interesting development isn’t that there are three of these. It’s that all three are happening in the same eighteen months. The shared-storage scale-out architecture for Postgres has converged into a real category. Watch how it shakes out. Don’t be the first one to bet your operational stack on the preview.