
Last year’s acquisitions have now shipped products, and for the first time it is possible to compare the Snowflake and Databricks “Postgres-in-the-lakehouse” strategies as real things rather than as marketing decks.

The acquisitions: Snowflake bought Crunchy Data in June 2025 (around $250M) and released pg_lake as open source under Snowflake-Labs in November. Databricks bought Neon in May 2025 (around $1B) and launched Lakebase, with AWS GA earlier this year and Azure currently in public preview. Both deals closed within weeks of each other, and they produced two very different architectures.

pg_lake: bring the lake to Postgres

pg_lake is a Postgres extension that reads and writes Iceberg tables and raw object-store files (S3, GCS, Azure Blob) directly from a Postgres backend. It is closer to a very capable foreign data wrapper than to anything else. Queries against Iceberg tables go through the Postgres planner and executor. Transactions remain Postgres transactions — but they commit changes to local Postgres relations, not to the Iceberg tables themselves. Reads of Iceberg data are federated at query time.

Snowflake Postgres, the managed offering on top of this, reached GA earlier this spring. The pitch is clean: you write Postgres, your SELECT joins a local OLTP table against an Iceberg table sitting in S3, and the Postgres planner sees both.
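A minimal sketch of what that looks like, with one caveat up front: the table names, the S3 path, and the DDL shape below are assumptions for illustration, not pg_lake's documented syntax, which you should check against the Snowflake-Labs repo.

```sql
-- Hypothetical names and DDL throughout; pg_lake's real syntax may differ.

-- An ordinary local OLTP table.
CREATE TABLE orders (
    order_id    bigint PRIMARY KEY,
    customer_id bigint NOT NULL,
    placed_at   timestamptz NOT NULL
);

-- An Iceberg table in object storage, surfaced to the Postgres planner.
CREATE TABLE order_events ()
    USING iceberg
    WITH (location = 's3://analytics-bucket/warehouse/order_events');

-- One query, both worlds: the planner sees the local heap table and the
-- federated Iceberg scan in the same plan.
SELECT o.customer_id, count(*) AS events
FROM orders o
JOIN order_events e ON e.order_id = o.order_id
WHERE o.placed_at > now() - interval '7 days'
GROUP BY o.customer_id;
```

The point of the sketch is the last statement: a single SELECT whose inputs live in two different storage systems, costed by one planner.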

Lakebase: bring Postgres to the lake

Lakebase is Neon. Neon’s architecture — separated storage and compute, copy-on-write branching, a pageserver fronting an object-store backend — has been rehomed onto Databricks’ infrastructure. The result is a serverless Postgres that lives inside the lakehouse rather than bolting onto it. Operational data and analytical data are in the same system because the same storage layer holds both.

April 2026 added customer-managed keys for Lakebase Autoscaling projects — compliance plumbing more than an architectural shift, but it moves Lakebase into enterprise-gated territory.

Which one answers which question

These are not the same product dressed differently. They solve different problems.

pg_lake is the right answer when you have a working OLTP Postgres, a working lakehouse, and you want them to join at query time. The Postgres planner is the cost surface, and federation means transaction boundaries stop at the Iceberg edge. If your analytical tables are actually analytical — big, append-mostly, queried in aggregate — this works well. If your analytical tables are hot OLTP tables that somebody has been promising to “move to the warehouse” for three quarters, pg_lake is going to disappoint you.

Lakebase is the right answer when you want operational workloads to live inside the lakehouse from day one. Storage-compute separation and branching matter enormously if your use case is agentic: spin up a fresh branch, let an agent beat on it, throw it away. That workload does not land on pg_lake nearly as cleanly.
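Neon’s upstream CLI makes the shape of that workflow concrete. The commands below are Neon’s (`neonctl`), shown only to illustrate the primitive Lakebase inherits; Lakebase has its own management surface, and the project and branch names here are placeholders.

```shell
# Illustrative only: neonctl is Neon's upstream CLI, not Lakebase tooling.
# Project and branch names are placeholders.

# Branch the database: copy-on-write, so this is cheap and near-instant.
neonctl branches create --project-id my-project --name agent-scratch

# Hand the agent a connection string scoped to the branch.
neonctl connection-string agent-scratch

# ... let the agent run migrations, write data, make a mess ...

# Throw the branch away; the parent is untouched.
neonctl branches delete agent-scratch --project-id my-project
```

The branch-per-agent pattern is the whole pitch: each agent gets a disposable, fully writable copy of production-shaped data, and discarding it costs nothing.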

Neither is a replacement for the other, and neither is a replacement for a plain self-hosted Postgres. The failure domain, the planner surface, and the transactional semantics are different enough that the choice is architectural, not a matter of vendor preference.

What to do

Pick the question first. If the answer is “my OLTP and my lake are different systems and I want Postgres to query both,” look at pg_lake. If the answer is “I want Postgres-the-OLTP to live inside my lakehouse,” look at Lakebase. Do not let a sales deck tell you they are competing on the same axis.