postgresql when it's not your job

28 October 2010

11:24

Small PostgreSQL Installations and 9.0 Replication

Yesterday, I commented on a post about how widespread uptake of 9.0 replication will be. I disagreed with the assessment that “users” (by which we mean small installations of PostgreSQL, defined however you care to) will not be interested in 9.0’s hot standby/streaming replication.

Ultimately, of course, we’ll find out. But I strongly feel that 9.0’s streaming replication will be a big deal for small PostgreSQL installations… indeed, I think it will be a much bigger deal for them than for big ones.

First, I’ll happily exclude hobbyist and developer installs of PostgreSQL. I don’t back up my development PG databases more often than once a day, and I certainly don’t have any kind of replication set up for them (unless that’s what I’m developing). The important part, the code, lives in a DVCS, and if I had to reconstruct the db from scratch, no big deal… indeed, I do it all the time.

I’m talking about small installations of PG that are used as the authoritative record of business-critical information: Web site transactions, for example. The fact that, traditionally, these users of PG haven’t been all that into replication solutions has nothing to do with their actual need for replication; instead, it has to do with the solutions they had available.

So, they make do with pg_dumpall and hope for the best… and then call someone like us if that doesn’t work.

But it is fallacious to conclude that because they are not using replication right now, they have no use for it. Ask a corner liquor store if they could afford to have an entire day’s worth of electronic transactions just vanish; I’ll bet a bottle of something cheap off their shelves that the answer would be, “Of course not.” It might not be worth a $15,000 consulting engagement to set it up, but it’s worth something, possibly quite a bit.

Indeed, this is one of the things that’s driving adoption of “cloud computing”: The (sometimes erroneous) idea that the cloud provider is managing disaster recovery and high availability for you, included in the cost of your monthly service charge.

tl;dr: PG 9.0’s streaming replication will be widely adopted by smaller installations that use PG to manage business-critical data, specifically because it makes replication something a casual DBA can set up, something we’ve not had before with PG.
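To make that concrete, here’s roughly what a minimal 9.0 streaming replication setup looks like. This is a sketch: the hostname, address, and rep_user role are invented for illustration, and it assumes you’ve seeded the standby with a base backup of the primary (pg_start_backup(), a filesystem copy, pg_stop_backup()).

    # On the primary, in postgresql.conf:
    wal_level = hot_standby      # emit enough WAL detail for a hot standby
    max_wal_senders = 3          # connection slots for standbys
    wal_keep_segments = 32       # retain WAL so the standby can catch up

    # On the primary, in pg_hba.conf (in 9.0, the replication role
    # must be a superuser):
    host  replication  rep_user  192.0.2.10/32  md5

    # On the standby, in postgresql.conf:
    hot_standby = on             # allow read-only queries during recovery

    # On the standby, in recovery.conf:
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com user=rep_user password=secret'

That’s the whole thing: a handful of settings, no triggers, no per-table configuration.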

27 October 2010

21:38

Users Want Functionality, Not Features

Over at the Command Prompt blog, Joshua Drake makes a (probably deliberately) provocative point about “users” not wanting replication, as opposed to “customers” who do. I’ll confess I’m not 100% sure about his distinction between “users” and “customers,” so I’ll just make something up: Users are the people sitting in front of the application, entering data, buying shoes, or doing whatever it is that the database enables; customers are the CIOs, CTOs, Directors of Engineering, and the other people who make purchasing decisions.

He writes:

Yes, Command Prompt customers want replication. Yes, PostgreSQL Experts, EnterpriseDB and OmniTI customers want replication. However, customers are not users. At least not in the community sense, and the users in the community, the far majority of them do not need or want replication. A daily backup is more than enough for them.

Well, yes, as far as it goes, he’s absolutely right. Users don’t need or want replication. They don’t need or want PostgreSQL, for that matter; VSAM, flat files, or a magic hamster would be fine with them, too, as long as the data that comes out is the data that goes in.

But for how many users, really, is “It’s OK if today’s data is lost, gone, irretrievably, pffft” an acceptable answer? Very few. Very, very few, and getting fewer all the time. One of the strongest pushes behind moving services into the “cloud” (i.e., external hosting providers of various kinds) is that they provide near-continuous data protection and fault-tolerance. Users don’t care whether their data is protected by hardware-level solutions like SANs or software-level solutions like replication, as long as it is protected.

Users who profess not to care about this are either not putting authoritative data into a database, or just haven’t had the inevitable data disaster happen to them yet.

For me, the biggest feature of PostgreSQL’s 9.0 replication is that it is much, much easier to set up than any previous solution. Slony is a heroic project, and has lots of happy customers using it extensively, but it is notoriously fiddly and complex to set up.

Like a lot of technologies, replication hasn’t been in demand at a lot of PostgreSQL installations because the cost didn’t seem worth the payoff. 9.0 brings the implementation cost way, way down, and thus we’ll start seeing a lot more interest in putting replication in.
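To illustrate how little ceremony is left once it’s running, checking where a 9.0 standby stands is a couple of built-in function calls (a sketch; what you monitor will vary):

    -- On the primary: the current end of the WAL stream.
    SELECT pg_current_xlog_location();

    -- On the standby: what has been received, and what has been replayed.
    SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();

Compare the two and you have your replication lag, in WAL-position terms.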

Of course, do the daily backups, too.

18:51

Things I Do Not Understand: “Web-Scale.”

What does this mean?

It clearly means something along the lines of, “Can handle lots of transactions per unit time,” but how many?

I mean, WordPress with WP-SuperCache is “web scale” if all that is meant is, “Can be used to implement a high volume site,” but I assume those who are touting something as “web scale” are aiming higher than that.

Anyone care to offer a quantitative definition of this term?

25 October 2010

19:34

Django and PostgreSQL “Idle In Transaction” Connections

A well-known issue that can come up with Django sites running on PostgreSQL is that connections in “Idle in Transaction” state can pile up. There’s a relatively straightforward fix, but ultimately, it’s due to a bug in Django’s transaction management, at least when PostgreSQL is the back-end.

Let’s run through it.
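First, the symptom is easy to spot from the database side (in 9.0, pg_stat_activity exposes procpid and current_query):

    -- Connections holding a transaction open while doing nothing:
    SELECT procpid, datname, current_query, query_start
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction';

As for the fix: the workaround most commonly suggested for this class of problem (offered here as a sketch, not necessarily the fix this entry goes on to describe) is to turn on database-level autocommit for the postgresql_psycopg2 backend, so psycopg2 doesn’t open a transaction on the first query and then leave it dangling:

    # settings.py (Django 1.2-era; 'mydb' and credentials are hypothetical)
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'mydb',
            'OPTIONS': {'autocommit': True},  # backend-specific option
        }
    }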