The Build

18 May 2012


Running PostgreSQL on AWS

My presentation from PGCon 2012, PostgreSQL on AWS with Reduced Tears, is now up.

25 April 2012


Of Pickups and Tractor-Trailers

Pickup trucks are great.

No, really. They are great vehicles. You can use them for all sorts of really useful things: Bringing your tools out to a construction gig. Delivering refrigerators. Helping your friend move a sofa. Carting away a reasonable amount of construction debris.

But if you need to deliver 75,000 pounds of steel beams to a construction site, in a single run? A pickup truck will not do it. Not even a big pickup. Not even if you add a new engine. Not even if you are willing to get three pickups. You need equipment designed for that. (And, as a note, the equipment that could handle delivering the steel beams would be a terrible choice for helping a friend move their sofa.)

“But,” I hear you say, “I already know how to drive a pickup! And we have a parking space for it. Can’t we just use the pickup? You’re a truck expert; tell us how to get our pickup to pull that load!”

And I say, “Being a truck expert, I will tell you again: a pickup is the wrong kind of truck. There are other trucks that will handle that load with no trouble, but a pickup isn’t one of them. The fact that you have a pickup doesn’t make it the right truck.”

We have many clients that run PostgreSQL, happily, on Amazon Web Services.

Some clients, however, are not happy. They are attempting to haul tractor-trailer loads (such as high volume data warehouses) using pickup trucks (Amazon EC2 instances). They wish us to fix their problem, but are not willing to move off of Amazon in order to get the problem fixed.

I like AWS for a lot of things; it has many virtues, which I will discuss in detail soon. However, AWS is not the right solution for every problem. In particular, if you require a high read or write data rate in order to get the performance you need from your database, you will ultimately not be happy on AWS. AWS has a single block-device storage mechanism, Elastic Block Storage, which simply does not scale up to very high data rates.
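If you want a rough feel for the sequential write ceiling of a volume before committing to it, a quick-and-dirty dd run will do (the path is illustrative; point it at the volume you care about, and use a real benchmark such as bonnie++ or fio for serious measurement):

```shell
# Write 64 MiB with an fdatasync at the end, so the kernel's page
# cache doesn't flatter the result; dd reports throughput when done.
# TESTFILE would normally live on the EBS volume being measured.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

Run it a few times; EBS throughput on a shared instance can vary considerably from run to run, which is itself part of the problem.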

That doesn’t mean AWS is useless; it just means it isn’t the right tool for every job. The problem arises when AWS is considered the fixed point, like the pickup was the fixed point above. At some point, you have to decide:

  1. That being on AWS is so important (for whatever reason) that you are willing to sacrifice the performance you want; or,
  2. The performance you want is so important that you will need to move off of AWS.

Sadly, even the best of consultants do not have the magic engine in our back room that will cause EBS to perform as well as high-speed direct attached storage.

More soon.


Fixing the WWDC

Well, I’m not going; are you? This year’s Apple Worldwide Developers Conference was sold out by 8am Pacific Time, having gone on sale around 6am. (I missed the boat in 2011 and 2010, too.) I can’t imagine anyone except perhaps Apple thinks that the mad scramble to the keyboard that we’ve experienced in the last few years is a rational way to allocate tickets.

It’s time for Apple to admit that the traditional model of a single WWDC no longer works: either move it to a venue that can handle the crowd, or split it into multiple regional events. Splitting it would lose the “gathering of the tribe” aspect that has always been one of the best parts of WWDC, but that’s lost now anyway; the “tribe” is not defined by who happened to be at their keyboard for 90 minutes at 6am on a Wednesday.

If Apple views the WWDC as a way to pack people into seats to create excitement for their announcements early in the week, then I suppose the current system is as good as anything. From any other perspective, it’s time to find another way of doing this.

20 April 2012


Two very cool things.

First, read this essay about the disaster that is PHP. Every word is correct.

Then, view this photo set.

18 April 2012


Blah, Blah, Blah, First Half of 2012 Edition

I’ll be speaking at the following conferences through July:

13 April 2012


The Elements of postgresql.conf Style

… or, inexcusable things I am tired of seeing in postgresql.conf files.

Do not mix ‘n’ match override styles.

There are two valid styles for overriding the default values in postgresql.conf: Putting your changes as a cluster at the end, or uncommenting the defaults and overriding in place. Both have advantages and disadvantages. Having some settings one way and some another is pure disadvantage. Do not do this.
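For the cluster-at-the-end style, that means one clearly marked block of overrides, something like this (the settings and values are illustrative):

```
#------------------------------------------------------------------
# LOCAL OVERRIDES -- every change from the defaults goes here.
#------------------------------------------------------------------
shared_buffers = 2GB
work_mem = 64MB
log_min_duration_statement = 2s
```

Whichever style you pick, the point is that a reader should only have to look in one kind of place to find out what has been changed.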

Use units.

Quick, what is log_min_duration_statement set to here?

log_min_duration_statement = 2000

Now, what is it set to here?

log_min_duration_statement = 2s

Always use units with numeric values if a unit is available.
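For reference, postgresql.conf accepts time units (ms, s, min, h, d) and size units (kB, MB, GB) on settings that take them:

```
log_min_duration_statement = 2s     # rather than 2000 (milliseconds)
shared_buffers = 2GB                # rather than 262144 (8kB pages)
checkpoint_timeout = 10min          # rather than 600 (seconds)
```

The unitless forms are all legal, but each one requires the reader to remember that setting’s particular base unit.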

Do not remove the default settings.

If you strip out all of the defaults, it becomes impossible to tell what a particular value is set to. Leave the defaults in place, and if you comment out a setting, reset the value to the default (or at least include comments that make it clear what is going on).
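When reverting a setting, one way to keep the file legible is to restore the default value explicitly and leave a note (the comment text is illustrative):

```
#work_mem = 64MB        # raised for reporting queries; reverted 2012-04
work_mem = 1MB          # default
```

Anyone reading the file later can see both the current value and why it changed, without diffing against a stock postgresql.conf.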

Do not leave junk postgresql.conf files scattered around.

If you need to move postgresql.conf (and the other configuration files) to a different location from where the package for your system puts it, don’t leave the old, dead postgresql.conf lying around. Delete any trace of the old installation hierarchy.

Thank you.

12 April 2012


Instagram’s Technology Stack

Instagram has been in the news lately. In this really great post on Tumblr, Instagram talks about its technology stack.

I have some acquaintance with the Instagram people, and they are among the smartest technologists I’ve met. Really nice, too. (Of course, they mention this blog in the post, so I’m biased.)

19 March 2012


A Recipe for Django Transactions on PostgreSQL

As noted before, Django has a lot of facilities for handling transactions, and it’s not at all clear how to use them. In an attempt to cut through the confusion, here’s a recipe for handling transactions sensibly in Django applications on PostgreSQL.

The goals are:

The bits of the recipe are:

The quick reasons behind each step:

This recipe has a few other nice features:

xact() also supports the using parameter for multiple databases.

Of course, a few caveats:

To use, just drop the source (one class definition, one function) into a file somewhere in your Django project (such as the omnipresent utils application every Django project seems to have), and include it.


   from utils.transaction import xact

   @xact()
   def my_view_function1(request):
      # Everything here will be in a transaction.
      # It'll roll back if an exception escapes, commit otherwise.
      ...

   def my_view_function2(request):
      # This stuff won't be in a transaction, so don't modify the database here.
      with xact():
         # This stuff will be, and will commit on normal completion,
         # roll back on an exception.
         ...

   def my_view_function3(request):
      with xact():
         # Modify the database here (let's call it "part 1").
         with xact():
            # Let's call this "part 2."
            # This stuff will be in its own savepoint, and can commit or
            # roll back without losing the whole transaction.
            ...
         # If part 2 rolls back, part 1 is still available to be
         # committed or rolled back.  Of course, if an exception inside
         # the "part 2" block is not caught, both part 2 and part 1
         # will be rolled back.
The source is available on GitHub. It’s licensed under the PostgreSQL License.

24 January 2012


PostgreSQL Performance When It’s Not Your Job

My presentation from SCALE 10x, “PostgreSQL Performance When It’s Not Your Job” is now available for download.

30 September 2011


“Sharding & IDs at Instagram”

I’d like to recommend an interesting post, “Sharding & IDs at Instagram”, about sharding using Postgres.
