1 August 2012
10:22
Amazon has introduced a couple of new I/O-related offerings in AWS, both aimed at addressing the notoriously poor I/O performance of EBS.
The first is the EC2 High I/O Quadruple Extra Large Instance. This is a standard Quad XL instance with two 1TB SSD-backed volumes directly attached to the instance. Although Amazon does not quote I/O performance on this configuration, it should be quite speedy… under good conditions.
Before you race to deploy your database on this configuration, however, remember:
- You are sharing the physical hardware with other users. You don’t get the SSDs all to yourself. Your performance will depend heavily on the other tenants on the hardware.
- This is ephemeral storage. It does not persist if the instance is shut down, and it can disappear if Amazon reprovisions the hardware. If you are running PostgreSQL on it, you must set up (monitored) streaming replication, as you have no strong guarantee as to the integrity of the storage; a minimal setup is sketched after this list.
- Of course, you pay for it. A High I/O instance is about 72% more than a standard Quad XL instance, based on on-demand pricing.
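For the record, a minimal 9.1-era streaming replication setup looks something like this; the hostname and user are invented for the example, and a real deployment needs WAL archiving and monitoring on top of it:

# on the primary, in postgresql.conf:
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 128

# on the standby, set hot_standby = on in postgresql.conf,
# plus a recovery.conf along these lines:
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'

(The standby also needs a replication entry in the primary’s pg_hba.conf.)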
The next product offering is Provisioned IOPS on EBS. This allows you to guarantee a certain number of I/O operations per second, up to 1,000 IOP/s. This should go a long way towards reducing the uncertainty around EBS, but it also comes with some caveats:
- 1,000 IOP/s is based on 16KB blocks, and the effective rate decreases as the block size increases. This means that 1,000 IOP/s is about 16MB/s. That’s about 1/5th the speed of a 7200 RPM SATA drive. This is not, shall we say, super-impressive I/O performance. (You can increase this by striping the EBS volumes, at the cost of losing snapshotting; see the sketch after this list.)
- This costs more, of course. An “EBS-optimized” Quad XL instance is an extra $0.05 per hour, and you pay for the provisioned I/O as well.
- The storage itself is also 25% more than a standard EBS volume.
- There are no latency guarantees. (For a 1,000 IOP/s volume, the IOP/s guarantee only applies if your I/O queue is at least 5 requests deep; that is to say, saturated.)
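For what it’s worth, the usual way to stripe EBS volumes is Linux software RAID; a sketch, with device names that are pure assumption (they vary by kernel and AMI):

# stripe four provisioned-IOPS volumes into a single RAID 0 device
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
mkfs -t xfs /dev/md0
mount /dev/md0 /var/lib/postgresql

Once you do this, EBS snapshots of the individual volumes are no longer consistent with each other, which is the snapshotting you give up.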
So, these products are far from useless, but they are incremental, not revolutionary.
30 June 2012
04:12
For years, the standard log analysis tool for PostgreSQL has been pgfouine. (For those wondering, a “fouine” in French is a beech marten; as the saying goes, I am none the wiser, if somewhat better informed.) However, pgfouine seems to have stalled as a project: there haven’t been updates in a while, it requires a patch to work with PostgreSQL 9.1, and it frequently chokes on complex or difficult-to-parse log files. And, well, it’s written in PHP.
Thus, I’m pleased to note a new-ish log analyzer, pgbadger. It’s written in Perl, is at least as fast as pgfouine, and can process log files that pgfouine can’t handle. It can read either CSV or standard log format, and can read *.gz files directly. It also produces a wider range of reports than pgfouine, including some very useful locking reports. I threw 25GB of logs, with nearly 80 million lines, at it without complaint; it processed between 225 and 335 log lines per second on my laptop.
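Invocation is pleasantly boring; something like this, with the log path being whatever your system uses:

# reads plain, gzipped, or CSV-format logs; writes an HTML report
pgbadger -o report.html /var/log/postgresql/postgresql-9.1-main.log.gz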
I am not sure why PostgreSQL log analyzers have adopted a small-mammal naming convention, but I’m pleased to have something else burrowing in the garden.
3 June 2012
23:19
When you are in a business that is engaged in constant warfare with the people your product is nominally aimed at, you are in a bad business.
18 May 2012
07:08
My presentation from PGCon 2012, PostgreSQL on AWS with Reduced Tears, is now up.
25 April 2012
19:30
Pickup trucks are great.
No, really. They are great vehicles. You can use them for all sorts of really useful things: Bringing your tools out to a construction gig. Delivering refrigerators. Helping your friend move a sofa. Carting away a reasonable amount of construction debris.
But if you need to deliver 75,000 pounds of steel beams to a construction site, in a single run? A pickup truck will not do it. Not even a big pickup. Not even if you add a new engine. Not even if you are willing to get three pickups. You need equipment designed for that. (And, as a note, the equipment that could handle delivering the steel beams would be a terrible choice for helping a friend move their sofa.)
“But,” I hear you say, “I already know how to drive a pickup! And we have a parking space for it. Can’t we just use the pickup? You’re a truck expert; tell us how to get our pickup to pull that load!”
And I say, “Being a truck expert, I will tell you again: a pickup is the wrong kind of truck. There are other trucks that will handle that load with no trouble, but a pickup isn’t one of them. The fact that you have a pickup doesn’t make it the right truck.”
We have many clients that run PostgreSQL, happily, on Amazon Web Services.
Some clients, however, are not happy. They are attempting to haul tractor-trailer loads (such as high-volume data warehouses) using pickup trucks (Amazon EC2 instances). They want us to fix their problem, but are not willing to move off of Amazon to get it fixed.
I like AWS for a lot of things; it has many virtues, which I will discuss in detail soon. However, AWS is not the right solution for every problem. In particular, if your database requires a high read or write data rate to perform acceptably, you will ultimately not be happy on AWS. AWS has a single persistent block-device storage mechanism, Elastic Block Store, which simply does not scale up to very high data rates.
That doesn’t mean that AWS is useless; it just means it isn’t the right tool for every job. The problem arises when AWS is treated as the fixed point, the way the pickup was the fixed point above. At some point, you have to decide:
- That being on AWS is so important (for whatever reason) that you are willing to sacrifice the performance you want; or,
- The performance you want is so important that you will need to move off of AWS.
Sadly, even the best consultants do not have a magic engine in the back room that will make EBS perform as well as high-speed direct-attached storage.
More soon.
09:01
Well, I’m not going; are you? This year’s Apple Worldwide Developers Conference sold out by 8am Pacific Time, having gone on sale around 6am. (I missed the boat in 2011 and 2010, too.) I can’t imagine anyone, except perhaps Apple, thinks that the mad scramble to the keyboard we’ve experienced the last few years is a rational way to allocate tickets.
It’s time for Apple to admit that the traditional model of a single WWDC either requires a venue that can handle the crowd, or needs to be split into multiple regional events. Splitting it would lose the “gathering of the tribe” aspect that has always been one of the best parts of WWDC, but that’s lost now anyway; the “tribe” is not defined by who happened to be at their keyboard for 90 minutes at 6am on a Wednesday.
If Apple views the WWDC as a way to pack people into seats to create excitement for their announcements early in the week, then I suppose the current system is as good as anything. From any other perspective, it’s time to find another way of doing this.
20 April 2012
09:37
First, read this essay about the disaster that is PHP. Every word is correct.
Then, view this photo set.
18 April 2012
23:55
I’ll be speaking at the following conferences through July:
- PGCon, Ottawa, Ontario, Canada, May 17-18.
- DjangoCon Europe, Zurich, Switzerland, June 4-6.
- SouthEast LinuxFest, Charlotte, North Carolina, USA, June 8-10.
- EuroPython, Florence, Italy, July 2-8.
- OSCON, Portland, Oregon, USA, July 16-20.
13 April 2012
08:00
… or, inexcusable things I am tired of seeing in postgresql.conf files.
Do not mix ‘n’ match override styles.
There are two valid styles for overriding the default values in postgresql.conf: Putting your changes as a cluster at the end, or uncommenting the defaults and overriding in place. Both have advantages and disadvantages. Having some settings one way and some another is pure disadvantage. Do not do this.
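For the record, the end-of-file style looks like this, with every override gathered in one clearly marked cluster (the values are placeholders, not recommendations):

#------------------------------------------------------------------------------
# LOCAL OVERRIDES -- everything above this block is at its default
#------------------------------------------------------------------------------
shared_buffers = 2GB
checkpoint_segments = 32
log_min_duration_statement = 250ms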
Use units.
Quick, what is log_min_duration_statement set to here?
log_min_duration_statement = 2000
Now, what is it set to here?
log_min_duration_statement = 2s
Always use units with numeric values if a unit is available.
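The same applies to memory and time settings generally; a few arbitrary examples:

shared_buffers = 512MB        # not 65536 (8kB pages)
work_mem = 64MB               # not 65536 (kB)
checkpoint_timeout = 10min    # not 600 (seconds)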
Do not remove the default settings.
If you strip out all of the defaults, it becomes impossible to tell what a particular value is set to. Leave the defaults in place, and if you comment out a setting, reset the value to the default (or at least include comments that make it clear what is going on).
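A hypothetical example of that comment style:

# raised to 256MB on 2012-03-01, reverted to the default 2012-04-10:
#work_mem = 4MB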
Do not leave junk postgresql.conf files scattered around.
If you need to move postgresql.conf (and the other configuration files) to a different location from where the package for your system puts it, don’t leave the old, dead postgresql.conf lying around. Delete any trace of the old installation hierarchy.
Thank you.