23 June 2013
I often find that I’m in the middle of a loop or something and discover an error. I want to exit the loop in a way that causes the database work I’ve done within it to be rolled back, but I don’t want that exception to propagate further.
This usually looks something like the following, sketching the pattern (the process() and broken() helpers are hypothetical stand-ins for real work):

class AbortLoop(Exception):
    pass

try:
    with xact():
        for thing in things:
            process(thing)         # hypothetical per-item database work
            if broken(thing):      # hypothetical error check
                raise AbortLoop()  # roll everything back, exit the loop
except AbortLoop:
    pass
Having noticed this pattern a lot, I’ve added it as a feature to Xact. Xact defines a Rollback exception. It processes Rollback like any other exception, rolling back the transaction, but then swallows it and exits the function or with block normally. If you feel motivated, you can subclass Rollback, although the utility of that escapes me at the moment.
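With Rollback, the try/except scaffolding disappears. A minimal sketch (the import path is an assumption, and process()/broken() are the same hypothetical helpers as above):

from xact import xact, Rollback  # import path assumed

with xact():
    for thing in things:
        process(thing)        # hypothetical per-item database work
        if broken(thing):     # hypothetical error check
            raise Rollback()  # rolls back the transaction; the with
                              # block then exits normally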
When Django 1.6 is released, Xact will be deprecated in favor of new functionality there… but for now, have fun with it!
15 May 2013
The Call for Papers for DjangoCon US 2013 is now open.
7 April 2013
psycopg2, the Python PostgreSQL interface library, is now up to version 2.5. This includes built-in support for the JSON and range types… yay!
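A quick taste of both, as a sketch (assumes psycopg2 2.5 against a PostgreSQL 9.2+ server; the dbname=test connection string is hypothetical):

import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
cur = conn.cursor()

# JSON: wrap a Python object in Json to send it; json values come
# back from the server as Python objects.
cur.execute("SELECT %s::json", (Json({"a": 1, "b": [2, 3]}),))
print(cur.fetchone()[0])

# Ranges: built-in range types come back as Range objects.
cur.execute("SELECT '[10,20)'::int4range")
r = cur.fetchone()[0]
print(r.lower, r.upper, 15 in r)  # bounds and containment test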
13 March 2013
tl;dr: Don’t give tables the same name as base PostgreSQL types, even though it will let you.
It’s interesting how synchronicity can occur. In my talk about custom PostgreSQL types in Python, I mentioned that any time you create a table in PostgreSQL, you’re also creating a type: the row type of the table.
While I was presenting the talk, a client sent me email wondering why a pg_restore of an expression index was failing, because the return type text was not the same as pg_catalog.text. OK, that’s strange!
What had happened was that the database had a table named text, which PostgreSQL will happily let you create:
postgres=# CREATE TABLE text (
postgres(#     i INTEGER
postgres(# );
CREATE TABLE
And both types then appear in pg_type:
postgres=# SELECT typname, typnamespace FROM pg_type WHERE typname='text';
 typname | typnamespace
---------+--------------
 text    |           11
 text    |         2200
(2 rows)
Needless to say, this isn’t a great idea, because although PostgreSQL seems to keep them straight most of the time, there are times (like a pg_restore processing an expression index) when it can get confused.
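(For the record, typnamespace 11 is pg_catalog and 2200 is public in a stock install.) If you want to audit an existing database for this kind of shadowing, a query against the system catalogs will find it. Here’s a sketch using psycopg2; the connection string is hypothetical:

import psycopg2

conn = psycopg2.connect("dbname=postgres")  # hypothetical connection string
cur = conn.cursor()

# Find non-system types (including table row types) whose names
# shadow a built-in type in pg_catalog.
cur.execute("""
    SELECT n.nspname, t.typname
      FROM pg_type t
      JOIN pg_namespace n ON n.oid = t.typnamespace
     WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
       AND EXISTS (SELECT 1
                     FROM pg_type t2
                     JOIN pg_namespace n2 ON n2.oid = t2.typnamespace
                    WHERE n2.nspname = 'pg_catalog'
                      AND t2.typname = t.typname)
""")
for nspname, typname in cur:
    print(nspname, typname)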
So, don’t do this.
The slides for my presentation, “PostgreSQL, Python and Squid” (otherwise known as, “using Python in PostgreSQL and PostgreSQL from Python”) presented at PyPgDay 2013 at PyCon 2013, are available for download.
10 March 2013
tl;dr: Each and every tablespace is critical to the operation of your PostgreSQL database. If you lose one, you’ve lost the entire database.
This one can be short and sweet: If you use tablespaces in PostgreSQL, each and every one of them is a critical part of your database. If you lose one, your database is corrupted, probably irretrievably. Never, ever think that if you lose a tablespace, you’ve just lost the data in the tables and indexes on the tablespace; you’ve lost the whole database.
In a couple of cases, clients have had what they thought were clever uses of tablespaces. In each case, the scheme could have led to disaster:
“We’ll keep cached and recent data on a tablespace on AWS instance storage on SSDs, and the main database on EBS. If the instance storage goes away, we’ll just recreate that data from the main database.”
“We’ll keep old historical data on a SAN, and more recent data on local storage.”
In each case, there was the assumption that if the tablespace was lost, the rest of the database would be intact. This assumption is false.
Each and every tablespace is a part of your database. They are the limbs of your database. PostgreSQL is an elephant, not a lizard; it won’t regrow a limb it has to leave behind. Don’t treat any tablespace as disposable!
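If you’re not sure what your cluster depends on, it’s easy to audit. A sketch (assumes PostgreSQL 9.2 or later for pg_tablespace_location(); the connection string is hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=postgres")  # hypothetical connection string
cur = conn.cursor()

# Every tablespace the cluster knows about, and where it lives on
# disk; an empty location means inside the data directory itself.
cur.execute("SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace")
for name, location in cur:
    print(name, location or "(inside the data directory)")

Every one of those paths is load-bearing.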
4 October 2012
A client of ours recently had me log into their server to set up a tablespace scheme for them. While I was in, I noticed that the secondary of the streaming replication pair wasn’t connecting to the primary. A quick check showed that the primary had been moved from one internal IP address to another, and in doing so everything had been updated except the pg_hba.conf file… so the secondary wasn’t able to connect.
This had happened several weeks prior.
The good news is that in addition to streaming replication, we had set up WAL archiving from the primary to the secondary, so the secondary was staying up to date using the WAL segments. We didn’t have to reimage the secondary; just fixing the pg_hba.conf and reloading the primary fixed the problem. Thanks to pg_archivecleanup, neither side was building up WAL segments.
There are several good reasons for including WAL archiving in your streaming replication setup. This kind of accidental problem is one of them.
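For the 9.x series, the belt-and-suspenders setup looks something like this (a sketch; the hostnames, paths, and rsync transport are hypothetical, but the parameters themselves are the standard ones):

# postgresql.conf, on the primary:
wal_level = hot_standby
max_wal_senders = 3
archive_mode = on
archive_command = 'rsync -a %p standby:/archive/%f'  # hypothetical transport

# pg_hba.conf, on the primary (the line that was out of date above!):
host  replication  replication  192.0.2.10/32  md5

# recovery.conf, on the secondary:
standby_mode = 'on'
primary_conninfo = 'host=primary user=replication'  # hypothetical conninfo
restore_command = 'cp /archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /archive %r'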
1 August 2012
Amazon has introduced a couple of new I/O-related offerings in AWS, both aimed at addressing the notoriously poor I/O performance of EBS.
The first is the EC2 High I/O Quadruple Extra Large Instance. This is a standard Quad XL instance with two 1TB SSD-backed volumes directly attached to the instance. Although Amazon does not quote I/O performance on this configuration, it should be quite speedy… under good conditions.
Before you race to deploy your database on this configuration, however, remember:
- You are sharing the physical hardware with other users. You don’t get the SSDs all to yourself, and how good your performance is will depend heavily on the other tenants on the hardware.
- This is ephemeral storage. It does not persist if the instance is shut down, and it can disappear if Amazon reprovisions the hardware. You must set this up with (monitored) streaming replication if you are running PostgreSQL, as you have no strong guarantee as to the integrity of the storage.
- Of course, you pay for it. A High I/O instance costs about 72% more than a standard Quad XL instance, based on on-demand pricing.
The next product offering is Provisioned IOPS on EBS. This allows you to guarantee a certain number of I/O operations per second, up to 1,000 IOP/s. This should go a long way towards reducing the uncertainty around EBS, but it also comes with some caveats:
- 1,000 IOP/s is based on 16KB blocks, and the guaranteed rate decreases as the block size increases. This means that 1,000 IOP/s is about 16MB/s, or about 1/5th the speed of a 7200 RPM SATA drive. This is not, shall we say, super-impressive I/O performance. (You can increase this by striping EBS volumes, at the cost of losing snapshotting.)
- This costs more, of course. An “EBS-optimized” Quad XL instance is an extra $0.05 per hour. Of course, you pay for the I/O, too.
- The storage is also 25% more than a standard EBS volume.
- This offers no latency guarantees (for a 1,000 IOP/s provisioning, the IOP/s guarantee only applies if your I/O queue length is 5 requests or more, that is to say, saturated).
So, these products are far from useless, but they are incremental, not revolutionary.
30 June 2012
For years, the standard log analysis tool for PostgreSQL has been pgfouine. (For those wondering, a “fouine” in French is a beech marten; as the saying goes, I am none the wiser, if somewhat better informed.) However, pgfouine seems to have stalled as a project: there haven’t been updates in a while, it requires a patch to work with PostgreSQL 9.1, and it frequently chokes on complex or difficult-to-parse log files. And, well, it’s written in PHP.
Thus, I’m pleased to note a new-ish log analyzer, pgbadger. It’s written in Perl, is at least as fast as pgfouine, and can process log files that pgfouine can’t handle. It can read either CSV or standard log format, and can directly read *.gz files. It also produces a wider range of reports than pgfouine, including some very useful locking reports. I threw 25GB of logs with nearly 80 million lines at it without it complaining; it processed between 225 and 335 log lines per second on my laptop.
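Getting a report out of it is about a one-liner (the log path here is hypothetical; -o names the HTML report to write):

pgbadger /var/log/postgresql/postgresql-9.1-main.log.gz -o report.html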
I am not sure why PostgreSQL log analyzers have adopted a small-mammal naming convention, but I’m pleased to have something else burrowing in the garden.