One common source of query problems in PostgreSQL is an unexpectedly bad query plan when a LIMIT clause is included in a query. The typical symptom is that PostgreSQL picks an index-based plan that actually takes much, much longer than if a different index, or no index at all, had been used.

Here’s an example. First, we create a simple table and an index on it:

xof=# CREATE TABLE sample (
xof(#     i INTEGER,
xof(#     f FLOAT
xof(# );
CREATE TABLE
xof=# CREATE INDEX ON sample(f);
CREATE INDEX

And fill it with some data:

xof=# INSERT INTO sample SELECT 0, random() FROM generate_series(1, 10000000);
INSERT 0 10000000
xof=# ANALYZE;
ANALYZE

Then, for about 5% of the table, we set i to 1:

xof=# UPDATE sample SET i=1 WHERE f<0.05;
UPDATE 499607
xof=# ANALYZE;
ANALYZE

Now, let’s find all of the entries where i is 1, in descending order of f.

xof=# EXPLAIN ANALYZE SELECT * FROM sample WHERE i=1 ORDER BY f DESC;
                                                         QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=399309.76..401406.04 rows=838509 width=12) (actual time=1415.166..1511.202 rows=499607 loops=1)
   Sort Key: f
   Sort Method: quicksort  Memory: 35708kB
   ->  Seq Scan on sample  (cost=0.00..316811.10 rows=838509 width=12) (actual time=1101.836..1173.262 rows=499607 loops=1)
         Filter: (i = 1)
         Rows Removed by Filter: 9500393
 Total runtime: 1542.529 ms
(7 rows)

So, about 1.5 seconds to scan and sort the whole table sequentially. Getting just the first 10 entries from that, then, should be much faster, right?

xof=# EXPLAIN ANALYZE SELECT * FROM sample WHERE i=1 ORDER BY f DESC LIMIT 10;
                                                                       QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..277.33 rows=10 width=12) (actual time=12710.612..12710.685 rows=10 loops=1)
   ->  Index Scan Backward using sample_f_idx on sample  (cost=0.43..23218083.52 rows=838509 width=12) (actual time=12710.610..12710.682 rows=10 loops=1)
         Filter: (i = 1)
         Rows Removed by Filter: 9500393
 Total runtime: 12710.714 ms
(5 rows)

Oh. 12.7 seconds. What happened?

PostgreSQL doesn’t keep correlated statistics about columns; each column’s statistics are kept independently. So, PostgreSQL made an assumption about the distribution of the i values in the table: that they were scattered more or less evenly throughout. Under that assumption, walking the index backwards would only have to scan about 100 index entries before collecting 10 “hits”… and the index scan would be a big win.
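You can see exactly what the planner knows about each column by querying the pg_stats system view. The sketch below assumes the sample table from this example; note that every row describes one column in isolation — nothing here records that the i=1 rows all have small values of f:

```sql
-- Per-column statistics the planner uses for row estimates.
-- n_distinct and most_common_vals/freqs exist for each column
-- separately; there is no cross-column (i, f) information.
SELECT attname, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'sample';
```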

It was wrong, however, because all of the i=1 values were clustered right at the beginning of the index. If we reverse the order of the scan, we can see that this would have been a much more efficient plan:

xof=# EXPLAIN ANALYZE SELECT * FROM sample WHERE i=1 ORDER BY f LIMIT 10;
                                                        QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..277.33 rows=10 width=12) (actual time=0.029..0.046 rows=10 loops=1)
   ->  Index Scan using sample_f_idx on sample  (cost=0.43..23218083.52 rows=838509 width=12) (actual time=0.027..0.044 rows=10 loops=1)
         Filter: (i = 1)
 Total runtime: 0.071 ms
(4 rows)

So, what do we do? There’s no way of telling PostgreSQL directly to pick one plan over the other. We could just turn off index scans for the query, but that could well have bad side effects.
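For completeness, the blunt instrument looks like this. enable_indexscan is a real planner setting, but it discourages index scans for every plan node it is in effect for, not just the problematic one, so at most you'd want to confine it with SET LOCAL:

```sql
-- Heavy-handed: discourage index scans for this statement only,
-- by scoping the setting to the current transaction.
BEGIN;
SET LOCAL enable_indexscan = off;
SELECT * FROM sample WHERE i=1 ORDER BY f DESC LIMIT 10;
COMMIT;
```

Even scoped this tightly, it can backfire: if the query (or anything called by it) would genuinely benefit from a different index scan, that scan is penalized too.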

In this particular case, where a predicate (like the WHERE i=1) picks up a relatively small number of entries, we can use a Common Table Expression, or CTE. Here’s the example rewritten using a CTE:

xof=# EXPLAIN ANALYZE
xof-# WITH inner_query AS (
xof(#     SELECT * FROM sample WHERE i=1
xof(# )
xof-# SELECT * FROM inner_query ORDER BY f LIMIT 10;
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=351701.16..351701.18 rows=10 width=12) (actual time=1371.946..1371.949 rows=10 loops=1)
   CTE inner_query
     ->  Seq Scan on sample  (cost=0.00..316811.10 rows=838509 width=12) (actual time=1168.034..1244.785 rows=499607 loops=1)
           Filter: (i = 1)
           Rows Removed by Filter: 9500393
   ->  Sort  (cost=34890.06..36986.33 rows=838509 width=12) (actual time=1371.944..1371.944 rows=10 loops=1)
         Sort Key: inner_query.f
         Sort Method: top-N heapsort  Memory: 25kB
         ->  CTE Scan on inner_query  (cost=0.00..16770.18 rows=838509 width=12) (actual time=1168.040..1325.496 rows=499607 loops=1)
 Total runtime: 1381.472 ms
(10 rows)

A CTE is an “optimization fence”: The planner is prohibited from pushing the ORDER BY or LIMIT down into the CTE. In this case, that means that it is also prohibited from picking the index scan, and we’re back to the sequential scan.
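One caveat for readers on newer versions: since PostgreSQL 12, a CTE that is referenced only once and has no side effects can be inlined by the planner, so a plain WITH is no longer an unconditional fence. To get the fencing behavior described here on PostgreSQL 12 and later, spell it out with MATERIALIZED:

```sql
-- On PostgreSQL 12+, request the optimization fence explicitly.
WITH inner_query AS MATERIALIZED (
    SELECT * FROM sample WHERE i=1
)
SELECT * FROM inner_query ORDER BY f LIMIT 10;
```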

So, when you see a query come completely apart, and it has a LIMIT clause, check to see if PostgreSQL is guessing wrong about the distribution of data. If the total number of hits before the LIMIT is relatively small, you can often use a CTE to isolate that part, and only apply the LIMIT thereafter. (Of course, you might be better off just doing the LIMIT operation in your application!)
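Another fence worth knowing about, though it relies on planner behavior rather than anything documented as a guarantee, is a subquery with OFFSET 0. The planner will not flatten a subquery that contains LIMIT or OFFSET, so the ORDER BY and LIMIT can't be pushed down into it — treat this as a sketch of a common idiom, not a contract:

```sql
-- OFFSET 0 keeps the planner from pulling the subquery up,
-- so the predicate is evaluated before the ORDER BY/LIMIT.
SELECT *
FROM (SELECT * FROM sample WHERE i=1 OFFSET 0) AS inner_query
ORDER BY f LIMIT 10;
```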