postgresql when it's not your job

The Multi-Column Index of the Mysteries

30 December 2016

The one thing everyone knows about composite indexes is: if you have an index on (A, B, C), it can't be used for queries on (B), (B, C), or (C), only on (A), (A, B), or (A, B, C). Right? I've said that multiple times in talks. It's clearly true, right?

Well, no, it’s not. It’s one of those things that is not technically true, but it is still good advice.

The documentation on multi-column indexes is pretty clear:

A multicolumn B-tree index can be used with query conditions that involve any subset of the index’s columns, but the index is most efficient when there are constraints on the leading (leftmost) columns. The exact rule is that equality constraints on leading columns, plus any inequality constraints on the first column that does not have an equality constraint, will be used to limit the portion of the index that is scanned.
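To make that rule concrete, here's a sketch modeled on the documentation's own example, using a hypothetical table with an index on (a, b, c):

-- With an index on (a, b, c), the scan is bounded by a = 5 and b >= 42;
-- the c < 77 condition is checked against each index entry within that
-- range, but it does not limit how much of the index is scanned.
SELECT * FROM tbl WHERE a = 5 AND b >= 42 AND c < 77;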

Let’s try this out!

First, create a table and index:

xof=# CREATE TABLE x ( 
xof(#     i integer,
xof(#     f float,
xof(#     g float
xof(# );
CREATE TABLE
xof=# CREATE INDEX ON x(i, f, g);
CREATE INDEX

And fill it with some test data:

xof=# INSERT INTO x SELECT 1, random(), random() FROM generate_series(1, 10000000);
INSERT 0 10000000
xof=# INSERT INTO x SELECT 2, random(), random() FROM generate_series(1, 10000000);
INSERT 0 10000000
xof=# INSERT INTO x SELECT 3, random(), random() FROM generate_series(1, 10000000);
INSERT 0 10000000
xof=# ANALYZE x;
ANALYZE

And away we go!

xof=# EXPLAIN ANALYZE SELECT SUM(g) FROM x WHERE f BETWEEN 0.11 AND 0.12;
                                                                   QUERY PLAN                                                                   
------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=599859.50..599859.51 rows=1 width=8) (actual time=91876.057..91876.057 rows=1 loops=1)
   ->  Index Only Scan using x_i_f_g_idx on x  (cost=0.56..599097.71 rows=304716 width=8) (actual time=1820.699..91652.409 rows=300183 loops=1)
         Index Cond: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
         Heap Fetches: 300183
 Planning time: 3.384 ms
 Execution time: 91876.165 ms
(6 rows)

And sure enough, it uses the index, even though we didn't include the leading column i in the query at all. In this case, the planner thinks this will be more efficient than a sequential scan over the whole table, even though it has to walk the entire index.
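A quick way to see how much work that scan is really doing is EXPLAIN with the BUFFERS option, which reports how many blocks the query touched (a sketch; output omitted here):

xof=# EXPLAIN (ANALYZE, BUFFERS) SELECT SUM(g) FROM x WHERE f BETWEEN 0.11 AND 0.12;

If the buffer counts come out close to the size of the entire index, the index isn't really narrowing the search.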

Is it right? Let's turn off index-only scans and find out.

xof=# SET enable_indexonlyscan = 'off';
SET
xof=# EXPLAIN ANALYZE SELECT SUM(g) FROM x WHERE f BETWEEN 0.11 AND 0.12;
                                                                QUERY PLAN                                                                 
-------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=599859.50..599859.51 rows=1 width=8) (actual time=39691.081..39691.081 rows=1 loops=1)
   ->  Index Scan using x_i_f_g_idx on x  (cost=0.56..599097.71 rows=304716 width=8) (actual time=1820.676..39624.144 rows=300183 loops=1)
         Index Cond: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
 Planning time: 0.181 ms
 Execution time: 39691.128 ms
(5 rows)

PostgreSQL, you’re not helping!

xof=# SET enable_indexscan = 'off';
SET
xof=# EXPLAIN ANALYZE SELECT SUM(g) FROM x WHERE f BETWEEN 0.11 AND 0.12;
                                                                   QUERY PLAN                                                                    
-------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=689299.60..689299.61 rows=1 width=8) (actual time=40593.427..40593.428 rows=1 loops=1)
   ->  Bitmap Heap Scan on x  (cost=513444.70..688537.81 rows=304716 width=8) (actual time=37901.773..40542.900 rows=300183 loops=1)
         Recheck Cond: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
         Rows Removed by Index Recheck: 8269763
         Heap Blocks: exact=98341 lossy=53355
         ->  Bitmap Index Scan on x_i_f_g_idx  (cost=0.00..513368.52 rows=304716 width=0) (actual time=37860.366..37860.366 rows=300183 loops=1)
               Index Cond: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
 Planning time: 0.160 ms
 Execution time: 40593.764 ms
(9 rows)

Ugh, fine!

xof=# SET enable_bitmapscan = 'off';
SET
xof=# EXPLAIN ANALYZE SELECT SUM(g) FROM x WHERE f BETWEEN 0.11 AND 0.12;
                                                     QUERY PLAN                                                     
--------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=641836.33..641836.34 rows=1 width=8) (actual time=27270.666..27270.666 rows=1 loops=1)
   ->  Seq Scan on x  (cost=0.00..641074.54 rows=304716 width=8) (actual time=0.081..27195.552 rows=300183 loops=1)
         Filter: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
         Rows Removed by Filter: 29699817
 Planning time: 0.157 ms
 Execution time: 27270.726 ms
(6 rows)

It turns out the sequential scan is faster, which isn't much of a surprise, since the alternatives all had to walk the entire index first. Of course, what's really fast is using the index properly:

xof=# RESET enable_indexonlyscan;
RESET
xof=# RESET enable_indexscan;
RESET
xof=# RESET enable_bitmapscan;
RESET
xof=# EXPLAIN ANALYZE SELECT SUM(g) FROM x WHERE i IN (1, 2, 3) AND f BETWEEN 0.11 AND 0.12;
                                                                QUERY PLAN                                                                 
-------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=92459.82..92459.83 rows=1 width=8) (actual time=6283.162..6283.162 rows=1 loops=1)
   ->  Index Only Scan using x_i_f_g_idx on x  (cost=0.56..91698.03 rows=304716 width=8) (actual time=1.295..6198.409 rows=300183 loops=1)
         Index Cond: ((i = ANY ('{1,2,3}'::integer[])) AND (f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
         Heap Fetches: 300183
 Planning time: 1.264 ms
 Execution time: 6283.567 ms
(6 rows)
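The IN list trick works here because we happen to know that i only takes the values 1, 2, and 3. If the set of leading-column values is small but not known in advance, it can be fetched first (a sketch; this costs an extra pass over the data):

xof=# SELECT DISTINCT i FROM x;
 i
---
 1
 2
 3
(3 rows)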

And, of course, a dedicated index for that particular operation is the fastest of all:

xof=# CREATE INDEX ON x(f);
CREATE INDEX
xof=# EXPLAIN ANALYZE SELECT SUM(g) FROM x WHERE f BETWEEN 0.11 AND 0.12;
                                                              QUERY PLAN                                                               
---------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=188492.00..188492.01 rows=1 width=8) (actual time=5536.940..5536.940 rows=1 loops=1)
   ->  Bitmap Heap Scan on x  (cost=4404.99..187662.16 rows=331934 width=8) (actual time=209.854..5466.633 rows=300183 loops=1)
         Recheck Cond: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
         Rows Removed by Index Recheck: 8258716
         Heap Blocks: exact=98337 lossy=53359
         ->  Bitmap Index Scan on x_f_idx  (cost=0.00..4322.00 rows=331934 width=0) (actual time=163.402..163.402 rows=300183 loops=1)
               Index Cond: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
 Planning time: 5.586 ms
 Execution time: 5537.235 ms
(9 rows)

Although, interestingly enough, PostgreSQL doesn’t quite get it right here:

xof=# SET enable_bitmapscan='off';
SET
xof=# EXPLAIN ANALYZE SELECT SUM(g) FROM x WHERE f BETWEEN 0.11 AND 0.12;
                                                            QUERY PLAN                                                             
-----------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=203875.29..203875.30 rows=1 width=8) (actual time=2178.215..2178.216 rows=1 loops=1)
   ->  Index Scan using x_f_idx on x  (cost=0.56..203045.45 rows=331934 width=8) (actual time=0.161..2110.903 rows=300183 loops=1)
         Index Cond: ((f >= '0.11'::double precision) AND (f <= '0.12'::double precision))
 Planning time: 0.170 ms
 Execution time: 2178.279 ms
(5 rows)

So, we conclude:

1) PostgreSQL will happily use a multi-column index for a query that doesn't constrain the leading column(s), but it has to walk the whole index to do so, and that's frequently slower than just scanning the table.
2) The planner's cost estimates can get this case wrong, so it's worth checking the alternatives with EXPLAIN ANALYZE.
3) The standard advice still stands: make sure your queries constrain the leading columns of a multi-column index, or build a dedicated index for the query you actually run.

And there we are.

xof=# DROP TABLE x;
DROP TABLE

Alexey Bashtanov at 10:36, 11 January 2017:

Hello Christophe,

Thanks for the article, it was very interesting.

Would you mind me asking you a few questions?
1) Your index scan (the second EXPLAIN ANALYZE in your listings) runs twice as fast as the equivalent index-only scan (the first one), which is strange; I suspect a disk cache effect. Did you execute the statements repeatedly, or just log the first execution's timing? A simple rule of thumb I have heard is to execute three times and take the smallest figure. That should be enough to mitigate cache effects and to stay unaffected by other processes.
2) Also, your index-only scan degenerates into a plain index scan (note the heap fetches in the plan), probably because the visibility map of table x is not up to date. Unfortunately it is not updated by ANALYZE; you need to run VACUUM manually (see the sketch after this list). Autovacuum does not catch this, as it is triggered after a certain number of updates/deletes, but the visibility map needs to be updated after inserts as well. If your visibility map were up to date, the index-only scan might be much faster.
3) You blame Postgres for an incorrect choice between the seq scan, bitmap scan, and index scan. Have you tried tuning planner constants such as cpu_index_tuple_cost or random_page_cost to your hardware?
4) Do you know why it is so slow to use an index without a proper condition on the leading columns? EXPLAIN (analyze, buffers) shows as many blocks read as a full index scan used to sort the whole table …
5) Have you tried setting up an index on (f, g) and enabling proper index-only scans by running VACUUM? I get a 2x speedup compared to the index scan on (f) alone on a spinning disk, and 10x on an SSD.
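For point 2, a minimal sketch of the fix (it's the VACUUM that actually updates the visibility map; afterwards the index-only scan should report zero heap fetches):

xof=# VACUUM x;
VACUUM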

Best Regards,
Alexey Bashtanov

Xof at 11:13, 11 January 2017:

Hi, Alexey,

Those details were out of scope for this particular entry. There’s been some interest in proper performance metrics, and I’ll do those as a separate post.