postgresql when it's not your job


Estimates “stuck” at 200 rows?

4 January 2017

So, what’s weird about this plan, from a query on a partitioned table? (PostgreSQL 9.3, in this case.)

test=> explain select distinct id from orders where order_timestamp > '2016-05-01';
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=347341.56..347343.56 rows=200 width=10)
   Group Key: orders.id
   ->  Append  (cost=0.00..337096.10 rows=4098183 width=10)
         ->  Seq Scan on orders  (cost=0.00..0.00 rows=1 width=178)
               Filter: (order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone)
         ->  Index Scan using orders_20160425_order_timestamp_idx on orders_20160425  (cost=0.43..10612.30 rows=120838 width=10)
               Index Cond: (order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone)
         ->  Seq Scan on orders_20160502  (cost=0.00..80539.89 rows=979431 width=10)
               Filter: (order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone)
         ->  Seq Scan on orders_20160509  (cost=0.00..74780.41 rows=909873 width=10)
               Filter: (order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone)
         ->  Seq Scan on orders_20160516  (cost=0.00..68982.25 rows=845620 width=10)
               Filter: (order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone)
         ->  Seq Scan on orders_20160523  (cost=0.00..65777.68 rows=796054 width=10)
               Filter: (order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone)
         ->  Seq Scan on orders_20160530  (cost=0.00..36403.57 rows=446366 width=10)
               Filter: (order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone)
(17 rows)

That estimate on the HashAggregate certainly looks wonky, doesn’t it? Just 200 distinct rows, even with over four million rows feeding into it from the Append node?

What if we cut down the number of partitions being hit?

test=> explain select distinct id from orders where order_timestamp > '2016-05-01' and order_timestamp < '2016-05-15';
                                                                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=169701.02..169703.02 rows=200 width=10)
   Group Key: orders.id
   ->  Append  (cost=0.00..165026.92 rows=1869642 width=10)
         ->  Seq Scan on orders  (cost=0.00..0.00 rows=1 width=178)
               Filter: ((order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone) AND (order_timestamp < '2016-05-15 00:00:00'::timestamp without time zone))
         ->  Index Scan using orders_20160425_order_timestamp_idx on orders_20160425  (cost=0.43..10914.39 rows=120838 width=10)
               Index Cond: ((order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone) AND (order_timestamp < '2016-05-15 00:00:00'::timestamp without time zone))
         ->  Seq Scan on orders_20160502  (cost=0.00..82988.46 rows=979431 width=10)
               Filter: ((order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone) AND (order_timestamp < '2016-05-15 00:00:00'::timestamp without time zone))
         ->  Index Scan using orders_20160509_order_timestamp_idx on orders_20160509  (cost=0.42..71124.06 rows=769372 width=10)
               Index Cond: ((order_timestamp > '2016-05-01 00:00:00'::timestamp without time zone) AND (order_timestamp < '2016-05-15 00:00:00'::timestamp without time zone))
(11 rows)

Still 200 exactly. OK, that’s bizarre. Let’s select exactly one partition:

test=> explain select distinct id from orders where order_timestamp > '2016-05-14' and order_timestamp < '2016-05-15';
                                                                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=14669.26..14671.26 rows=200 width=10)
   Group Key: orders.id
   ->  Append  (cost=0.00..14283.05 rows=154481 width=10)
         ->  Seq Scan on orders  (cost=0.00..0.00 rows=1 width=178)
               Filter: ((order_timestamp > '2016-05-14 00:00:00'::timestamp without time zone) AND (order_timestamp < '2016-05-15 00:00:00'::timestamp without time zone))
         ->  Index Scan using orders_20160509_order_timestamp_idx on orders_20160509  (cost=0.42..14283.05 rows=154480 width=10)
               Index Cond: ((order_timestamp > '2016-05-14 00:00:00'::timestamp without time zone) AND (order_timestamp < '2016-05-15 00:00:00'::timestamp without time zone))
(7 rows)

Still 200 exactly. What happens if we select from the child directly?

test=> explain select distinct id from orders_20160509;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 HashAggregate  (cost=74780.41..75059.51 rows=27910 width=10)
   Group Key: id
   ->  Seq Scan on orders_20160509  (cost=0.00..72505.73 rows=909873 width=10)
(3 rows)

A much more reasonable estimate. So, what’s going on?

That 200 should be something of a flag: it’s a compiled-in constant that PostgreSQL uses when it has no ndistinct information for a column, usually because no statistics have ever been collected on the table.
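You can check directly whether the planner has statistics for the column by querying the pg_stats view (the table and column names here match the example above; substitute your own):

```sql
-- If the parent table has never been analyzed, this returns zero rows,
-- meaning the planner has no n_distinct estimate and falls back to 200.
SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename = 'orders'
  AND attname = 'id';
```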

In this case, the issue was that an ANALYZE had never been done on the parent table. This isn’t surprising: autovacuum would never touch that table, since (like most parent tables in a partition set) it has no rows and is never inserted into or updated. That lack of information gets passed up through the Append node, and the HashAggregate falls back to the default of 200.
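You can confirm that neither autovacuum nor anyone else has ever analyzed the parent by checking the statistics collector (a quick diagnostic; these columns are standard in 9.3 and later):

```sql
-- Both timestamps will be NULL if the parent table has never been
-- analyzed, manually or by autovacuum.
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
```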

Sure enough, when the parent table was ANALYZE’d, the estimates became much more reasonable.

So: It can pay to do an initial ANALYZE on a newly created partitioned table so that the planner gets statistics for the parent table, even if those statistics are “no rows here.”
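Concretely, one manual pass over the parent is all it takes (using the table name from the example above):

```sql
-- Collect statistics on the (empty) parent table. On an inheritance-based
-- partition set, ANALYZE on the parent also gathers inheritance-wide
-- statistics across the children, which is what the planner uses here.
ANALYZE orders;
```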
