Estimates "stuck" at 200 rows?
So, what’s weird about this plan, from a query on a partitioned table? (PostgreSQL 9.3, in this case.)
I love Django a lot; it still surprises me how productive I can be in it. And, especially in more recent versions, the error handling for weird configuration problems is much, much better than it used to be.
But sometimes, you get an error whose origin is slightly mysterious. Thus, it can be helpful to have a log of
On a PostgreSQL primary / secondary pair, it’s very important to monitor replication lag. Increasing replication lag is often the first sign of trouble, such as a network issue, the secondary disconnecting for some reason (or for no reason at all, which does happen rarely), disk space issues, etc.
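As a sketch of what such monitoring can look like on a 9.x-era primary (where the view is pg_stat_replication and the byte-difference function is pg_xlog_location_diff; on PostgreSQL 10 and later these were renamed to pg_stat_replication's *_lsn columns and pg_wal_lsn_diff), lag in bytes can be checked with something like:

```
-- Run on the primary: bytes of WAL the secondary has not yet replayed.
-- Sample this periodically and alert if the value keeps growing.
SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
FROM pg_stat_replication;
```

A steadily increasing replay_lag_bytes, rather than any single absolute value, is usually the signal worth paging on.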
The slides for my talk, Securing PostgreSQL, at PGConf EU 2016 are now available.
The slides for my presentation Unclogging the VACUUM at PGConf EU in Tallinn, Estonia are now available.
Everyone, unfortunately, has their own style of editing postgresql.conf. Some like to uncomment specific values and edit them where they appear in the default file, some like to tack overrides onto the end… and some do a mixture of both (don’t do that).
My personal preference is to leave everything in the default file commented,
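One way to sketch that layout, assuming a 9.3-or-later server (include_dir was added in 9.3) and a hypothetical conf.d directory, is to leave the shipped file untouched except for a single include at the bottom:

```
# postgresql.conf: all shipped defaults left commented out; the only
# edit is pulling in local overrides at the very end.
include_dir = 'conf.d'
```

```
# conf.d/00-local.conf (hypothetical file name): every deliberate
# change lives here, in one easy-to-diff place.
shared_buffers = '8GB'
log_min_duration_statement = 250
```

Because later settings win, anything in conf.d cleanly overrides the commented defaults, and a plain diff of that one file shows exactly what was changed from stock.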
I’ve noticed an increasing tendency in PostgreSQL users to over-index tables, often constructing very complex partial indexes to try to speed up very particular queries.
Be careful about doing this. Not only do additional indexes increase planning time, they can also greatly increase insert time.
By way of example, I created a table with a single bigint column, and
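A sketch of that kind of experiment (the table and index names here are hypothetical, not necessarily the ones from the original test) is to time the same bulk load with and without an index in place:

```
-- Hypothetical reconstruction: one bigint column, timed bulk inserts.
CREATE TABLE t (i bigint);

\timing on
INSERT INTO t SELECT generate_series(1, 1000000);  -- baseline, no indexes

CREATE INDEX t_i_idx ON t (i);
TRUNCATE t;
INSERT INTO t SELECT generate_series(1, 1000000);  -- same load, one index
```

Repeating the load as more indexes (including partial ones) are added makes the per-index insert overhead directly visible in psql's \timing output.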
I’ll be speaking about Django and PostgreSQL at PyCon US 2016.