
# How slow is DECIMAL, anyway?

## 15 January 2023

In PostgreSQL, `NUMERIC` is a variable-length type of fixed precision: you can have as many digits as you want (and are willing to pay the storage for). `DOUBLE PRECISION` is a floating-point type with variable precision.

Sometimes the question comes up: how much slower is `NUMERIC` than `DOUBLE PRECISION`, anyway?

Here’s a quick, highly unscientific benchmark:

Doing a simple test (100 million rows), a straight `SUM()` across a `NUMERIC` column was about 2.2x slower than `DOUBLE PRECISION`. That went up to about 4x slower with a simple calculation, `SUM(n*12)`, and to about 5x slower when the calculation involved the same type, `SUM(n*n)`. Of course, these numbers are just from my laptop, but I would expect the ratios to stay roughly constant on other machines.
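A benchmark along these lines could be reproduced with something like the sketch below. The table and column names are my own invention, and `random()` as a data source is an assumption; the original post doesn't show its exact script. Run it in `psql` with `\timing` on to see the query times.

```sql
-- Hypothetical reproduction of the benchmark (names are made up).
CREATE TABLE bench_double  (n DOUBLE PRECISION);
CREATE TABLE bench_numeric (n NUMERIC);

-- 100 million rows of random values in each table.
INSERT INTO bench_double  SELECT random()            FROM generate_series(1, 100000000);
INSERT INTO bench_numeric SELECT random()::numeric   FROM generate_series(1, 100000000);

-- The straight sum, then the two calculated variants:
SELECT SUM(n)     FROM bench_numeric;
SELECT SUM(n * 12) FROM bench_numeric;
SELECT SUM(n * n)  FROM bench_numeric;
-- ...and the same three queries against bench_double for comparison.
```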

Inserting the 100 million rows took 72.2 seconds for `DOUBLE PRECISION` and 146.2 seconds for `NUMERIC`. The resulting table size was 3.5GB for `DOUBLE PRECISION`, 4.2GB for `NUMERIC`.
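Table sizes like these can be checked with PostgreSQL's built-in size functions; the table names here (`bench_double`, `bench_numeric`) are hypothetical stand-ins for whatever tables you benchmarked with:

```sql
-- pg_total_relation_size reports on-disk size including TOAST and indexes;
-- pg_size_pretty formats the byte count for humans.
SELECT pg_size_pretty(pg_total_relation_size('bench_double'));
SELECT pg_size_pretty(pg_total_relation_size('bench_numeric'));
```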

So, yes, `NUMERIC` is slower. But it’s not *absurdly* slower. `NUMERIC` is *much* slower than `bigint` (exercise left to the reader), so using `NUMERIC` for things like primary keys is definitely not a good idea.
