I tweeted about some upcoming viz changes in Rill Developer, which led to some back-and-forth with Dominik Moritz of CMU / Apple about how to deal with precision in dynamic contexts, & he responded with a pretty nice heuristic. I decided to write it up with his permission.

**The problem**: when you’re trying to figure out what precision to use for a number formatter but
have little idea of what you’re formatting or what other numbers might be visible in the application,
how do you determine how much precision to show?
This is especially problematic for Rill Developer,
which, when used as a diagnostic tool, often gives you
no information about the context: just a column type and a column of numbers.

If the integer portion of your number is large, decimals don’t usually make a big difference. That’s the basis of Dominik’s format precision heuristic:

- *look at the number of significant digits,*
- *keep everything before the ’.’ (the integer portion),*
- *subtract the number of digits in the integer portion from five, and keep that as the precision.*

So you’d get formatted numbers like `12345`, `1234.5`, `123.45`, `12.345`. This can be easily implemented with `d3-format`:

```js
import { format } from "d3-format";

// Pick a format spec from the value's magnitude, then apply it:
// full integers for large values, five significant digits for
// mid-range values, four decimal places below 1.
const justEnoughPrecision = (f) =>
  format(f > 10 ** 4 ? "d" : f > 1 ? ".5g" : ".4f")(f);
```

The reasoning: *“I wanted integers to be shown completely
(because we already use the digits so might as well) and show floats such
that the format gives you enough resolution to distinguish numbers when they should be distinguishable.”*

This is a nice heuristic that feels pretty obvious in hindsight, but is actually quite elegant. It excels in contexts where you’re formatting in isolation.

The main tradeoff: if you have more data available to you, you’d probably do better taking into account the full range of numbers to generate a formatter. More on that in another post.
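To make the tradeoff concrete, here is one hypothetical way a range-aware formatter could look (my own sketch, not the approach from the follow-up post): derive a single precision from the largest magnitude in the column, so every value in it lines up with the same number of decimal places.

```javascript
// Hypothetical range-aware variant: pick one precision for a whole
// column based on its largest magnitude, so all values align.
const columnFormatter = (values) => {
  const maxAbs = Math.max(...values.map(Math.abs));
  // Digits in the integer portion of the largest value (at least 1).
  const intDigits = Math.max(1, Math.floor(Math.log10(Math.max(maxAbs, 1))) + 1);
  const decimals = Math.max(0, 5 - intDigits); // keep ~5 significant digits
  return (x) => x.toFixed(decimals);
};
```

Whether this beats the per-value heuristic depends on context: a shared precision aids column scanning, while the per-value version preserves more resolution for the small numbers in a wide-ranging column.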

Explained via Discord DMs.