This is another reminder that floating-point numbers are just a hack that should never be used by default. We have enough CPU, memory and bandwidth to transmit and store an exact representation of numeric user input, and to convert it to a float only when necessary, as the programmer decides.
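For illustration, a sketch in Python (its stdlib decimal module; the point generalizes to any language that can carry exact decimals):

    # Keep the user's input exact; convert to float late, and on purpose.
    from decimal import Decimal

    user_input = "0.1"            # hypothetical numeric input, as text
    exact = Decimal(user_input)   # stores exactly the digits typed
    print(exact + exact + exact)  # 0.3 -- exact decimal arithmetic
    print(float(exact))           # 0.1 -- inexact, but now it's a choice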
Indeed. Just like 32-bit integers, fixed-size floats as the default representation of numbers with a decimal component are a bad holdover from the days of limited hardware.
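The classic demonstration (Python shown, but any language whose default number is a 64-bit float behaves identically):

    >>> 0.1 + 0.2
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False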
Let programmers use floats when they have the performance analysis to justify it. Before then, it's just another kind of premature optimisation - and high-level languages should avoid it.
Ideally, to maintain maximal correctness, values would stay in a symbolic representation until an inexact rendering was called for. Simply resorting to floats for irrationals is not entirely unreasonable, though it's still a premature optimization; it's just not as bad as resorting to them for rationals.
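As a sketch of what "symbolic until rendered" looks like, SymPy (a Python library, not a language default) keeps sqrt(2) exact until you ask for digits:

    import sympy

    x = sympy.sqrt(2)
    print(x * x)      # 2 -- exact, no floating-point round-off
    print(x.evalf())  # 1.41421356237310 -- inexact rendering, on demand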
Maintaining symbolic values could quickly blow up, for example when using iterative methods. Are you aware of any languages doing symbolic representation with standard types?
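To make the blow-up concrete, a quick sketch with Python's exact rationals (a limited form of symbolics, but the same growth mechanism): Newton's method for sqrt(2) roughly doubles the denominator's digit count on every step.

    from fractions import Fraction

    x = Fraction(1)
    for i in range(8):
        x = (x + 2 / x) / 2   # Newton iteration for sqrt(2), kept exact
        print(i, len(str(x.denominator)), "denominator digits")

Eight exact iterations already produce denominators nearly a hundred digits long.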
I don't see how irrationals as floats is premature optimization. If you already know symbolic representation is going to blow up quickly and cause downstream headaches, the measures you take are not premature. Though now I do wonder how those headaches stack up against the float headaches :-)
Don't get me wrong, I like the idea of symbolic representation. It's just that in everyday use it seems very impractical to me. Granted, limited forms of it, like a fractional type, don't have the complexity problem. But in many cases things devolve to floats quickly anyway. The people who care can use libraries and deal with the complexity.
> Maintaining symbolic values could quickly blow up.
It could, in certain circumstances.
> I don't see how irrationals as floats is premature optimization.
It is when the reason is “could” and not “does”.
> If you already know symbolic representation is going to blow up quickly and cause downstream headaches, the measures you take are not premature
Sure, if you know that's going to happen. When you do it because it might happen, or because it happens to be the language’s default representation of irrational (or even exact decimal, or in JS’s case exact integer) numbers, that's a different story.
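(Concretely, on the JS aside: its Number type is an IEEE-754 double, so integer exactness runs out at 2**53. The same effect, reproduced with Python floats, which are the same double type:)

    print(float(2**53) == float(2**53 + 1))  # True  -- two different integers, one float
    print(2**53 == 2**53 + 1)                # False -- Python's exact ints keep them apart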
So can you name a language that does symbolic representation? It sounds like one of those things that are great in theory but hellish to implement.
"could" is the operative word. How many programs do enough calculation to make the tradeoffs of floats worthwhile? Conversely, how many programs need correctness more, and get caught out by float gotcha?
This is the definition of premature optimisation - not knowing the impact, you're suggesting a performance change anyway. The point is, for programmers who don't care, the inefficiency is good enough.
It would be cool if more languages had support for fixed-point fractional numbers. Rolling your own requires re-implementing display and multiplication code, which is a decent reason for people to just use floats instead.
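A minimal sketch of what rolling your own involves (hypothetical base-10 fixed point with four fractional digits; the rescale in multiplication and the display code are exactly the easy-to-botch parts):

    # Values are plain ints, scaled by 10**4.
    SCALE = 10_000

    def fx_parse(s: str) -> int:
        # "1.5" -> 15000; extra fractional digits are truncated.
        whole, _, frac = s.partition(".")
        sign = -1 if whole.startswith("-") else 1
        return sign * (abs(int(whole or "0")) * SCALE + int((frac + "0000")[:4]))

    def fx_mul(a: int, b: int) -> int:
        # a*b is scaled by SCALE**2, so it must be rescaled once, with
        # rounding (assumes non-negative operands for brevity). Forgetting
        # this rescale is the classic fixed-point multiplication bug.
        return (a * b + SCALE // 2) // SCALE

    def fx_str(x: int) -> str:
        # Display needs its own code too: reinsert the decimal point.
        sign, x = ("-", -x) if x < 0 else ("", x)
        return f"{sign}{x // SCALE}.{x % SCALE:04d}"

    print(fx_str(fx_mul(fx_parse("1.5"), fx_parse("0.1"))))  # 0.1500

Addition and comparison come for free on the scaled ints; it's the multiply/rescale/display trio that has to be written by hand.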