Thank you John,
Your interesting answer makes me think it's true that "there are no stupid questions" :)... I was bitten because I finally decided to write tests for my model and was surprised by the failure I got, so I had a closer look at what's happening in it. I'm basically modeling a weight on subsets of a set, so naturally going from 1 to 0 by 0.1... It's not really a problem in my case, but this small incursion into the Float world was interesting (see the snippet below). Today, I learned two things:
- I will avoid Floats as much as possible :)
- Unit tests are cool :)
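For example, the kind of surprise I mean (these should be the standard IEEE double results, which is what Floats are on most platforms, I believe):

    0.1 + 0.2 = 0.3    "false!"
    0.1 + 0.2          "0.30000000000000004"
    0.3 - 0.1          "0.19999999999999998"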
Part of the issue is that different architectures will have different values for machine epsilon (1e-09 in your post). Machine epsilon is defined to be the largest float epsilon such that
1.0 + epsilon = 1.0
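One can also probe it empirically in a workspace; a quick sketch, assuming IEEE doubles:

    | eps |
    eps := 1.0.
    [ 1.0 + (eps / 2.0) > 1.0 ] whileTrue: [ eps := eps / 2.0 ].
    eps    "the smallest power of two with 1.0 + eps > 1.0;
            2^-52, about 2.220446049250313e-16, for 64-bit doubles"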
In Float, there is a class variable named Epsilon (1.0e-12 in my image). The initialize method on the class side of Float is interesting, by the way... it sets Epsilon := 0.000000000001 ("Defines precision of mathematical functions") when the class is initialized.
Just for information, would it be possible to set this value at image startup, depending on the architecture?
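I imagine something like this, using the startup-list mechanism (just a sketch; computeMachineEpsilon is a hypothetical helper, not an existing method):

    Float class >> startUp: resuming
        "Hypothetical: recompute Epsilon for the current hardware
        whenever the image is started or resumed."
        resuming ifTrue: [ Epsilon := self computeMachineEpsilon ]

    "registered once with:"
    Smalltalk addToStartUpList: Float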
Your idea of redefining equality for floats is a very bad idea because (1) it assumes all hardware has the same value for machine epsilon, and (2) it would break 50 years' worth of programs that do numerical computation.
Oops, so I'll avoid that :)
I just thought that, since = is different from ==, it might be "possible" to redefine it so that 1 - 0.1 - 0.1 - 0.1 = 0.7.
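(That said, rather than redefining =, my image already seems to have a separate message for approximate comparison, Number>>closeTo:, which compares within a small tolerance:)

    (1 - 0.1 - 0.1 - 0.1) = 0.7           "false"
    (1 - 0.1 - 0.1 - 0.1) closeTo: 0.7    "true"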
Maybe the problem is not the "=" part, but how 0.1 is evaluated. When I type "0.1", I mean a scaled decimal (a fraction plus a scale), not a Float... I expect a precise value to yield a precise result. Just thinking out loud, but couldn't 0.1 (maybe decimals down to 0.0001) be evaluated as a precise value, the same way 1/3 evaluates to a Fraction?
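In fact, I believe Smalltalk already has a literal syntax for exactly that: ScaledDecimal literals, written with an s followed by the scale:

    0.1s1                          "a ScaledDecimal, held exactly"
    1 - 0.1s1 - 0.1s1 - 0.1s1     "0.7s1, exactly 0.7"
    (1/10) + (1/10) + (1/10)      "3/10, an exact Fraction"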
It's probably becoming pointless, and I imagine Floats are more efficient anyway, so... Thanks again :)
Cédrick