Andres Valloud wrote:
IMHO... sometimes it may be convenient to have 2 = 2.0, but since they represent different intentions, I would not expect that to happen.
What about magnitude comparisons (<, >, <=, >=)? For example, when using (perfectly well-defined) binary search algorithms on mixed number representations, it may be more than merely convenient to have 2 = 2.0.
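To make that concrete, here is a minimal workspace sketch (the sorted data, target, and variable names are mine, assuming Squeak-style syntax): a binary search over a collection mixing Integers and Floats only works if both #< and #= behave sensibly across representations.

    | data target low high mid found |
    data := #(1 1.5 2 5 10.25 100).  "already sorted; Integers and Floats mixed"
    target := 2.0.
    low := 1. high := data size. found := nil.
    [found isNil and: [low <= high]] whileTrue: [
        mid := (low + high) // 2.
        (data at: mid) = target
            ifTrue: [found := mid]
            ifFalse: [(data at: mid) < target
                ifTrue: [low := mid + 1]
                ifFalse: [high := mid - 1]]].
    found  "3 here, but only because 2 = 2.0 and 1.5 < 2.0 answer sensibly"

If 2 = 2.0 answered false, the search would fail to find a value that is, for ordering purposes, right there in the collection.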
Cheers, - Andreas
Why should a perfectly specified integer be equal to what may be the result of carrying out calculations without infinite precision? What would it mean if theGreatDoublePrecisionResult = 2? Wouldn't it be more interesting (and precise) to ask laysWithin: epsilon from: 2?
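Such a tolerance test could be a Number extension along these lines (a sketch only; the selector is the one suggested above, the body is my assumption):

    Number >> laysWithin: epsilon from: aNumber
        "Answer whether the receiver approximates aNumber to within epsilon."
        ^ (self - aNumber) abs <= epsilon

Then theGreatDoublePrecisionResult laysWithin: 1.0e-9 from: 2 states the claim one actually means: close to 2 up to a known tolerance, rather than exactly 2.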
In other words, comparing constants makes it look much simpler than it really is. Reading something like anInteger = 2.0 would be, from my point of view, highly questionable because it asserts that an approximation has an *exact* value. Nonsense.
From a more pragmatic point of view, there is also this issue: 2 = 2.0 may hold, but something like
(1 bitShift: 1000) - 1
cannot be equal to any floating point number supported by common hardware. Thus, exploiting anInteger = aFloat is intention-obscuring by definition, since it may or may not work depending on the integer. Again, highly questionable.
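A quick workspace check makes the loss visible (assuming Squeak-style #asFloat conversion):

    | n |
    n := (1 bitShift: 1000) - 1.
    n asFloat = (1 bitShift: 1000) asFloat
        "true: a Float's 53-bit significand cannot carry 1000
         significant bits, so the - 1 is rounded away entirely"

The integer has 1000 significant bits, so the nearest double is exactly 2 raisedTo: 1000, and the subtraction simply vanishes in the conversion.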
In short: floating point numbers *may* be equal to integers, but the behavior of #= cannot be determined a priori. And since #= then does not define an equivalence relation, the behavior of #hash is inconsequential.
Adding both integers and floats to a set gets messed up. So we have two options... a) don't do it, because it is intention-obscuring... or b) make integers never be equal to floating point numbers.
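Concretely (a minimal sketch, assuming Squeak-style collections):

    | s |
    s := Set new.
    s add: 2; add: 2.0.
    s size
        "if 2 = 2.0 this must answer 1, which in turn requires
         2 hash = 2.0 hash; with inconsistent hashes the Set can
         end up holding two elements that claim to be equal"

Either #= and #hash agree across representations, or mixed sets quietly break.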
Same deal with fractions and the like.