# [Newbies] Re: is 0.1 a Float or a ScaledDecimal ?

johnps11 at bigpond.com johnps11 at bigpond.com
Thu Feb 21 01:32:51 UTC 2008

```>> I don't know what you are working on but if you use fractions for what
>> you are
>> doing it would be interesting to hear about how you use them and your
>> results.
>> Because fractions are kept as an integer numerator and an integer
>> denominator,
>> they probably take up more memory than floats but less than
>> ScaledDecimals.  And
>> because both the numerator and denominator can be large integers,
>> fractions can
>> be very precise.  But because they are implemented in software rather
>> than hardware, they can be slow.  However, many numerical functions seem to
>> have
>> fractions or divisions built into them.  If fractions are used and the
>> divisions
>> not performed until they absolutely have to be, then use of fractions
>> could be
>> faster than expected and may be faster than ScaledDecimals or floats.  I
>> have no
>> proof of this but if you do anything in this area, I would love to hear
>> it.
>
> I think all this is premature optimization for me :) as I'm only
> building an early prototype (I'm doing a start of a Dempster-Shafer
> Theory [1] implementation (actually the Transferable Belief Model)...
> and it won't reach a big size for a while. It allows one to have an
> imprecise, incomplete, or even uncertain value for a proposition (a
> sort of multi-valued attribute with confidence...). I use it to get
> expert opinion on values; it's a known technique for fusing data from
> different sensors, but in my case it doesn't demand much performance,
> as the amount of combination is small (compared to sensor data fusion) ;)

I don't think that this is a case of premature optimisation, so much as
not considering what kinds of objects you are really working with.  I used
to do a lot of numerical modeling in mathematical biology, and I found
that to get consistent results I had to create a Rational number class in
C++ (linked to FORTRAN via a C wrapper function, which led to horrible
debugging due to C++ name mangling).  It seems to me that the name
Fraction is a bit misleading, as 'fraction' implies division (which was
horrendously slow on the hardware I used to use), whilst 'rational
numbers' are indeed exact (and countable).  I tend to avoid floats unless
I need to use transcendental functions.

If you are looking at something that is divided into parts, then rational
numbers will always give better results, and as they can be implemented
purely in Integer arithmetic they are rarely slow unless you are dealing
with weird fractions where the numerator or denominator is greater than
the machine Integer; in Squeak, I guess, that's when either is greater
than 31 bits.

The big question is: is speed more important than correctness?  I would
always tend to go the correctness route, then look at optimisation,
provided the errors introduced by the floating-point processor don't
compromise correctness beyond acceptable measures.

In population ecology or epidemiological models, using rational numbers
avoided the annoying fractional instances (the 0.456 infected creature, or
the 0.23 surviving bilbies, for example) that could lead to totally
meaningless results.  The worst case I saw was a model of a bilby
population, where the model showed a sufficient population density for
survival in relation to predators and resources, but was wildly optimistic
because it had fractional bilbies living close enough together to breed.
Analysis showed that the model was out by two orders of magnitude; if it
had been used in a real-world situation, the cute little critters would
have disappeared from the proposed refuge within five years.

```
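The exactness argument above can be sketched briefly. The example below is in Python, using its standard `fractions.Fraction` class as a stand-in for Squeak's `Fraction` (an assumption for testability; the Squeak class behaves analogously): 0.1 has no exact binary floating-point representation, so repeated addition drifts, while the rational value, kept as an exact integer numerator and denominator, does not.

```python
from fractions import Fraction

# 0.1 has no exact binary floating-point representation,
# so summing it ten times does not give exactly 1.0:
float_sum = sum(0.1 for _ in range(10))
print(float_sum == 1.0)   # False

# A rational number is kept as an exact integer numerator and
# denominator, so the same sum is exact:
exact_sum = sum(Fraction(1, 10) for _ in range(10))
print(exact_sum == 1)     # True
```

This is the same trade-off discussed above: the rational version is pure integer arithmetic (and stays fast until the numerator or denominator outgrows the machine word), while the float version is fast in hardware but inexact.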
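The fractional-bilby problem can also be sketched. The toy model below is entirely hypothetical (Python with `fractions.Fraction`; the 3/4 per-generation survival rate and starting population are made up, not taken from the actual bilby study): when each step is floored to whole individuals, the population goes extinct, while a naive float model keeps a fractional "population" alive indefinitely.

```python
from fractions import Fraction
from math import floor

def steps_to_extinction(pop, rate, max_steps=100):
    """Apply the per-step survival rate, flooring to whole
    individuals each step -- there is no 0.23 of a bilby."""
    for step in range(1, max_steps + 1):
        pop = floor(pop * rate)
        if pop == 0:
            return step
    return None

# Whole-individual model: 10 animals at 3/4 survival die out quickly.
print(steps_to_extinction(10, Fraction(3, 4)))        # 6

# Naive float model: 10 * 0.75**n is never exactly zero, so the
# "population" optimistically survives every step we check.
print(all(10 * 0.75 ** n > 0 for n in range(100)))    # True
```

The float model is "out" in exactly the way described above: it silently carries fractional survivors forward, so it predicts persistence where the whole-individual model predicts extinction.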