[squeak-dev] fdlibm 1.0 exp is not the closest approximation of E

Andres Valloud avalloud at smalltalk.comcastbiz.net
Mon Dec 27 22:03:25 UTC 2010


A couple of extra things.  Floating point operations that result in IEEE
special values (INF, NaN) behave differently on different platforms.  For
example, on some platforms you may get INF, while on others you may get
NaN.  On some platforms you may even get a very large finite value, but
not INF.  Testing for conformance is a big pain because you have to
account for all the disparate behavior.  In addition, Intel loosely
defines a particular NaN bit pattern to mean "indeterminate"; the IEEE
standard has no such thing.

Finally, in the HPUX example below, the exponents should read -306, 
-306, and -307.
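
In C terms, that HPUX example essentially boils down to floor(log10(x)).
Since the nearest double to 10^-k is not exactly 10^-k, a log10() that is
off by even a fraction of an ulp can land just below -k, and floor() then
answers -(k+1).  A small sketch (a hypothetical probe, not the actual
HPUX report; what it prints depends entirely on the local libm):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        for (int k = 300; k <= 308; k++) {
            double x  = pow(10.0, (double)-k);
            double lg = log10(x);
            printf("k=%d  log10=%.17g  floor=%g  %s\n",
                   k, lg, floor(lg),
                   floor(lg) == (double)-k ? "ok" : "off by one");
        }
        return 0;
    }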

On 12/27/10 13:58, Andres Valloud wrote:
> Something I learned is that basically all platforms claim wonderful IEEE
> functionality, but in practice the standard is followed only to a
> certain extent.  In addition, there are wonderful examples of how your
> mileage may vary.  For example...
>
> * The only platform on which I saw sensible arcTan functionality was
> IRIX on RISC.
>
> * On HPUX, doing something like 10d "or a Squeak Float" raisedTo:
> -306, followed by floorLog10, results in 306.  But it fails with 307.
> It works on Windows, Mac, Linux, AIX, IRIX/RISC (when we supported it
> --- the platform is being phased out by Silicon Graphics), etc.  Why?
>
> * Some platforms provide .h files with handy definitions for IEEE
> related functions such as isinf(), isnan(), isfinite() and the like.
> But different versions of the operating system will provide different
> compilation environments.  Consequently, whether you can use such things
> in your code depends on the compilation environment.  And if you
> require a compilation environment that is too new, you can no longer
> build for versions of the OS that are themselves still supported.
>
> * x86 FPUs are known to have a somewhat imprecise approximation of pi,
> which in turn affects the trigonometric transcendentals.  This problem
> was fixed back in the early nineties by AMD, but it was rolled back
> because of compatibility concerns.  The 32 bit x86 FPU was never fixed.
> Supposedly, the 64 bit FPU uses the new pi approximation.  Of course,
> that may mean that you won't get the same floating point results in 32
> and 64 bits.
>
> * Also, on x86, you will get different results if you let the FPU use
> the extended double format, which enlarges the mantissa to 64 bits.  So,
> if you keep doing calculations inside the FPU, the effects of rounding
> will be different than if you load a value into the FPU, do the
> calculation, and get the intermediate value out, thus truncating the
> extended double format to the normal double format.  Of course, leaving
> the doubles in the FPU is faster, but then you get different results
> unless you twiddle the control bits of the FPU.  By the way, 64 bit
> Windows defaults to the extended format.
>
> * On the Mac, you get different trigonometric transcendental results
> between 10.4 and 10.6.
>
> And this is just a sample of the wonderful world of IEEE and floating
> point arithmetic.
>
>> On 12/27/10 11:09, Nicolas Cellier wrote:
>> It seems that
>>       1.0 exp = (Float classPool at: #E)
>> is now false with the fdlibm version, at least on Cog/MacOSX.
>>
>> (Float classPool at: #E) is the closest Float approximation of e,
>> which you can check with:
>> (1.0 asArbitraryPrecisionFloatNumBits: 200) exp asFloat = (Float
>> classPool at: #E)
>>
>> It's too bad that fdlibm can compute (1.0e32 cos) with less than
>> 1/2 ulp of error, but computes (1.0 exp) with more than 1/2 ulp!
>>
>> Nicolas
>>
>>
>
>
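
To make the extended double point quoted above concrete: whether an
intermediate result stays in an x87 register or gets spilled out to a 64
bit double changes what you get back, and forcing the spill is enough to
see it.  A minimal sketch (my own, assuming a 32 bit x87 build that keeps
intermediates in registers; an SSE2 build, or a build that stores every
intermediate, will print INF for both):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* volatile blocks constant folding, so the FPU does the work */
        volatile double big = DBL_MAX;
        volatile double two = 2.0, four = 4.0;

        /* One expression: on x87 the intermediate big * two can stay in
           an 80 bit register, whose wider exponent range keeps it finite,
           so the result comes out as roughly DBL_MAX / 2. */
        double kept = big * two / four;

        /* Forcing the intermediate out to a 64 bit double makes big * two
           overflow to INF, and INF / 4 is still INF. */
        volatile double tmp = big * two;
        double spilled = tmp / four;

        printf("kept    = %g\n", kept);
        printf("spilled = %g\n", spilled);
        return 0;
    }

This is also why results can change underneath you when a compiler
upgrade merely changes which intermediates happen to get spilled.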


