[squeak-dev] Re: [ANN] Number comparison, hash, NaN, Point, and other partially ordered sets

Hans-Martin Mosner hmm at heeg.de
Fri Jan 9 08:29:41 UTC 2009


Tim Olson wrote:
>
> I'd have to play around a bit to get this into a Squeak unit test, but
> here's a test vector which shows the effect:
Thanks for that; I've made it into a unit test (see attachment) but at
the moment I'm unsure about the course of action that we should take.
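To run it, file in the attachment and evaluate something like

	(FloatTest selector: #testDenormalResultWithExtendedPrecision) run

in a workspace.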

First of all, does it matter? If I understand correctly, this behavior
is only present for denormalized numbers. Do these appear in real-world
cases? My guess is that only computations whose intermediate results
reach the minimum exponent can produce denormalized numbers in
real-world programs. For example, with repeated applications of 3D
transformation matrices which would combine to an identity
transformation if their numbers had infinite precision, you could get
into such a situation: imagine repeated rotations by 10 degrees - after
36 steps you would theoretically have the identity, but your actual
transformation will be slightly off. This would be a typical Croquet
case, and therefore I think that the probability of miscalculation is
real.
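As an illustration of how denormals can appear at all (the numbers here
are arbitrary, not taken from Tim's example), a workspace expression like

	| tiny |
	tiny := 1.0e-200 * 1.0e-110.
	"about 1.0e-310, below the smallest normal double (about 2.2e-308),
	so the product is stored as a denormalized number"
	tiny > 0.0		"true: underflow to a denormal, not to zero"

multiplies two ordinary doubles and already lands in the denormalized range.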

I've tried to analyze the case in question and came to the following
results:
The exact mantissa after multiplication is 3B16EF930A76E.80002C69F96C2
(the hex digits after the point are those that should be rounded off
when going to a 52-bit mantissa). The result with "A76F" as the last hex
digits would therefore be the correct value for an IEEE-754 double
precision multiplication (rounding to the nearest representable number),
so the PPC implementation does it right.
When doing an extended double precision multiplication, there are some
more bits in the mantissa, and the mantissa of the intermediate result
looks like ...A76E.800, which is exactly halfway between two
representable numbers. Converting to double precision involves rounding
the mantissa again, and the rounding rule for this case (an exact tie)
says to round to the nearest even number, which is ...A76E.
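The tie can be checked with exact Fraction arithmetic in a workspace
(just a sketch of the arithmetic above, not of the FPU behaviour itself):

	| tail |
	"The bits of the exact product below the 52-bit mantissa, as a fraction of an ulp."
	tail := 16r80002C69F96C2 / 16r10000000000000.
	tail > (1/2).			"true: correct rounding goes up, to ...A76F"
	"After the extended-precision step only the leading .800 of the tail remains,
	which is exactly one half, so the second rounding goes to the even neighbour ...A76E."
	16r800 / 16r1000 = (1/2)	"true"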

Is it at all possible to get the x86 FPU to produce the correct result
for this situation?
Interestingly, this article
(http://www.vinc17.org/research/extended.en.html) claims that the FPU is
set up for incorrect rounding under Linux only, but I could not
reproduce the test case given there with Squeak (which probably means
that I mistranslated the Java example).
I have not yet tried the unit test under Windows to check whether the
rounding mode is really set differently there.
If the test runs successfully under Windows, there might be a chance of
setting the rounding mode of the FPU for correct double precision
rounding under Linux, too.

Cheers,
Hans-Martin
-------------- next part --------------
'From Squeak3.10.2 of ''5 June 2008'' [latest update: #7179] on 9 January 2009 at 8:28:06 am'!

!FloatTest methodsFor: 'IEEE 754' stamp: 'hmm 1/9/2009 08:18'!
testDenormalResultWithExtendedPrecision
	"The x86 FP hardware uses 80-bit internal representation.
	Example supplied by Tim Olson on the squeak-dev mailing list:
		3ff3208a25e04e87 * 000316dd1d02d1ae

		x86 result:        0003b16ef930a76e
		powerPC result:    0003b16ef930a76f

		These are IEEE double-precision floating-point numbers, specified in hex (big-endian)
		so that they are bit exact (no conversion error from a decimal representation).
		The first number is between 1.0 and 2.0, while the second number is a denormal value.
		The product shows an lsb difference between an x86 platform (extended-precision FPU)
		and a PPC platform (double-precision FPU), even when the x86 floating-point
		control word is set to use double-precision rounding for all results."

	| f1 f2 product |
	"0.0 + 0.0 yields fresh Float instances, so the basicAt:put: sends below
	do not modify a shared literal."
	f1 := 0.0 + 0.0.
	f2 := 0.0 + 0.0.
	f1 basicAt: 1 put: 16r3FF3208A; basicAt: 2 put: 16r25E04E87.
	f2 basicAt: 1 put: 16r000316DD; basicAt: 2 put: 16r1D02D1AE.
	product := f1 * f2.
	self assert: (product basicAt: 1) = 16r0003B16E.
	self assert: (product basicAt: 2) = 16rF930A76F! !

