Re: Odd
John Maloney
johnm at wdi.disney.com
Tue Apr 21 20:18:00 UTC 1998
>Well, surprise: #bitShift: is already implemented as an arithmetic
>bytecode primitive! It appears that the original supposition was wrong
>-- #bitShift: is not slower because it has to first go through the
>standard method lookup; rather, it looks like it is a bit slower than #*
>because the VM implementation of bytecodePrimBitShift just isn't as
>highly optimized as bytecodePrimMultiply. #<< *is* much slower because
>it has to go through the standard method lookup. FYI, here are some
>timings from my machine:
>
>t1 _ Time millisecondsToRun:
> [1 to: 1000000 do: [:i | 1 * 256]].
>t2 _ Time millisecondsToRun:
> [1 to: 1000000 do: [:i | 1 bitShift: 8]].
>t3 _ Time millisecondsToRun:
> [1 to: 1000000 do: [:i | 1 << 8]].
>Array with: t1 with: t2 with: t3.
>
>-> (2633 3234 9083 )
This is all correct. The four arithmetic operations are very
carefully optimized, since they are so heavily used.
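[To make the distinction concrete, here is a small sketch -- not from the original message -- of the three expressions timed above. All three produce 256, but each is dispatched differently by the interpreter:]

```smalltalk
"Three ways to compute 256, each taking a different path through the VM:"
1 * 256.        "bytecodePrimMultiply: a heavily optimized arithmetic bytecode"
1 bitShift: 8.  "bytecodePrimBitShift: also a bytecode primitive, but less tuned"
1 << 8.         "an ordinary message send, resolved by standard method lookup"
```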
Note that Squeak has never claimed to be a high-performance
Smalltalk implementation. We consciously traded performance for
simplicity and portability. Commercial implementations of Smalltalk
typically do a much better job of streamlining all the arithmetic
operations on small integers, but are much harder to re-target
to a new processor architecture. The Jitter is a step towards
higher performance; you need to write a tiny bit of assembly
code to port the Jitter to a new architecture, but you get a
factor of two or so in performance. There may be an interesting
intermediate point a bit farther along this spectrum, one that would
require a bit more porting effort than the Jitter, but which yields
dramatically improved performance. I think Ian Piumarta may
be thinking about such things already...
-- John
More information about the Squeak-dev mailing list