OrderedDictionary (was: Re: [squeak-dev] Re: The Trunk: Collections-mt.593.mcz)

Eliot Miranda eliot.miranda at gmail.com
Fri Jan 16 16:27:53 UTC 2015



On Jan 16, 2015, at 7:53 AM, Bert Freudenberg <bert at freudenbergs.de> wrote:

> 
>> On 16.01.2015, at 16:36, Eliot Miranda <eliot.miranda at gmail.com> wrote:
>> 
>> Hi Marcel,
>> 
>> On Jan 16, 2015, at 5:54 AM, Marcel Taeumel <marcel.taeumel at student.hpi.uni-potsdam.de> wrote:
>> 
>>> Ah, okay. I was curious about the performance of iterating over an array vs.
>>> the linked list:
>>> 
>>> a := Array new: 1000.
>>> l := LinkedList new.
>>> 
>>> 1000 timesRepeat: [l add: (StackLink with: nil)].
>>> 
>>> [a do: [:ea | 1 + 2]] bench. "'94,600 per second.'"
>>> 
>>> [l do: [:ea | 1 + 2]] bench. "'159,000 per second.'"
>>> 
>>> I expected the opposite! :-O Why is that so?
>> 
>> Look at the implementations of do:.
>> 
>> In the Array case we iterate over the indices using SmallIntegers (fast arithmetic with no allocation) and dereference each element using the Object>>at: primitive, which (apart from type and bounds checking) is simple indexed memory access. So one read per element.
>> 
>> In the LinkedList case each list element is fetched from the previous element, and each value is fetched from that element, so two reads per element.
> 
> Making it all the more surprising that the linked list is twice as fast.

Ugh.  I was blindsided.  I read the numbers as times, not rates.  Marcel, I'm sorry.  OK, indeed, the difference would be the cost of the bounds check in each array access.  It would be interesting to see at what point the list's poorer memory locality makes it worse.  That's difficult to measure, given that locality is determined by allocation history and the GC.
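
For reference, the two do: implementations look roughly like this (paraphrased; exact method bodies vary between Squeak versions, and stock LinkedList>>do: actually evaluates the block with the link itself rather than sending value to it -- the value send below just makes the "two reads" explicit):

```smalltalk
"SequenceableCollection>>do: -- index-based iteration.
 Each pass does SmallInteger arithmetic (no allocation) plus
 one bounds-checked at: read."
do: aBlock
	1 to: self size do:
		[:index | aBlock value: (self at: index)]

"LinkedList>>do: -- pointer-chasing iteration (sketch).
 Each pass does two unchecked inst var reads: the element's
 value and the next link.  No bounds check is needed; the
 traversal ends when nextLink answers nil."
do: aBlock
	| aLink |
	aLink := firstLink.
	[aLink == nil] whileFalse:
		[aBlock value: aLink value.
		 aLink := aLink nextLink]
```

So the array loop pays for a primitive dispatch plus type and bounds checking on every at:, while the list loop pays only for plain inst var reads, which is consistent with the benchmark above.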


> 
> My guess would be that the two inst var reads are way more efficient than the at: primitive.
> 
> - Bert -