On 19 sept. 05, at 09:26, Andreas Raab wrote:
Avi Bryant wrote:
Seconded. It uses quite a bit more memory than a standard Dictionary, but the performance appears to be impressively flat while growing to very large sizes, even when using the anemic #identityHash.
But you are aware that the current behavior is the result of a very particular set initialization that is easily fixed, yes? Changing the initial set capacity from 3 to 5 will *dramatically* improve the behavior ;-)
Then please send us a fix so that we can change the default.
Now, what I understood is that the degeneration is bad in the current implementation, since each collision increases the chance that further collisions will occur. So on large dictionaries this is dramatic.
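This is the classic "primary clustering" effect of open addressing with linear probing. A minimal sketch (in Python, not Squeak's actual code, with a hypothetical `LinearProbingTable` class) shows it: once a run of occupied slots forms, every key that hashes into the run must probe past its end, so each new collision lengthens the run and makes the next collision more expensive.

```python
# Sketch of primary clustering in a linear-probing hash table.
# Hypothetical illustration, not Squeak's implementation.

class LinearProbingTable:
    def __init__(self, capacity):
        self.slots = [None] * capacity

    def insert(self, key):
        """Insert key, returning the number of probes needed."""
        i = hash(key) % len(self.slots)
        probes = 1
        while self.slots[i] is not None:
            i = (i + 1) % len(self.slots)   # linear probing: try next slot
            probes += 1
        self.slots[i] = key
        return probes

table = LinearProbingTable(16)
# Keys engineered to share the same home slot (like a weak identityHash):
probes = [table.insert(k) for k in (0, 16, 32, 48, 64)]
print(probes)  # -> [1, 2, 3, 4, 5]: each collision probes one slot further
```

Each insertion into the cluster costs one probe more than the last, which is why a weak hash like `#identityHash` degrades so sharply at large sizes.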
Besides, I'd be much more interested in a hashtable/dictionary implementation that preserves ordering (e.g., if an element is added before another it will be enumerated before the other) to preserve consistency in a replicated computation. I'll definitely need to look into this so if anyone has an implementation I'm all ears...
Does the default one preserve ordering? Not that anything prevents us from having two kinds of dictionaries with specific behaviors.
Stef
Cheers,
- Andreas