Avi Bryant wrote:
> Seconded. It uses quite a bit more memory than a standard Dictionary, but the performance appears to be impressively flat while growing to very large sizes, even when using the anemic #identityHash.
But you are aware that the current behavior is the result of a very particular set initialization that is easily fixed, yes? Changing the initial set capacity from 3 to 5 will *dramatically* improve the behavior ;-)
Besides, I'd be much more interested in a hashtable/dictionary implementation that preserves insertion order (an element added before another is enumerated before it), to keep enumeration consistent across a replicated computation. I'll definitely need to look into this, so if anyone has an implementation I'm all ears...
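[A common way to get that property, sketched here in Python for illustration (not from this thread), is to pair a hash table with a record of insertion order, in the spirit of Java's LinkedHashMap. The class and method names below are made up for the sketch.]

```python
class InsertionOrderedDict:
    """Hash map that enumerates entries in insertion order.

    Sketch only: pairs a plain dict (for hashed lookup) with a list
    of keys (for insertion order). Removal is O(n) here; a doubly
    linked list of entries would make it O(1), as LinkedHashMap does.
    """

    def __init__(self):
        self._table = {}   # key -> value, hashed access
        self._order = []   # keys in the order they were first added

    def put(self, key, value):
        # Updating an existing key keeps its original position.
        if key not in self._table:
            self._order.append(key)
        self._table[key] = value

    def get(self, key, default=None):
        return self._table.get(key, default)

    def remove(self, key):
        if key in self._table:
            del self._table[key]
            self._order.remove(key)

    def items(self):
        # Enumerate in the order keys were first inserted.
        for key in self._order:
            yield key, self._table[key]
```

The point for replication: two replicas that apply the same sequence of put/remove operations will enumerate their entries identically, regardless of hash values. (Python's built-in dict has guaranteed this iteration order itself since 3.7.)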
Cheers, - Andreas