leves at caesar.elte.hu
Thu Jun 21 20:51:08 UTC 2018
On Tue, 19 Jun 2018, Eliot Miranda wrote:
> To keep it simple. The scheme really comes from the free lists. There
> is either a 32 or a 64 element array with all the free chunks from
> size 1 to 32 or 1 to 64. So this gives fast allocation for the common
> case of a smallish object. Then what to do with element 0? A binary
> tree works well since it is populated only with larger chunks and,
> given that all chunks of the same size are, effectively, a single node
> in the tree, the tree typically isn't that large. That and some very
> simple balancing hacks keep it from degenerating into a list whenever
> I've looked at its state (although we might investigate organizing it
> as an AVL tree; I like AVL trees but they're a lot of work for little
> gain if in practice a tree [...]
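
To make the scheme concrete, here's a rough C sketch of how I picture
the layout; the names, the 64-entry variant, and the node fields are my
guesses from your description, not the actual Spur code:

#include <stddef.h>

typedef struct FreeChunk {
    size_t slots;               /* chunk size in allocation units */
    struct FreeChunk *next;     /* next chunk of the same size */
    struct FreeChunk *smaller;  /* tree child, smaller sizes (tree nodes only) */
    struct FreeChunk *larger;   /* tree child, larger sizes (tree nodes only) */
} FreeChunk;

#define NumFreeLists 64

/* freeLists[1..NumFreeLists-1] hold chunks of exactly that size;
 * freeLists[0] is the root of a binary tree of larger chunks, where
 * all chunks of one size hang off a single tree node's `next` list. */
static FreeChunk *freeLists[NumFreeLists];

/* Fast path: pop a chunk of exactly n units from its size list. */
static FreeChunk *
allocateFromList(size_t n)
{
    if (n < NumFreeLists && freeLists[n] != NULL) {
        FreeChunk *chunk = freeLists[n];
        freeLists[n] = chunk->next;
        return chunk;
    }
    return NULL;  /* too big or list empty: consult the tree at slot 0 */
}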
Well, a random binary search tree has an average node depth of
~1.39 x log2(n), so performance can be really good as long as the order
in which chunk sizes are added and removed doesn't unbalance the tree
too much.
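
(Where the ~1.39 comes from: the expected average node depth of a
random binary search tree is the textbook 2 ln n, and
2 ln n = (2 ln 2) * log2(n) ≈ 1.386 * log2(n). Nothing specific to the
VM's trees.)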
The reason I asked about this was that I couldn't figure out why the
trees were said to be AVL trees. And I would have used a hash table with
open addressing instead of an AVL tree, because it's fairly easy to
implement and has good performance, while an AVL tree takes a lot more
effort to implement.
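
Something like this is what I have in mind (just a sketch; the table
size and names are made up, and insertion, deletion and growth are
omitted; a full table would loop here, and having to grow it is exactly
the cost you point out below):

#include <stddef.h>

#define TableSize 256  /* assumed power of two */

typedef struct {
    size_t size;    /* chunk size this entry is for; 0 = empty slot */
    void  *chunks;  /* head of the list of free chunks of that size */
} FreeEntry;

static FreeEntry table[TableSize];

/* Linear probing: find the entry for `size`, or the empty slot
 * where it would be inserted. */
static FreeEntry *
lookup(size_t size)
{
    size_t i = size & (TableSize - 1);  /* cheap hash */
    while (table[i].size != 0 && table[i].size != size)
        i = (i + 1) & (TableSize - 1);
    return &table[i];
}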
> The use of 32 or 64 elements allows the use of a bitmask word to serve
> as a non-empty list cache, so that when the system looks for chunks
> bigger than needed (if a smaller list is empty) it is fast to check
> the list(s) at 2x, 3x, etc...
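
Again to make it concrete, a sketch of how I picture the bitmask cache;
the names and the GCC builtin are my choices, not the VM's:

#include <stdint.h>

static uint64_t freeListsMask;  /* bit k set <=> freeLists[k] non-empty */

/* Smallest non-empty list with chunks of at least n units,
 * or 0 if only the tree can satisfy the request.  Assumes 0 < n < 64. */
static int
smallestListFor(unsigned n)
{
    uint64_t candidates = freeListsMask & ~((1ULL << n) - 1);
    if (candidates == 0)
        return 0;                        /* fall back to the tree */
    return __builtin_ctzll(candidates);  /* lowest set bit = best fit */
}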
> A hash table needs heap space to organize and, unlike the binary tree,
> this space may need to grow if the number of free chunks is very
> large.
Based on your answers, I've got the impression that the performance of
these data structures has a negligible effect on overall GC performance,
so it's probably not worth trying to optimize them.