On 20 October 2013 15:42, Igor Stasenko email@example.com wrote:
On 22 September 2013 20:29, Eliot Miranda firstname.lastname@example.org wrote:
On Sat, Sep 21, 2013 at 11:39 AM, Levente Uzonyi email@example.com wrote:
Great progress. I've got a few questions about Spur and the bootstrap:
- How will HashedCollections other than MethodDictionaries get rehashed?
How will #hash be calculated from #identityHash?
The bootstrap first builds an image in the new format, then finds all classes in it that implement rehash, and then uses the simulator to send rehash to the relevant objects, so that each collection is rehashed by its own rehash method instead of the bootstrap assuming its format. In fact, the bootstrap does not rehash all objects that understand rehash, since Symbol is one of them and the Symbol table is rehashed anyway.
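The rehashing pass can be sketched roughly as follows (a Python stand-in; the function name, the skip list, and the duck-typed rehash lookup are illustrative assumptions, not the actual bootstrap code, which sends the Smalltalk message #rehash through the simulator):

```python
# Rough sketch of the bootstrap's rehashing pass. The key idea: each
# hashed collection rehashes itself via its own rehash method, so the
# bootstrap never has to assume a collection's internal format.
def rehash_all(objects, skip_class_names=('Symbol',)):
    for obj in objects:
        if type(obj).__name__ in skip_class_names:
            continue  # e.g. Symbols: the symbol table is rehashed separately
        rehash = getattr(obj, 'rehash', None)
        if callable(rehash):
            rehash()  # the object rehashes itself

class MethodDictionary:          # stand-in for a hashed collection
    def __init__(self):
        self.rehashed = False
    def rehash(self):
        self.rehashed = True

md = MethodDictionary()
rehash_all([md, 'plain object'])
print(md.rehashed)  # True
```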
hash is computed from identityHash without change, but scaledIdentityHash is changed to avoid creating LargeIntegers (the larger identityHash also needs a smaller shift):

ProtoObjectPROTOTYPEscaledIdentityHash
	"For identityHash values returned by primitive 75, answer such values times 2^8. Otherwise, match the existing identityHash implementation."
	^self identityHash * 256 "bitShift: 8"
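A quick arithmetic check of why the multiplication avoids LargeIntegers (Python; the 22-bit identityHash field and the 31-bit signed SmallInteger range of 32-bit Spur are assumptions from the Spur design, not stated in this thread):

```python
# Spur's header stores a 22-bit identityHash (assumption); on 32-bit Spur
# a SmallInteger holds 31 signed bits, so the largest non-negative
# SmallInteger is 2**30 - 1.
IDENTITY_HASH_BITS = 22
SMALLINT_MAX = 2**30 - 1

def scaled_identity_hash(identity_hash):
    """Mirror scaledIdentityHash: spread hash values by a factor of 2^8."""
    return identity_hash * 256  # same as identity_hash bitShift: 8

# Even the largest possible scaled hash still fits in a SmallInteger:
max_scaled = scaled_identity_hash(2**IDENTITY_HASH_BITS - 1)
print(max_scaled, max_scaled <= SMALLINT_MAX)  # 1073741568 True
```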
- What will happen to primitive 138 and 139? Will we still be able to
iterate over the objects in the order of their creation time?
That's much more difficult. I'll add allInstances and allObjects primitives which can answer the objects instantaneously, without needing to enumerate. One issue is that the scavenger could fire at any time during an enumeration and tenure objects to old space, so in general it is not possible to reliably enumerate via firstInstance/nextInstance or firstObject/nextObject.
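A toy model of why a snapshot-style primitive is more robust than first/next walking (pure Python illustration; the heap is a list and the "scavenge" is a simulated reordering, none of this is VM code):

```python
def snapshot_all_instances(heap, cls):
    """Like the proposed allInstances primitive: gather every instance
    in one atomic pass, before any GC can move objects."""
    return [obj for obj in heap if isinstance(obj, cls)]

def walk_first_next(heap, cls, scavenge_at=None):
    """Naive firstInstance/nextInstance-style walk by heap position.
    If a 'scavenge' reorders the heap mid-walk, objects get skipped
    or visited twice."""
    found, i = [], 0
    while i < len(heap):
        if i == scavenge_at:
            heap.reverse()  # simulated scavenge: objects move
        if isinstance(heap[i], cls):
            found.append(heap[i])
        i += 1
    return found

heap = ['a', 1, 'b', 2, 'c']
safe = snapshot_all_instances(list(heap), str)
racy = walk_first_next(heap, str, scavenge_at=2)
print(safe)  # ['a', 'b', 'c']
print(racy)  # ['a', 'b', 'a'] -- 'c' was missed, 'a' visited twice
```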
Indeed. I think we could breathe more freely if the VM weren't obliged to expose heap enumeration/walking primitives. I just wonder if it is possible to provide alternatives for most cases:
- finding all instances
- finding all pointers to subject
so the heap-walking primitives can R.I.P. But there could be other queries we may need which would require heap walking. What do you think?
More on this matter: I think that for all kinds of queries, like "give me all objects meeting certain criteria", there could be another way.
Basically, we need one primitive that takes two arguments: an object and an array. The first argument defines the root object which the VM should traverse, answering the (possibly huge) array of all objects it can reach starting from it. The second argument is simply a list of objects which the VM should ignore while traversing the graph. This gives the user the opportunity to define a filter that is applied during scanning, instead of filtering afterwards.
(As a variant, there could be a third argument: an array which will contain the answer. If that array is not big enough to fit all the found references, the primitive fails; otherwise it answers the number of filled indices in the array. This gives the user the opportunity to control the size of the array and preallocate it before scanning.)
I think that, with such a primitive, we will no longer need the heap-walking primitives.
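A sketch of this proposed primitive in Python (illustrative only; no such primitive exists in the VM, object graphs are modelled as nested lists/tuples/dicts, and the optional third argument is included as `out`):

```python
from collections import deque

def reachable_from(root, ignore, out=None):
    """Answer all objects reachable from `root`, skipping anything in
    `ignore` and everything only reachable through it. If `out` (the
    optional preallocated answer array) is given, fill it in place and
    return the number of slots used; return None (primitive failure)
    if it is too small."""
    ignore_ids = {id(o) for o in ignore}
    seen, result = set(), []
    queue = deque([root])
    while queue:
        obj = queue.popleft()
        if id(obj) in seen or id(obj) in ignore_ids:
            continue  # filter applied during the scan, not afterwards
        seen.add(id(obj))
        result.append(obj)
        if isinstance(obj, (list, tuple)):
            queue.extend(obj)
        elif isinstance(obj, dict):
            queue.extend(obj.values())
    if out is None:
        return result
    if len(result) > len(out):
        return None  # preallocated array too small: fail
    out[:len(result)] = result
    return len(result)

shared = ['leaf']
graph = [shared, ['skip', shared], 42]
found = reachable_from(graph, ignore=[graph[1]])
print(42 in found, 'skip' in found)  # True False
```

Note that `shared` is still found even though the ignored node also points to it, because another path reaches it; only objects reachable *solely* through ignored nodes are excluded.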
Being able to enumerate in order of creation time is something the VM
could maintain in old space, keeping objects in order therein. But I'd rather drop this property and allow the Spur GC to compact using best-fit and lazy become, which will change the order of objects. How important is being able to enumerate in order of creation? I think it's nice to have but hardly essential. What do y'all think?
IMO, I would never make any assumptions about the order in which objects are located on the heap, nor about the order in which they get enumerated. The word "heap" speaks for itself here. (And anyone who thinks otherwise should be shot on sight. :)
- IIUC the memory is more segmented. Will there be new GC primitives for
various levels of garbage collection? What about the GC parameters?
That remains to be seen. I take requests. I take design suggestions. But I also like to keep things simple.
My current approach is to keep things as compatible as possible to ease adoption. Once Spur is adopted we can start to evolve it to provide appropriate and/or improved memory-management facilities.
- Are #become: and #becomeForward: optimized for the case where both
objects use the same amount of memory?
Yes. This case simply swaps the two objects' contents and adjusts the remembered table accordingly. Tim also suggested optimizing the other case by copying into the smaller object and allocating only one extra clone, which I'll implement soon.
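The same-size fast path can be illustrated with a tiny Python model (lists standing in for fixed-size objects; this is not the VM implementation, just the idea that swapping slot contents makes every existing reference see the exchange):

```python
# Two-way become, same-size fast path: swap the slot contents in place,
# so no copy, no forwarding pointer, and no heap allocation is needed.
def become_same_size(a, b):
    assert len(a) == len(b), 'fast path requires equal slot counts'
    for i in range(len(a)):
        a[i], b[i] = b[i], a[i]

x = [1, 2, 3]
y = ['a', 'b', 'c']
alias = x                # a pre-existing reference to x
become_same_size(x, y)
print(alias)             # ['a', 'b', 'c'] -- the alias sees y's old state
```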
On Sat, 21 Sep 2013, firstname.lastname@example.org wrote:
Eliot Miranda uploaded a new version of VMMaker to project VM Maker: http://source.squeak.org/VMMaker/VMMaker.oscog-eem.399.mcz
==================== Summary ====================
Name: VMMaker.oscog-eem.399
Author: eem
Time: 20 September 2013, 6:28:56.308 pm
UUID: 89f8fefe-b59d-42d7-9c11-7f848d0e5131
Ancestors: VMMaker.oscog-eem.398
A few isIntegerObject:'s => isImmediate:'s in primitives.
The Spur VM now draws its first window!!
A cheer goes up in the crowd of interested spectators.
Probably lots still to do, but it's a nice concrete milestone. Contributing is beyond me at this time, so I would especially like to thank you for this important initiative.
-- best, Eliot
-- Best regards, Igor Stasenko.