[Vm-dev] VM Maker: VMMaker.oscog-cb.2373.mcz
Clément Bera
bera.clement at gmail.com
Thu Apr 26 18:42:56 UTC 2018
Ok, I found the solution.
The problem was that SpurMemoryManager>>#ancilliaryClasses: was only
including self compactorClass as an ancilliary class. Including (self
compactorClass withAllSuperclasses copyUpThrough: SpurCompactor) reverse
instead of self compactorClass allows inheritance to be used. I can now
write postSwizzleAction once in SpurCompactor and I don't have to override
it everywhere.
Damn, working with Slang is a love-hate relationship: I love the simulator,
but right now I hate it.
On Thu, Apr 26, 2018 at 8:19 PM, Clément Bera <bera.clement at gmail.com>
wrote:
>
>
> On Thu, Apr 26, 2018 at 7:39 PM, Eliot Miranda <eliot.miranda at gmail.com>
> wrote:
>
>>
>>
>> On Thu, Apr 26, 2018 at 9:50 AM, Clément Bera <bera.clement at gmail.com>
>> wrote:
>>
>>> But I don't get it. In SpurMemoryManager we have subclasses, and we can
>>> write methods in SpurMemoryManager that are used by all the subclasses.
>>> Cogit has multiple subclasses, and we can use any of the subclasses,
>>> reusing the superclass methods. StackInterpreterPrimitives reuses the
>>> methods from its superclass too. In each case we use one of the
>>> subclasses, but we reuse the superclass methods. Why can't we do that
>>> with the compactors? What makes it harder?
>>>
>>>
>>
>> The constraint is that one cannot have multiple implementations of a
>> method unless they lie along the same inheritance path (or are marked with
>> <doNotTranslate>; these are just forwarders). So one can use super (but
>> the signatures must match perfectly), but one cannot have an implementation
>> of foo in two different hierarchies; hence one cannot have a compact in
>> both SpurPlanningCompactor and SpurSelectiveCompactor (one can, but they
>> can't be included in the same Slang translation; one has to choose one or
>> the other; look at methods in CCodeGenerator; it is just a dictionary from
>> selector to TMethod). Unless, that is, we engineer something like I
>> described below.
>>
>
> I always use a single compactor at runtime.
> I do not want to have multiple implementations of a method along the same
> inheritance path.
> I do not want an implementation in two different hierarchies at runtime.
>
> In SpurMemoryManager, there is SpurMemoryManager>>globalGarbageCollect.
> Spur32BitMemoryManager and Spur64BitMemoryManager do not override this
> method.
> When you select Spur32BitMemoryManager or Spur64BitMemoryManager as the
> memory manager, the Slang compiler generates C code for globalGarbageCollect.
>
> Now, if I write:
>
> SpurCompactor>>postSwizzleAction
>     "do nothing"
>
> and I do not override postSwizzleAction in SpurPlanningCompactor, then when
> I compile the VM selecting SpurPlanningCompactor as the runtime compactor,
> the Slang compiler does not generate C code for postSwizzleAction and the
> linker complains with: 'referenced symbol not implemented: _postSwizzleAction'.
>
> So right now in the compactor classes I have to duplicate the method this
> way:
>
> SpurSweeper>>postSwizzleAction
>     "do nothing"
> SpurPlanningCompactor>>postSwizzleAction
>     "do nothing"
> SpurPigCompactor>>postSwizzleAction
>     "do nothing"
>
> since only SpurSelectiveCompactor overrides it.
> And I have to do that with many methods.
>
> I don't understand why I cannot write code in SpurCompactor like it is
> done in SpurMemoryManager.
>
>
>>
>>> I agree with your refactoring. Right now I have multiple deadlines until
>>> May 18th and the following deadlines are mid June, so I may have a few
>>> spare days at the end of May for pair programming.
>>>
>>>
>>> On Thu, Apr 26, 2018 at 6:39 PM, Eliot Miranda <eliot.miranda at gmail.com>
>>> wrote:
>>>
>>>> Hi Clément,
>>>>
>>>> On Thu, Apr 26, 2018 at 8:51 AM, Clément Bera <bera.clement at gmail.com>
>>>> wrote:
>>>>
>>>>> I fear there is a misunderstanding; I was not trying to have 2
>>>>> compactors. I was just trying to have SelectiveCompactor subclass
>>>>> Sweeper to reuse some methods instead of duplicating them, since
>>>>> SelectiveCompactor includes a sweep phase for non-compacted segments and
>>>>> needs to compute segment occupation. That was not possible because some
>>>>> methods were not generated to C, because they were in the superclass. I
>>>>> cannot even write a method in the common superclass SpurCompactor (for
>>>>> example defaulting postSwizzleAction to do nothing for all compactors
>>>>> but Selective); I have to duplicate that method in all subclasses...
>>>>>
>>>>
>>>> There is no misunderstanding. I understand what you've written and I
>>>> get why you want it. But I think we need two compactors: one for the two
>>>> cases of programmer-initiated stop-the-world garbage collect (e.g. from
>>>> the screen menu) and snapshot, and one for the case of the incremental
>>>> mark-sweep GC (and our current implicit full GC when the scavenger detects
>>>> that the heap has grown by the heapGrowthToSizeGCRatio since the last GC).
>>>>
>>>>
>>>>> Currently, at Slang-to-C compilation time and in the simulator, I change
>>>>> the compactor class setting to use different compactors. Having 2
>>>>> compactors is very interesting, but I've dropped that idea for now since it
>>>>> is not that easy to implement. I wrote SelectiveCompactor as a research
>>>>> experiment, but since the results look good (selective compaction time is
>>>>> on average a quarter of planning's compaction time), we can for sure
>>>>> consider using it for real at some point. Not sure it is top priority though.
>>>>>
>>>>
>>>> It's tedious to rewrite the selectors so they don't conflict. But
>>>> adding another variable to add an incrementalCompactor isn't that hard.
>>>>
>>>>
>>>>>
>>>>> I need to build yet another compactor for my research which simulates
>>>>> the G1 approach (a remembered set per segment for inter-segment references)
>>>>> to compare selective to it... I might not commit that one since it may
>>>>> pollute the code base with additional write barriers (I will see based on
>>>>> what I produce).
>>>>>
>>>>
>>>> I hope you do. The additional write barriers can be disabled in the
>>>> other GCs, can't they?
>>>>
>>>>
>>>>>
>>>>> I feel the infrastructure is quite good right now for research
>>>>> evaluations on compactors. With the simulator it took me only 3 full days
>>>>> to implement SelectiveCompactor and 1 day for Sweeper. My current position
>>>>> is a research position and I am trying to spend part of my time using the
>>>>> Cog as a research framework. We will see how it works out.
>>>>>
>>>>
>>>> Nice!
>>>>
>>>> There is a lot to discuss on restructuring the Slang output and/or
>>>> bootstrapping the VM to be a pure Smalltalk VM. But this is expensive
>>>> work, and it should be discussed in a different thread. Just briefly,
>>>> though, I wonder whether we could effect an inexpensive change if we have Slang
>>>> - identify the interfaces between the main classes/inst vars
>>>> (coInterpreter, cogit, backEnd, methodZone, objectMemory, scavenger,
>>>> compactor, segmentManager)
>>>> - distinguish between "trivial sends" that should still be inlined and
>>>> "major" sends that would remain in the interface
>>>> - map each of these objects to a simple struct-and-function-pointers C
>>>> object representation (still statically, but allowing for limited
>>>> polymorphism, because those that are polymorphic (say compactor) could be
>>>> pointers-to-structs, not structs).
>>>> Then we could
>>>> - switch between compactors as you desire
>>>> - in Sista, switch between a baseline JIT for unoptimized methods and a
>>>> more aggressive JIT for optimized methods.
>>>>
>>>> I estimate this might take someone two to eight weeks, and a pair one
>>>> to four weeks. I'd love to pair doing it.
>>>>
>>>> On Thu, Apr 26, 2018 at 4:46 PM, Eliot Miranda <eliot.miranda at gmail.com
>>>>> > wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Apr 26, 2018, at 7:41 AM, Eliot Miranda <eliot.miranda at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> Hi Clément,
>>>>>>
>>>>>> On Apr 26, 2018, at 5:34 AM, Clément Bera <bera.clement at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> With this micro-bench:
>>>>>> "alloc 320Mb"
>>>>>> keeper1 := Array new: 9.
>>>>>> keeper2 := Array new: 101.
>>>>>> workspaceLocal := Array new: 80.
>>>>>> 1 to: 80 do: [:i | | a |
>>>>>> workspaceLocal at: i put: (a := Array new: 1000).
>>>>>> keeper1 at: i // 10 + 1 put: a.
>>>>>> 1 to: 1000 do: [ :j | | w |
>>>>>> a at: j put: (w := WordArray new: 1000).
>>>>>> keeper2 at: i // 10 + 1 put: w ]].
>>>>>> workspaceLocal := nil.
>>>>>>
>>>>>> Smalltalk garbageCollect.
>>>>>> Smalltalk garbageCollect.
>>>>>> Smalltalk garbageCollect.
>>>>>>
>>>>>> {Smalltalk vmParameterAt: 8.
>>>>>> Smalltalk vmParameterAt: 18}
>>>>>>
>>>>>> Time spent in full GC is:
>>>>>> - planning compactor ~800ms
>>>>>> - selective compactor ~450ms
>>>>>>
>>>>>> Time spent in compaction:
>>>>>> - planning compactor ~450ms
>>>>>> - selective compactor ~125ms
>>>>>>
>>>>>> Obviously this is a GC stress micro-benchmark which does *not* prove
>>>>>> much; it's likely one of the cases where the two compactors differ the
>>>>>> most, and there are other things to consider (the Selective compactor
>>>>>> wastes more memory since it takes it more full GCs to shrink allocated
>>>>>> segments, for example).
>>>>>>
>>>>>> On larger benchmarks compaction time seems higher than expected,
>>>>>> though, so the Selective compactor may be relevant even before
>>>>>> incremental marking. But if we really want to go that way we need to
>>>>>> figure out something for snapshots...
>>>>>>
>>>>>>
>>>>>> I think we need both. I know that's difficult, but a stop-the-world
>>>>>> GC and a snapshot need something like SpurPlanningCompactor, which
>>>>>> - compacts all of memory completely (as much as possible given
>>>>>> potential pinned objects)
>>>>>> - is reasonably efficient (compared to SpurPigCompactor, which lived
>>>>>> up to its name).
>>>>>> But the incremental collector needs incremental compaction, and
>>>>>> SpurSelectiveCompactor does that (& it's really exciting; thank you!).
>>>>>>
>>>>>> Given that Slang translates to C with no objects in the target C, it
>>>>>> means writing things clumsily, or it means engineering some object
>>>>>> support in Slang. For me, KISS implies living with the Slang limitation
>>>>>> for now. So add an incrementalCompactor inst var and come up with a
>>>>>> static renaming to avoid clashes.
>>>>>>
>>>>>> If and when we reengineer to bootstrap the vm properly we can revisit
>>>>>> this, but right now I think the restriction is in the nature of the beast.
>>>>>>
>>>>>>
>>>>>> And since we probably do want to be able to use
>>>>>> SpurSelectiveCompactor when GC'ing in response to growth after scavenge
>>>>>> (i.e. implicitly), I would add a flag to SpurPlanningCompactor and
>>>>>> modify its compact method to test the flag and
>>>>>> invoke SpurSelectiveCompactor if set. If SpurSelectiveCompactor>>#compact
>>>>>> is renamed to e.g. compactLeastDenseSegment then this is straightforward.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Apr 26, 2018 at 1:00 PM, Clément Bera <bera.clement at gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> For the curious folks, here's a description of
>>>>>>> SpurSelectiveCompactor:
>>>>>>>
>>>>>>> SpurSelectiveCompactor compacts memory by selecting the memory
>>>>>>> segments with the most free space and compacting only those, to limit
>>>>>>> fragmentation while being really quick to perform. The algorithm is fast
>>>>>>> mostly because it does not update pointers: they are updated lazily during
>>>>>>> the next marking phase, so there is no need to read the fields of objects
>>>>>>> in memory segments other than the ones compacted.
>>>>>>>
>>>>>>> The algorithm works as follows. First, a global sweep pass iterates
>>>>>>> over the memory linearly, changing unmarked objects to free chunks and
>>>>>>> concatenating free chunks. During this sweep phase, the segments of
>>>>>>> the heap are analysed to determine their percentage of occupation.
>>>>>>> Second, the least occupied segments are compacted by copying the
>>>>>>> remaining live objects into an entirely free segment, called the
>>>>>>> segmentToFill (we detail later in the paragraph where the segmentToFill
>>>>>>> comes from), changing the moved objects to forwarding objects and
>>>>>>> marking the free chunks as unavailable (removed from the free list and
>>>>>>> marked as data objects). Third, the next marking phase removes all
>>>>>>> forwarders. Fourth, at the beginning of the next compaction phase the
>>>>>>> segments compacted by the previous GC can be entirely marked as free
>>>>>>> space (no need to check anything inside; there were only forwarders
>>>>>>> and trash data). One of the compacted segments is then selected as the
>>>>>>> segmentToFill; the others are just marked as free chunks.
>>>>>>>
>>>>>>> The compaction is effectively partial, compacting only the most
>>>>>>> critical segments of the heap to limit fragmentation. Compaction time
>>>>>>> is crazy low, since few objects are moved and pointer updating is done
>>>>>>> lazily during the next marking phase, while memory fragmentation is
>>>>>>> still prevented.
>>>>>>>
>>>>>>> Although it's relevant research-wise, we don't need
>>>>>>> SelectiveCompactor in the short term. Full GC pause time is currently due
>>>>>>> to the stop-the-world mark-and-compact algorithm. The mark pause is the
>>>>>>> longer one, and we need to implement a tri-color incremental marking
>>>>>>> algorithm to solve that problem. Once that is done, compaction time
>>>>>>> becomes the biggest pause, which can still be a problem.
>>>>>>> SelectiveCompactor is a solution to decrease the compaction pause
>>>>>>> (SelectiveCompactor effectively does a sweep, which is very fast, plus a
>>>>>>> partial compaction without pointer updates). The runtime may be slowed
>>>>>>> down a little due to the presence of more forwarders. Marking time is a
>>>>>>> little longer since it needs to remove more forwarders; though, as it is
>>>>>>> incremental, there should not be longer pauses. Overall, the throughput
>>>>>>> might be a little lower (to be confirmed with benchmarks), but pauses
>>>>>>> are definitely smaller.
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Apr 26, 2018 at 11:12 AM, <commits at source.squeak.org> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> ClementBera uploaded a new version of VMMaker to project VM Maker:
>>>>>>>> http://source.squeak.org/VMMaker/VMMaker.oscog-cb.2373.mcz
>>>>>>>>
>>>>>>>> ==================== Summary ====================
>>>>>>>>
>>>>>>>> Name: VMMaker.oscog-cb.2373
>>>>>>>> Author: cb
>>>>>>>> Time: 26 April 2018, 11:11:47.778949 am
>>>>>>>> UUID: 9b389323-2181-4503-a361-d66ad87fa2de
>>>>>>>> Ancestors: VMMaker.oscog-cb.2372
>>>>>>>>
>>>>>>>> Remove the APIs I added to iterate over free chunks (there was an
>>>>>>>> existing API).
>>>>>>>>
>>>>>>>> Added assertValidFreeObject: to avoid stepping all the time into
>>>>>>>> isValidFreeObject to know what's wrong. Obviously this new method
>>>>>>>> cannot be used in the C code or we would have code in the
>>>>>>>> assertion-free VM, leading to the following pattern:
>>>>>>>>     self "Sorry stepping over isValidFreeObject all the time was killing me"
>>>>>>>>         cCode: [self assert: (self isValidFreeObject: child)]
>>>>>>>>         inSmalltalk: [self assertValidFreeObject: child].
>>>>>>>>
>>>>>>>> Since I now use forwarders in fullGC, adapted the heap space
>>>>>>>> integrity check.
>>>>>>>>
>>>>>>>> I was a little too aggressive with the assertion in detachFreeObject:
>>>>>>>> in the last commit; reverted that.
>>>>>>>>
>>>>>>>> And SpurSelectiveCompactor is now working as an alternative compactor
>>>>>>>> to the Planning and Pig compactors and the Sweeper! So exciting.
>>>>>>>> Still needs some tuning for production use (mostly, snapshots consume
>>>>>>>> a lot of memory). SpurSelectiveCompactor compaction time is crazy low
>>>>>>>> (almost as fast as a sweep algorithm).
>>>>>>>>
>>>>>>>> I may write yet another compactor since I need to compare
>>>>>>>> SelectiveCompactor with a Garbage First multi-remembered-table
>>>>>>>> approach for research purposes...
>>>>>>>>
>>>>>>>> =============== Diff against VMMaker.oscog-cb.2372 ===============
>>>>>>>>
>>>>>>>> Item was removed:
>>>>>>>> - ----- Method: SpurMemoryManager>>allOldSpaceFreeChunksDo: (in category 'object enumeration') -----
>>>>>>>> - allOldSpaceFreeChunksDo: aBlock
>>>>>>>> -     <inline: true>
>>>>>>>> -     <doNotGenerate> "Could be generated, but used for debug only"
>>>>>>>> -     self allOldSpaceFreeChunksFrom: self firstObject do: aBlock!
>>>>>>>>
>>>>>>>> Item was removed:
>>>>>>>> - ----- Method: SpurMemoryManager>>allOldSpaceFreeChunksFrom:do: (in category 'object enumeration') -----
>>>>>>>> - allOldSpaceFreeChunksFrom: initialObject do: aBlock
>>>>>>>> -     <inline: true>
>>>>>>>> -     <doNotGenerate> "Could be generated, but used for debug only"
>>>>>>>> -     self allOldSpaceEntitiesFrom: initialObject
>>>>>>>> -         do: [:objOop|
>>>>>>>> -             (self isFreeObject: objOop) ifTrue:
>>>>>>>> -                 [aBlock value: objOop]]!
>>>>>>>>
>>>>>>>> Item was changed:
>>>>>>>> ----- Method: SpurMemoryManager>>allocateOldSpaceChunkOfExactlyBytes:suchThat: (in category 'free space') -----
>>>>>>>> allocateOldSpaceChunkOfExactlyBytes: chunkBytes suchThat: acceptanceBlock
>>>>>>>>     "Answer a chunk of oldSpace from the free lists that satisfies acceptanceBlock,
>>>>>>>>      if one of this size is available, otherwise answer nil.  N.B. the chunk is simply a
>>>>>>>>      pointer, it has no valid header.  The caller *must* fill in the header correctly."
>>>>>>>>     <var: #chunkBytes type: #usqInt>
>>>>>>>>     | index node next prev child childBytes |
>>>>>>>>     <inline: true> "must inline for acceptanceBlock"
>>>>>>>>     "for debugging:" "totalFreeOldSpace := self totalFreeListBytes"
>>>>>>>>
>>>>>>>>     index := chunkBytes / self allocationUnit.
>>>>>>>>     index < self numFreeLists ifTrue:
>>>>>>>>         [(freeListsMask anyMask: 1 << index) ifTrue:
>>>>>>>>             [(node := freeLists at: index) = 0
>>>>>>>>                 ifTrue: [freeListsMask := freeListsMask - (1 << index)]
>>>>>>>>                 ifFalse:
>>>>>>>>                     [prev := 0.
>>>>>>>>                      [node ~= 0] whileTrue:
>>>>>>>>                         [self assert: node = (self startOfObject: node).
>>>>>>>>                          self assert: (self isValidFreeObject: node).
>>>>>>>>                          next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
>>>>>>>>                          (acceptanceBlock value: node) ifTrue:
>>>>>>>>                             [prev = 0
>>>>>>>>                                 ifTrue: [freeLists at: index put: next]
>>>>>>>>                                 ifFalse: [self storePointer: self freeChunkNextIndex ofFreeChunk: prev withValue: next].
>>>>>>>>                              totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
>>>>>>>>                              ^node].
>>>>>>>>                          prev := node.
>>>>>>>>                          node := next]]].
>>>>>>>>      ^nil].
>>>>>>>>
>>>>>>>>     "Large chunk.  Search the large chunk list.
>>>>>>>>      Large chunk list organized as a tree, each node of which is a list of
>>>>>>>>      chunks of the same size. Beneath the node are smaller and larger
>>>>>>>>      blocks.  When the search ends parent should hold the first chunk of
>>>>>>>>      the same size as chunkBytes, or 0 if none."
>>>>>>>>     node := 0.
>>>>>>>>     child := freeLists at: 0.
>>>>>>>>     [child ~= 0] whileTrue:
>>>>>>>> +       [self "Sorry stepping over isValidFreeObject all the time was killing me"
>>>>>>>> +           cCode: [self assert: (self isValidFreeObject: child)]
>>>>>>>> +           inSmalltalk: [self assertValidFreeObject: child].
>>>>>>>> -       [self assert: (self isValidFreeObject: child).
>>>>>>>>          childBytes := self bytesInObject: child.
>>>>>>>>          childBytes = chunkBytes
>>>>>>>>             ifTrue: "size match; try to remove from list at node first."
>>>>>>>>                 [node := child.
>>>>>>>>                  [prev := node.
>>>>>>>>                   node := self fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
>>>>>>>>                   node ~= 0] whileTrue:
>>>>>>>>                     [(acceptanceBlock value: node) ifTrue:
>>>>>>>>                         [self assert: (self isValidFreeObject: node).
>>>>>>>>                          self storePointer: self freeChunkNextIndex
>>>>>>>>                             ofFreeChunk: prev
>>>>>>>>                             withValue: (self fetchPointer: self freeChunkNextIndex ofFreeChunk: node).
>>>>>>>>                          totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
>>>>>>>>                          ^self startOfObject: node]].
>>>>>>>>                  (acceptanceBlock value: child) ifFalse:
>>>>>>>>                     [^nil]. "node was right size but unaceptable."
>>>>>>>>                  next := self fetchPointer: self freeChunkNextIndex ofFreeChunk: child.
>>>>>>>>                  next = 0
>>>>>>>>                     ifTrue: "no list; remove the interior node"
>>>>>>>>                         [self unlinkSolitaryFreeTreeNode: child]
>>>>>>>>                     ifFalse: "list; replace node with it"
>>>>>>>>                         [self inFreeTreeReplace: child with: next].
>>>>>>>>                  totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
>>>>>>>>                  ^self startOfObject: child]
>>>>>>>>             ifFalse: "no size match; walk down the tree"
>>>>>>>>                 [child := self fetchPointer: (childBytes < chunkBytes
>>>>>>>>                                 ifTrue: [self freeChunkLargerIndex]
>>>>>>>>                                 ifFalse: [self freeChunkSmallerIndex])
>>>>>>>>                         ofFreeChunk: child]].
>>>>>>>>     ^nil!
>>>>>>>>
>>>>>>>> Item was added:
>>>>>>>> + ----- Method: SpurMemoryManager>>assertValidFreeObject: (in category 'free space') -----
>>>>>>>> + assertValidFreeObject: objOop
>>>>>>>> +     <doNotGenerate> "If you want to generate this you want to use 'self assert: (self isValidFreeObject: objOop)' instead not to generate code in assertion-free VMs"
>>>>>>>> +     | chunk |
>>>>>>>> +     "duplicated assertions from isValidFreeObject: because I need to know what is wrong not only that it is not valid (I got bored of stepping inside isValidFreeObject:...)"
>>>>>>>> +     self assert: (self oop: (self addressAfter: objOop) isLessThanOrEqualTo: endOfMemory).
>>>>>>>> +     chunk := self fetchPointer: self freeChunkNextIndex ofFreeChunk: objOop.
>>>>>>>> +     self assert: (chunk = 0 or: [self isFreeOop: chunk]).
>>>>>>>> +     (self isLargeFreeObject: objOop) ifTrue: [
>>>>>>>> +         "Tree assertions"
>>>>>>>> +         chunk := self fetchPointer: self freeChunkParentIndex ofFreeChunk: objOop.
>>>>>>>> +         self assert: (chunk = 0 or: [(self isFreeOop: chunk) and: [self isLargeFreeObject: chunk]]).
>>>>>>>> +         chunk := self fetchPointer: self freeChunkSmallerIndex ofFreeChunk: objOop.
>>>>>>>> +         self assert: (chunk = 0 or: [(self isFreeOop: chunk) and: [self isLargeFreeObject: chunk]]).
>>>>>>>> +         chunk := self fetchPointer: self freeChunkLargerIndex ofFreeChunk: objOop.
>>>>>>>> +         self assert: (chunk = 0 or: [(self isFreeOop: chunk) and: [self isLargeFreeObject: chunk]]). ]!
>>>>>>>>
>>>>>>>> Item was changed:
>>>>>>>> ----- Method: SpurMemoryManager>>checkHeapFreeSpaceIntegrity (in category 'debug support') -----
>>>>>>>> checkHeapFreeSpaceIntegrity
>>>>>>>>     "Perform an integrity/leak check using the heapMap.  Assume clearLeakMapAndMapAccessibleFreeSpace
>>>>>>>>      has set a bit at each free chunk's header.  Scan all objects in the heap checking that no pointer points
>>>>>>>>      to a free chunk and that all free chunks that refer to others refer to marked chunks.  Answer if all checks pass."
>>>>>>>>     | ok total |
>>>>>>>>     <inline: false>
>>>>>>>>     <var: 'total' type: #usqInt>
>>>>>>>>     ok := true.
>>>>>>>>     total := 0.
>>>>>>>>     0 to: self numFreeLists - 1 do:
>>>>>>>>         [:i|
>>>>>>>>         (freeLists at: i) ~= 0 ifTrue:
>>>>>>>>             [(heapMap heapMapAtWord: (self pointerForOop: (freeLists at: i))) = 0 ifTrue:
>>>>>>>>                 [coInterpreter print: 'leak in free list '; printNum: i; print: ' to non-free '; printHex: (freeLists at: i); cr.
>>>>>>>>                  self eek.
>>>>>>>>                  ok := false]]].
>>>>>>>>
>>>>>>>>     "Excuse the duplication but performance is at a premium and we avoid
>>>>>>>>      some tests by splitting the newSpace and oldSpace enumerations."
>>>>>>>>     self allNewSpaceEntitiesDo:
>>>>>>>>         [:obj| | fieldOop |
>>>>>>>>          (self isFreeObject: obj)
>>>>>>>>             ifTrue:
>>>>>>>>                 [coInterpreter print: 'young object '; printHex: obj; print: ' is free'; cr.
>>>>>>>>                  self eek.
>>>>>>>>                  ok := false]
>>>>>>>>             ifFalse:
>>>>>>>>                 [0 to: (self numPointerSlotsOf: obj) - 1 do:
>>>>>>>>                     [:fi|
>>>>>>>>                      fieldOop := self fetchPointer: fi ofObject: obj.
>>>>>>>>                      (self isNonImmediate: fieldOop) ifTrue:
>>>>>>>>                         [(heapMap heapMapAtWord: (self pointerForOop: fieldOop)) ~= 0 ifTrue:
>>>>>>>>                             [coInterpreter print: 'object leak in '; printHex: obj; print: ' @ '; printNum: fi; print: ' = '; printHex: fieldOop; print: ' is free'; cr.
>>>>>>>>                              self eek.
>>>>>>>>                              ok := false]]]]].
>>>>>>>>     self allOldSpaceEntitiesDo:
>>>>>>>>         [:obj| | fieldOop |
>>>>>>>>          (self isFreeObject: obj)
>>>>>>>>             ifTrue:
>>>>>>>>                 [(heapMap heapMapAtWord: (self pointerForOop: obj)) = 0 ifTrue:
>>>>>>>>                     [coInterpreter print: 'leak in free chunk '; printHex: obj; print: ' is unmapped?!! '; cr.
>>>>>>>>                      self eek.
>>>>>>>>                      ok := false].
>>>>>>>>                  fieldOop := self fetchPointer: self freeChunkNextIndex ofFreeChunk: obj.
>>>>>>>>                  (fieldOop ~= 0
>>>>>>>>                   and: [(heapMap heapMapAtWord: (self pointerForOop: fieldOop)) = 0]) ifTrue:
>>>>>>>>                     [coInterpreter print: 'leak in free chunk '; printHex: obj; print: ' @ 0 = '; printHex: fieldOop; print: ' is unmapped'; cr.
>>>>>>>>                      self eek.
>>>>>>>>                      ok := false].
>>>>>>>>                  (self isLargeFreeObject: obj) ifTrue:
>>>>>>>>                     [self freeChunkParentIndex to: self freeChunkLargerIndex do:
>>>>>>>>                         [:fi|
>>>>>>>>                          fieldOop := self fetchPointer: fi ofFreeChunk: obj.
>>>>>>>>                          (fieldOop ~= 0
>>>>>>>>                           and: [(heapMap heapMapAtWord: (self pointerForOop: fieldOop)) = 0]) ifTrue:
>>>>>>>>                             [coInterpreter print: 'leak in free chunk '; printHex: obj; print: ' @ '; printNum: fi; print: ' = '; printHex: fieldOop; print: ' is unmapped'; cr.
>>>>>>>>                              self eek.
>>>>>>>>                              ok := false]]].
>>>>>>>>                  total := total + (self bytesInObject: obj)]
>>>>>>>>             ifFalse:
>>>>>>>>                 [0 to: (self numPointerSlotsOf: obj) - 1 do:
>>>>>>>>                     [:fi|
>>>>>>>> +                    (self isForwarded: obj)
>>>>>>>> +                       ifTrue:
>>>>>>>> +                           [self assert: fi = 0. "I'm now trying to use forwarders in GC algorithms..."
>>>>>>>> +                            fieldOop := self fetchPointer: fi ofMaybeForwardedObject: obj]
>>>>>>>> +                       ifFalse: "We keep #fetchPointer:ofObject: API here for assertions"
>>>>>>>> +                           [fieldOop := self fetchPointer: fi ofObject: obj].
>>>>>>>> -                    fieldOop := self fetchPointer: fi ofObject: obj.
>>>>>>>>                      (self isNonImmediate: fieldOop) ifTrue:
>>>>>>>>                         [(heapMap heapMapAtWord: (self pointerForOop: fieldOop)) ~= 0 ifTrue:
>>>>>>>>                             [coInterpreter print: 'object leak in '; printHex: obj; print: ' @ '; printNum: fi; print: ' = '; printHex: fieldOop; print: ' is free'; cr.
>>>>>>>>                              self eek.
>>>>>>>>                              ok := false]]]]].
>>>>>>>>     total ~= totalFreeOldSpace ifTrue:
>>>>>>>>         [coInterpreter print: 'incorrect totalFreeOldSpace; expected '; printNum: totalFreeOldSpace; print: ' found '; printNum: total; cr.
>>>>>>>>          self eek.
>>>>>>>>          ok := false].
>>>>>>>>     ^ok!
>>>>>>>>
>>>>>>>> Item was changed:
>>>>>>>> ----- Method: SpurMemoryManager>>detachFreeObject: (in category 'free space') -----
>>>>>>>> detachFreeObject: freeChunk
>>>>>>>>     "This is a rare operation, so its efficiency isn't critical.
>>>>>>>>      Having a valid prev link for tree nodes would help."
>>>>>>>>     <inline: false>
>>>>>>>>     | chunkBytes result |
>>>>>>>>     chunkBytes := self bytesInObject: freeChunk.
>>>>>>>>     result := self allocateOldSpaceChunkOfExactlyBytes: chunkBytes suchThat: [:f| f = freeChunk].
>>>>>>>>     self assert: result = (self startOfObject: freeChunk).
>>>>>>>> -   "Following is assertion only. Typical problem is that the free structures (tree/list) keep references to detached object somehow"
>>>>>>>> -   self cCode: '' inSmalltalk:
>>>>>>>> -       [self allOldSpaceFreeChunksDo:
>>>>>>>> -           [ :f | self assert: (self isValidFreeObject: f)]].
>>>>>>>> !
>>>>>>>>
>>>>>>>> Item was changed:
>>>>>>>> ----- Method: SpurSelectiveCompactor>>compactSegmentsToCompact (in category 'compaction') -----
>>>>>>>> compactSegmentsToCompact
>>>>>>>>     "Forwards all objects in segments to compact and removes their freechunks"
>>>>>>>>     | freeStart |
>>>>>>>>     freeStart := segmentToFill segStart.
>>>>>>>> +
>>>>>>>> +   "Removes initial free chunk in segment to fill... (Segment is entirely free)"
>>>>>>>> +   manager detachFreeObject: (manager objectStartingAt: freeStart).
>>>>>>>> +
>>>>>>>> +   "Compact each segment to compact..."
>>>>>>>>     0 to: manager numSegments - 1 do:
>>>>>>>>         [:i| | segInfo |
>>>>>>>>          segInfo := self addressOf: (manager segmentManager segments at: i).
>>>>>>>>          (self isSegmentBeingCompacted: segInfo)
>>>>>>>>             ifTrue: [freeStart := self compactSegment: segInfo freeStart: freeStart]].
>>>>>>>>
>>>>>>>>     "Final free chunk in segment to fill..."
>>>>>>>>     manager
>>>>>>>>         addFreeChunkWithBytes: segmentToFill segSize - manager bridgeSize + segmentToFill segStart - freeStart
>>>>>>>>         at: freeStart.
>>>>>>>>
>>>>>>>> +   "Follow stack zone and caches..."
>>>>>>>>     self postForwardingAction
>>>>>>>> !
>>>>>>>>
>>>>>>>> Item was changed:
>>>>>>>> ----- Method: SpurSelectiveCompactor>>findAndSetSegmentToFill (in category 'freeing') -----
>>>>>>>> findAndSetSegmentToFill
>>>>>>>>     0 to: manager numSegments - 1 do:
>>>>>>>>         [:i| | segInfo firstEntity |
>>>>>>>>          segInfo := self addressOf: (manager segmentManager segments at: i).
>>>>>>>>          firstEntity := manager objectStartingAt: segInfo segStart.
>>>>>>>>          ((manager isFreeObject: firstEntity) and: [(manager objectAfter: firstEntity limit: manager endOfMemory) = (manager segmentManager bridgeFor: segInfo)])
>>>>>>>> +           ifTrue: [segmentToFill := segInfo. ^0]].
>>>>>>>> -           ifTrue: [segmentToFill := segInfo. manager detachFreeObject: firstEntity. ^0]].
>>>>>>>> !
>>>>>>>>
>>>>>>>> Item was changed:
>>>>>>>> ----- Method: SpurSelectiveCompactor>>freePastSegmentsAndSetSegmentToFill (in category 'freeing') -----
>>>>>>>> freePastSegmentsAndSetSegmentToFill
>>>>>>>> +   "The first segment being claimed met becomes the segmentToFill. The others are just freed."
>>>>>>>> -   "The first segment being claimed met becomes the segmentToFill. The others are just freed"
>>>>>>>>     segmentToFill := nil.
>>>>>>>>     0 to: manager numSegments - 1 do:
>>>>>>>>         [:i| | segInfo |
>>>>>>>>          segInfo := self addressOf: (manager segmentManager segments at: i).
>>>>>>>>          (self isSegmentBeingCompacted: segInfo)
>>>>>>>>             ifTrue:
>>>>>>>> +               [self freeSegment: segInfo.
>>>>>>>> +                segmentToFill ifNil: [segmentToFill := segInfo]]]!
>>>>>>>> -               [segmentToFill
>>>>>>>> -                   ifNil: [segmentToFill := segInfo]
>>>>>>>> -                   ifNotNil: [self freeSegment: segInfo]]]!
>>>>>>>>
>>>>>>>> Item was changed:
>>>>>>>> ----- Method: SpurSelectiveCompactor>>selectiveCompaction (in category 'compaction') -----
>>>>>>>> selectiveCompaction
>>>>>>>>     "Figures out which segments to compact and compact them into segmentToFill"
>>>>>>>>     | atLeastOneSegmentToCompact |
>>>>>>>>     self assertNoSegmentBeingCompacted.
>>>>>>>>     atLeastOneSegmentToCompact := self computeSegmentsToCompact.
>>>>>>>>     "If no compaction we don't pay forwarding cost (stack scan, cache scan, etc.)
>>>>>>>>      and we don't allocate segmentToFill if none available."
>>>>>>>>     atLeastOneSegmentToCompact
>>>>>>>>         ifTrue:
>>>>>>>>             [self assert: segmentToFill ~~ nil.
>>>>>>>> +            self compactSegmentsToCompact].
>>>>>>>> -            self compactSegmentsToCompact]
>>>>>>>> -       ifFalse:
>>>>>>>> -           [segmentToFill ifNotNil: [self freeSegment: segmentToFill]].
>>>>>>>>     manager checkFreeSpace: GCModeFull.!
>>>>>>>>
>>>>>>>> Item was added:
>>>>>>>> + ----- Method: SpurSelectiveCompactorSimulator>>selectiveCompaction (in category 'compaction') -----
>>>>>>>> + selectiveCompaction
>>>>>>>> +   super selectiveCompaction.
>>>>>>>> +   manager allFreeObjectsDo: [:objOop | manager assertValidFreeObject: objOop]!
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Clément Béra
>>>>>>> https://clementbera.github.io/
>>>>>>> https://clementbera.wordpress.com/
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Clément Béra
>>>>>> https://clementbera.github.io/
>>>>>> https://clementbera.wordpress.com/
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Clément Béra
>>>>> https://clementbera.github.io/
>>>>> https://clementbera.wordpress.com/
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> _,,,^..^,,,_
>>>> best, Eliot
>>>>
>>>
>>>
>>>
>>> --
>>> Clément Béra
>>> https://clementbera.github.io/
>>> https://clementbera.wordpress.com/
>>>
>>
>>
>>
>> --
>> _,,,^..^,,,_
>> best, Eliot
>>
>
>
>
> --
> Clément Béra
> https://clementbera.github.io/
> https://clementbera.wordpress.com/
>
--
Clément Béra
https://clementbera.github.io/
https://clementbera.wordpress.com/