[Vm-dev] VM Maker: VMMaker.oscog-cb.2373.mcz

Clément Bera bera.clement at gmail.com
Thu Apr 26 15:51:09 UTC 2018


I fear there is a misunderstanding: I was not trying to have 2 compactors.
I was just trying to have SelectiveCompactor subclass Sweeper to reuse
some methods instead of duplicating them, since SelectiveCompactor includes
a sweep phase for non-compacted segments and needs to compute segment
occupation. That was not possible because some methods were not generated
to C when they lived in the superclass. I cannot even write a method in the
common superclass SpurCompactor (for example defaulting postSwizzleAction
to do nothing for all compactors but Selective); instead I have to
duplicate that method in all subclasses...
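
Concretely, because Slang only translates the methods found in the
selected compactor class itself, the no-op default has to be copy-pasted
into each subclass. A minimal sketch of what gets duplicated (the empty
body shown here is only illustrative):

SpurSweeper >> postSwizzleAction
	"Nothing to do after swizzling for this compactor. The same
	 empty method must be repeated in SpurPlanningCompactor and the
	 other compactors, because Slang does not pick it up from the
	 shared superclass SpurCompactor."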

Currently, at Slang-to-C compilation time and in the simulator, I change
the compactor class setting to use different compactors. Having 2
compactors at once would be very interesting, but I've dropped that idea
for now since it is not that easy to implement. I wrote SelectiveCompactor
as a research experiment, but since the results look good (selective
compaction time is on average a quarter of planning's compaction time), we
can certainly consider using it for real at some point. I am not sure it
is a top priority though.
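
For reference, switching compactors amounts to changing a class-side
answer along these lines and regenerating the sources or restarting the
simulator (a minimal sketch; the actual hook in VMMaker may be named or
wired differently):

SpurMemoryManager class >> compactorClass
	"Answer the compactor class to translate to C and to use in the
	 simulator, e.g. SpurPlanningCompactor, SpurSweeper or
	 SpurSelectiveCompactor."
	^SpurSelectiveCompactor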

I need to build yet another compactor for my research which simulates the
G1 approach (a remembered set per segment for inter-segment references) to
compare selective compaction against it... I might not commit that one
since it may pollute the code base with additional write barriers (I will
see based on what I produce).
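
The extra barrier I am worried about would look roughly like the sketch
below (purely illustrative, with made-up names, not VMMaker code): every
store that crosses a segment boundary records the referring object in the
target segment's remembered set, so that a segment can later be evacuated
without scanning the rest of the heap for incoming pointers.

SpurG1LikeCompactor >> noteStoreOf: referent into: referrer
	"Hypothetical per-segment remembered-set write barrier (all names
	 illustrative). Intra-segment references need no remembering:
	 the whole segment is scanned when it is evacuated."
	| sourceSegment targetSegment |
	sourceSegment := manager segmentIndexContaining: referrer.
	targetSegment := manager segmentIndexContaining: referent.
	sourceSegment = targetSegment ifTrue: [^self].
	(rememberedSets at: targetSegment + 1) add: referrer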

I feel the infrastructure is quite good right now for research evaluations
of compactors. With the simulator it took me only 3 full days to implement
SelectiveCompactor and 1 day for Sweeper. My current position is a research
position and I am trying to spend part of my time using the Cog as a
research framework. We will see how it works out.




On Thu, Apr 26, 2018 at 4:46 PM, Eliot Miranda <eliot.miranda at gmail.com>
wrote:

>
>
>
> On Apr 26, 2018, at 7:41 AM, Eliot Miranda <eliot.miranda at gmail.com>
> wrote:
>
> Hi Clément,
>
> On Apr 26, 2018, at 5:34 AM, Clément Bera <bera.clement at gmail.com> wrote:
>
> With this micro-bench:
> "alloc 320Mb"
> keeper1 := Array new: 9.
> keeper2 := Array new: 101.
> workspaceLocal := Array new: 80.
> 1 to: 80 do: [:i | | a |
> workspaceLocal at: i put: (a := Array new: 1000).
> keeper1 at: i // 10 + 1 put: a.
> 1 to: 1000 do: [ :j | | w |
> a at: j put: (w := WordArray new: 1000).
> keeper2 at: i // 10 + 1 put: w ]].
> workspaceLocal := nil.
>
> Smalltalk garbageCollect.
> Smalltalk garbageCollect.
> Smalltalk garbageCollect.
>
> {Smalltalk vmParameterAt: 8.
> Smalltalk vmParameterAt: 18}
>
> Time spent in full GC is:
> - planning compactor ~800ms
> - selective compactor ~450ms
>
> Time spent in compaction:
> - planning compactor ~450ms
> - selective compactor ~125ms
>
> Obviously this is a GC stress micro-benchmark which does *not* prove much;
> it's likely one of the cases where the two compactors differ the most,
> and there are other things to consider (the Selective compactor wastes
> more memory since, for example, it takes it more full GCs to shrink
> allocated segments).
>
> On larger benchmarks compaction time seems higher than expected though,
> so the Selective compactor may be relevant even prior to incremental
> marking. But if we really want to go that way we need to figure out
> something for snapshots...
>
>
> I think we need both.  I know that's difficult, but a stop-the-world GC
> and a snapshot need something like SpurPlanningCompactor which
> - compacts all of memory completely (as much as possible given potential
> pinned objects)
> - is reasonably efficient (compared to SpurPigCompactor, which lived up to
> its name)
> But the incremental collector needs incremental compaction and SpurSelectiveCompactor
> does that (& it's really exciting; thank you!).
>
> Given that Slang does translation to C with no objects in the target C, it
> means writing things clumsily, or it means engineering some object support
> in Slang.  For me, KISS implies living with the Slang limitation for now.
> So add an incrementalCollector inst var and come up with a static renaming
> to avoid clashes.
>
> If and when we reengineer to bootstrap the vm properly we can revisit
> this, but right now I think the restriction is in the nature of the beast.
>
>
> And since we probably do want to be able to use SpurSelectiveCompactor
> when GC'ing in response to growth after scavenge (i.e. implicitly), I
> would add a flag to SpurPlanningCompactor and modify its compact method
> to test the flag and invoke SpurSelectiveCompactor if set.
> If SpurSelectiveCompactor>>#compact is renamed to e.g.
> compactLeastDenseSegment then this is straightforward.
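
If I understand the suggestion correctly, it would amount to something
like the sketch below (the flag, the selectiveCompactor variable and the
fallback selector are only illustrative):

SpurPlanningCompactor >> compact
	"Sketch only: useSelectiveCompaction, selectiveCompactor and
	 compactWholeHeap are illustrative names."
	useSelectiveCompaction
		ifTrue: ["implicit GC after growth: evacuate only the least
			 dense segments"
			selectiveCompactor compactLeastDenseSegment]
		ifFalse: ["explicit GC or snapshot: full planning compaction"
			self compactWholeHeap]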
>
>
>
> On Thu, Apr 26, 2018 at 1:00 PM, Clément Bera <bera.clement at gmail.com>
> wrote:
>
>> For the curious folks, here's a description of SpurSelectiveCompactor:
>>
>> SpurSelectiveCompactor compacts memory by selecting the memory segments
>> with the most free space and compacting only those, to limit
>> fragmentation while being really quick to perform. The algorithm is fast
>> mostly because it does not update pointers: they are updated lazily
>> during the next marking phase, so there is no need to read the fields of
>> objects in memory segments other than the ones being compacted.
>>
>> The algorithm works as follows. First, a global sweep pass iterates over
>> the memory linearly, changing unmarked objects to free chunks and
>> concatenating free chunks. During this sweep phase, the segments of the
>> heap are analysed to determine their percentage of occupation. Second,
>> the least occupied segments are compacted by copying their remaining
>> live objects into an entirely free segment, called the segmentToFill (we
>> explain below where the segmentToFill comes from), turning the original
>> objects into forwarders to their new locations and marking the free
>> chunks as unavailable (removed from the free lists and marked as data
>> objects). Third, the next marking phase removes all forwarders. Fourth,
>> at the beginning of the next compaction phase, the segments compacted
>> during the previous GC can be marked as entirely free (no need to check
>> anything inside: they contain only forwarders and trash data). One of
>> these compacted segments is then selected as the segmentToFill; the
>> others are just marked as free chunks.
>>
>> The compaction is effectively partial, compacting only the most critical
>> segments of the heap to limit fragmentation. Compaction time is crazy
>> low, since few objects are moved and pointer updating is done lazily
>> during the next marking phase, while memory fragmentation is still kept
>> under control.
>>
>> Although it's relevant research-wise, we don't need SelectiveCompactor
>> in the short term. Full GC pause time is currently dominated by the
>> stop-the-world mark-and-compact algorithm. The mark pause is the longer
>> one, and we need to implement a tri-color incremental marking algorithm
>> to solve that problem. Once that is done, compaction time becomes the
>> biggest pause, which can still be a problem. SelectiveCompactor is a
>> solution to decrease the compaction pause (it effectively does a sweep,
>> which is very fast, plus a partial compaction without pointer updates).
>> The runtime may be slowed down a little due to the presence of more
>> forwarders. Marking time is a little longer since it needs to remove
>> more forwarders, but as marking is incremental there should not be
>> longer pauses. Overall, the throughput might be a little lower (to be
>> confirmed with benchmarks), but pauses are definitely smaller.
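
To make the phases above concrete, the top-level structure is roughly the
following (a simplified sketch; the real method names in VMMaker may
differ in places):

SpurSelectiveCompactor >> compact
	"Reclaim the segments compacted during the previous GC and pick
	 the new segmentToFill, sweep the heap while measuring segment
	 occupation, then compact the least occupied segments into the
	 segmentToFill (forwarding objects, no pointer updates)."
	self freePastSegmentsAndSetSegmentToFill.
	self globalSweepAndComputeSegmentOccupation.
	self selectiveCompaction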
>>
>>
>> On Thu, Apr 26, 2018 at 11:12 AM, <commits at source.squeak.org> wrote:
>>
>>>
>>> ClementBera uploaded a new version of VMMaker to project VM Maker:
>>> http://source.squeak.org/VMMaker/VMMaker.oscog-cb.2373.mcz
>>>
>>> ==================== Summary ====================
>>>
>>> Name: VMMaker.oscog-cb.2373
>>> Author: cb
>>> Time: 26 April 2018, 11:11:47.778949 am
>>> UUID: 9b389323-2181-4503-a361-d66ad87fa2de
>>> Ancestors: VMMaker.oscog-cb.2372
>>>
>>> Remove the APIs I added to iterate over free chunks (there was an
>>> existing API)
>>>
>>> Added assertValidFreeObject: to avoid stepping into isValidFreeObject:
>>> all the time to find out what's wrong. Obviously this new method cannot
>>> be used in the C code or we would end up with code in the assertion-free
>>> VM, leading to the following pattern:
>>> self "Sorry stepping over isValidFreeObject all the time was killing me"
>>>                         cCode: [self assert: (self isValidFreeObject:
>>> child)]
>>>                         inSmalltalk: [self assertValidFreeObject: child].
>>>
>>> Since I now use forwarders in fullGC, adapted heap space integrity check.
>>>
>>> I was a little bit too aggressive with an assertion in detachFreeObject:
>>> in the last commit; reverted that.
>>>
>>> And SpurSelectiveCompactor is now working as an alternative compactor to
>>> the Planning and Pig compactors and to Sweeper! So exciting. It still
>>> needs some tuning for production use (mostly, snapshots consume a lot of
>>> memory). SpurSelectiveCompactor's compaction time is crazy low (almost
>>> as fast as a sweep algorithm).
>>>
>>> I may write yet another compactor since I need to compare
>>> SelectiveCompactor with a Garbage First multi-remembered-table approach
>>> for research purposes...
>>>
>>> =============== Diff against VMMaker.oscog-cb.2372 ===============
>>>
>>> Item was removed:
>>> - ----- Method: SpurMemoryManager>>allOldSpaceFreeChunksDo: (in
>>> category 'object enumeration') -----
>>> - allOldSpaceFreeChunksDo: aBlock
>>> -       <inline: true>
>>> -       <doNotGenerate> "Could be generated, but used for debug only"
>>> -       self allOldSpaceFreeChunksFrom: self firstObject do: aBlock!
>>>
>>> Item was removed:
>>> - ----- Method: SpurMemoryManager>>allOldSpaceFreeChunksFrom:do: (in
>>> category 'object enumeration') -----
>>> - allOldSpaceFreeChunksFrom: initialObject do: aBlock
>>> -       <inline: true>
>>> -       <doNotGenerate> "Could be generated, but used for debug only"
>>> -       self allOldSpaceEntitiesFrom: initialObject
>>> -               do: [:objOop|
>>> -                        (self isFreeObject: objOop) ifTrue:
>>> -                               [aBlock value: objOop]]!
>>>
>>> Item was changed:
>>>   ----- Method: SpurMemoryManager>>allocateOld
>>> SpaceChunkOfExactlyBytes:suchThat: (in category 'free space') -----
>>>   allocateOldSpaceChunkOfExactlyBytes: chunkBytes suchThat:
>>> acceptanceBlock
>>>         "Answer a chunk of oldSpace from the free lists that satisfies
>>> acceptanceBlock,
>>>          if one of this size is available, otherwise answer nil.  N.B.
>>> the chunk is simply a
>>>          pointer, it has no valid header.  The caller *must* fill in the
>>> header correctly."
>>>         <var: #chunkBytes type: #usqInt>
>>>         | index node next prev child childBytes |
>>>         <inline: true> "must inline for acceptanceBlock"
>>>         "for debugging:" "totalFreeOldSpace := self totalFreeListBytes"
>>>
>>>         index := chunkBytes / self allocationUnit.
>>>         index < self numFreeLists ifTrue:
>>>                 [(freeListsMask anyMask: 1 << index) ifTrue:
>>>                         [(node := freeLists at: index) = 0
>>>                                 ifTrue: [freeListsMask := freeListsMask
>>> - (1 << index)]
>>>                                 ifFalse:
>>>                                         [prev := 0.
>>>                                          [node ~= 0] whileTrue:
>>>                                                 [self assert: node =
>>> (self startOfObject: node).
>>>                                                  self assert: (self
>>> isValidFreeObject: node).
>>>                                                  next := self
>>> fetchPointer: self freeChunkNextIndex ofFreeChunk: node.
>>>                                                  (acceptanceBlock value:
>>> node) ifTrue:
>>>                                                         [prev = 0
>>>                                                                 ifTrue:
>>> [freeLists at: index put: next]
>>>                                                                 ifFalse:
>>> [self storePointer: self freeChunkNextIndex ofFreeChunk: prev withValue:
>>> next].
>>>
>>>  totalFreeOldSpace := totalFreeOldSpace - chunkBytes.
>>>                                                          ^node].
>>>                                                  prev := node.
>>>                                                  node := next]]].
>>>                  ^nil].
>>>
>>>         "Large chunk.  Search the large chunk list.
>>>          Large chunk list organized as a tree, each node of which is a
>>> list of
>>>          chunks of the same size. Beneath the node are smaller and larger
>>>          blocks.  When the search ends parent should hold the first
>>> chunk of
>>>          the same size as chunkBytes, or 0 if none."
>>>         node := 0.
>>>         child := freeLists at: 0.
>>>         [child ~= 0] whileTrue:
>>> +               [self "Sorry stepping over isValidFreeObject all the
>>> time was killing me"
>>> +                       cCode: [self assert: (self isValidFreeObject:
>>> child)]
>>> +                       inSmalltalk: [self assertValidFreeObject: child].
>>> -               [self assert: (self isValidFreeObject: child).
>>>                  childBytes := self bytesInObject: child.
>>>                  childBytes = chunkBytes
>>>                         ifTrue: "size match; try to remove from list at
>>> node first."
>>>                                 [node := child.
>>>                                  [prev := node.
>>>                                   node := self fetchPointer: self
>>> freeChunkNextIndex ofFreeChunk: node.
>>>                                   node ~= 0] whileTrue:
>>>                                         [(acceptanceBlock value: node)
>>> ifTrue:
>>>                                                 [self assert: (self
>>> isValidFreeObject: node).
>>>                                                  self storePointer: self
>>> freeChunkNextIndex
>>>                                                         ofFreeChunk: prev
>>>                                                         withValue: (self
>>> fetchPointer: self freeChunkNextIndex ofFreeChunk: node).
>>>                                                  totalFreeOldSpace :=
>>> totalFreeOldSpace - chunkBytes.
>>>                                                  ^self startOfObject:
>>> node]].
>>>                                  (acceptanceBlock value: child) ifFalse:
>>>                                         [^nil]. "node was right size but
>>> unaceptable."
>>>                                  next := self fetchPointer: self
>>> freeChunkNextIndex ofFreeChunk: child.
>>>                                  next = 0
>>>                                         ifTrue: "no list; remove the
>>> interior node"
>>>                                                 [self
>>> unlinkSolitaryFreeTreeNode: child]
>>>                                         ifFalse: "list; replace node
>>> with it"
>>>                                                 [self inFreeTreeReplace:
>>> child with: next].
>>>                                  totalFreeOldSpace := totalFreeOldSpace
>>> - chunkBytes.
>>>                                  ^self startOfObject: child]
>>>                         ifFalse: "no size match; walk down the tree"
>>>                                 [child := self fetchPointer: (childBytes
>>> < chunkBytes
>>>
>>>                         ifTrue: [self freeChunkLargerIndex]
>>>
>>>                         ifFalse: [self freeChunkSmallerIndex])
>>>                                                         ofFreeChunk:
>>> child]].
>>>         ^nil!
>>>
>>> Item was added:
>>> + ----- Method: SpurMemoryManager>>assertValidFreeObject: (in category
>>> 'free space') -----
>>> + assertValidFreeObject: objOop
>>> +       <doNotGenerate> "If you want to generate this you want to use
>>> 'self assert: (self isValidFreeObject: objOop)' instead not to generate
>>> code in assertion-free VMs"
>>> +       | chunk |
>>> +       "duplicated assertions from isValidFreeObject: because I need to
>>> know what is wrong not only that it is not valid (I got bored of stepping
>>> inside isValidFreeObject:...)"
>>> +       self assert: (self oop: (self addressAfter: objOop)
>>> isLessThanOrEqualTo: endOfMemory).
>>> +       chunk := self fetchPointer: self freeChunkNextIndex ofFreeChunk:
>>> objOop.
>>> +       self assert: (chunk = 0 or: [self isFreeOop: chunk]).
>>> +       (self isLargeFreeObject: objOop) ifTrue: [
>>> +               "Tree assertions"
>>> +               chunk := self fetchPointer: self freeChunkParentIndex
>>> ofFreeChunk: objOop.
>>> +               self assert: (chunk = 0 or: [(self isFreeOop: chunk)
>>> and: [self isLargeFreeObject: chunk]]).
>>> +               chunk := self fetchPointer: self freeChunkSmallerIndex
>>> ofFreeChunk: objOop.
>>> +               self assert: (chunk = 0 or: [(self isFreeOop: chunk)
>>> and: [self isLargeFreeObject: chunk]]).
>>> +               chunk := self fetchPointer: self freeChunkLargerIndex
>>> ofFreeChunk: objOop.
>>> +               self assert: (chunk = 0 or: [(self isFreeOop: chunk)
>>> and: [self isLargeFreeObject: chunk]]). ]!
>>>
>>> Item was changed:
>>>   ----- Method: SpurMemoryManager>>checkHeapFreeSpaceIntegrity (in
>>> category 'debug support') -----
>>>   checkHeapFreeSpaceIntegrity
>>>         "Perform an integrity/leak check using the heapMap.  Assume
>>> clearLeakMapAndMapAccessibleFreeSpace
>>>          has set a bit at each free chunk's header.  Scan all objects in
>>> the heap checking that no pointer points
>>>          to a free chunk and that all free chunks that refer to others
>>> refer to marked chunks.  Answer if all checks pass."
>>>         | ok total |
>>>         <inline: false>
>>>         <var: 'total' type: #usqInt>
>>>         ok := true.
>>>         total := 0.
>>>         0 to: self numFreeLists - 1 do:
>>>                 [:i|
>>>                 (freeLists at: i) ~= 0 ifTrue:
>>>                         [(heapMap heapMapAtWord: (self pointerForOop:
>>> (freeLists at: i))) = 0 ifTrue:
>>>                                 [coInterpreter print: 'leak in free list
>>> '; printNum: i; print: ' to non-free '; printHex: (freeLists at: i); cr.
>>>                                  self eek.
>>>                                  ok := false]]].
>>>
>>>         "Excuse the duplication but performance is at a premium and we
>>> avoid
>>>          some tests by splitting the newSpace and oldSpace enumerations."
>>>         self allNewSpaceEntitiesDo:
>>>                 [:obj| | fieldOop |
>>>                  (self isFreeObject: obj)
>>>                         ifTrue:
>>>                                 [coInterpreter print: 'young object ';
>>> printHex: obj; print: ' is free'; cr.
>>>                                  self eek.
>>>                                  ok := false]
>>>                         ifFalse:
>>>                                 [0 to: (self numPointerSlotsOf: obj) - 1
>>> do:
>>>                                         [:fi|
>>>                                          fieldOop := self fetchPointer:
>>> fi ofObject: obj.
>>>                                          (self isNonImmediate: fieldOop)
>>> ifTrue:
>>>                                                 [(heapMap heapMapAtWord:
>>> (self pointerForOop: fieldOop)) ~= 0 ifTrue:
>>>                                                         [coInterpreter
>>> print: 'object leak in '; printHex: obj; print: ' @ '; printNum: fi; print:
>>> ' = '; printHex: fieldOop; print: ' is free'; cr.
>>>                                                          self eek.
>>>                                                          ok :=
>>> false]]]]].
>>>         self allOldSpaceEntitiesDo:
>>>                 [:obj| | fieldOop |
>>>                 (self isFreeObject: obj)
>>>                         ifTrue:
>>>                                 [(heapMap heapMapAtWord: (self
>>> pointerForOop: obj)) = 0 ifTrue:
>>>                                         [coInterpreter print: 'leak in
>>> free chunk '; printHex: obj; print: ' is unmapped?!! '; cr.
>>>                                          self eek.
>>>                                          ok := false].
>>>                                  fieldOop := self fetchPointer: self
>>> freeChunkNextIndex ofFreeChunk: obj.
>>>                                  (fieldOop ~= 0
>>>                                  and: [(heapMap heapMapAtWord: (self
>>> pointerForOop: fieldOop)) = 0]) ifTrue:
>>>                                         [coInterpreter print: 'leak in
>>> free chunk '; printHex: obj; print: ' @ 0 = '; printHex: fieldOop; print: '
>>> is unmapped'; cr.
>>>                                          self eek.
>>>                                          ok := false].
>>>                                 (self isLargeFreeObject: obj) ifTrue:
>>>                                         [self freeChunkParentIndex to:
>>> self freeChunkLargerIndex do:
>>>                                                 [:fi|
>>>                                                  fieldOop := self
>>> fetchPointer: fi ofFreeChunk: obj.
>>>                                                  (fieldOop ~= 0
>>>                                                  and: [(heapMap
>>> heapMapAtWord: (self pointerForOop: fieldOop)) = 0]) ifTrue:
>>>                                                         [coInterpreter
>>> print: 'leak in free chunk '; printHex: obj; print: ' @ '; printNum: fi;
>>> print: ' = '; printHex: fieldOop; print: ' is unmapped'; cr.
>>>                                                          self eek.
>>>                                                          ok := false]]].
>>>                                 total := total + (self bytesInObject:
>>> obj)]
>>>                         ifFalse:
>>>                                 [0 to: (self numPointerSlotsOf: obj) - 1
>>> do:
>>>                                         [:fi|
>>> +                                        (self isForwarded: obj)
>>> +                                               ifTrue:
>>> +                                                       [self assert: fi
>>> = 0. "I'm now trying to use forwarders in GC algorithms..."
>>> +                                                        fieldOop :=
>>> self fetchPointer: fi ofMaybeForwardedObject: obj]
>>> +                                               ifFalse: "We keep
>>> #fetchPointer:ofObject: API here for assertions"
>>> +                                                       [fieldOop :=
>>> self fetchPointer: fi ofObject: obj].
>>> -                                        fieldOop := self fetchPointer:
>>> fi ofObject: obj.
>>>                                          (self isNonImmediate: fieldOop)
>>> ifTrue:
>>>                                                 [(heapMap heapMapAtWord:
>>> (self pointerForOop: fieldOop)) ~= 0 ifTrue:
>>>                                                         [coInterpreter
>>> print: 'object leak in '; printHex: obj; print: ' @ '; printNum: fi; print:
>>> ' = '; printHex: fieldOop; print: ' is free'; cr.
>>>                                                          self eek.
>>>                                                          ok :=
>>> false]]]]].
>>>         total ~= totalFreeOldSpace ifTrue:
>>>                 [coInterpreter print: 'incorrect totalFreeOldSpace;
>>> expected '; printNum: totalFreeOldSpace; print: ' found '; printNum: total;
>>> cr.
>>>                  self eek.
>>>                  ok := false].
>>>         ^ok!
>>>
>>> Item was changed:
>>>   ----- Method: SpurMemoryManager>>detachFreeObject: (in category 'free
>>> space') -----
>>>   detachFreeObject: freeChunk
>>>         "This is a rare operation, so its efficiency isn't critical.
>>>          Having a valid prev link for tree nodes would help."
>>>         <inline: false>
>>>         | chunkBytes result |
>>>         chunkBytes := self bytesInObject: freeChunk.
>>>         result := self allocateOldSpaceChunkOfExactlyBytes: chunkBytes
>>> suchThat: [:f| f = freeChunk].
>>>         self assert: result = (self startOfObject: freeChunk).
>>> -       "Following is assertion only. Typical problem is that the free
>>> structures (tree/list) keep references to detached object somehow"
>>> -       self cCode: '' inSmalltalk:
>>> -               [self allOldSpaceFreeChunksDo:
>>> -                       [ :f | self assert: (self isValidFreeObject:
>>> f)]].
>>>         !
>>>
>>> Item was changed:
>>>   ----- Method: SpurSelectiveCompactor>>compactSegmentsToCompact (in
>>> category 'compaction') -----
>>>   compactSegmentsToCompact
>>>         "Forwards all objects in segments to compact and removes their
>>> freechunks"
>>>         | freeStart |
>>>         freeStart := segmentToFill segStart.
>>> +
>>> +        "Removes initial free chunk in segment to fill... (Segment is
>>> entirely free)"
>>> +       manager detachFreeObject: (manager objectStartingAt: freeStart).
>>> +
>>> +        "Compact each segment to compact..."
>>>         0 to: manager numSegments - 1 do:
>>>                 [:i| | segInfo |
>>>                  segInfo := self addressOf: (manager segmentManager
>>> segments at: i).
>>>                 (self isSegmentBeingCompacted: segInfo)
>>>                         ifTrue: [freeStart := self compactSegment:
>>> segInfo freeStart: freeStart ]].
>>>
>>>          "Final free chunk in segment to fill..."
>>>          manager
>>>                 addFreeChunkWithBytes: segmentToFill segSize - manager
>>> bridgeSize + segmentToFill segStart - freeStart
>>>                 at: freeStart.
>>>
>>> +        "Follow stack zone and caches..."
>>>         self postForwardingAction
>>>         !
>>>
>>> Item was changed:
>>>   ----- Method: SpurSelectiveCompactor>>findAndSetSegmentToFill (in
>>> category 'freeing') -----
>>>   findAndSetSegmentToFill
>>>         0 to: manager numSegments - 1 do:
>>>                 [:i| | segInfo firstEntity |
>>>                  segInfo := self addressOf: (manager segmentManager
>>> segments at: i).
>>>                  firstEntity := manager objectStartingAt: segInfo
>>> segStart.
>>>                  ((manager isFreeObject: firstEntity) and: [(manager
>>> objectAfter: firstEntity limit: manager endOfMemory) = (manager
>>> segmentManager bridgeFor: segInfo)])
>>> +                       ifTrue: [segmentToFill := segInfo. ^0]].
>>> -                       ifTrue: [segmentToFill := segInfo. manager
>>> detachFreeObject: firstEntity. ^0]].
>>>         !
>>>
>>> Item was changed:
>>>   ----- Method: SpurSelectiveCompactor>>freePastSegmentsAndSetSegmentToFill
>>> (in category 'freeing') -----
>>>   freePastSegmentsAndSetSegmentToFill
>>> +       "The first segment being claimed met becomes the segmentToFill.
>>> The others are just freed."
>>> -       "The first segment being claimed met becomes the segmentToFill.
>>> The others are just freed"
>>>         segmentToFill := nil.
>>>         0 to: manager numSegments - 1 do:
>>>                 [:i| | segInfo |
>>>                  segInfo := self addressOf: (manager segmentManager
>>> segments at: i).
>>>                  (self isSegmentBeingCompacted: segInfo)
>>>                         ifTrue:
>>> +                               [self freeSegment: segInfo.
>>> +                                segmentToFill ifNil: [segmentToFill :=
>>> segInfo]]]!
>>> -                               [segmentToFill
>>> -                                       ifNil: [segmentToFill := segInfo]
>>> -                                       ifNotNil: [self freeSegment:
>>> segInfo]]]!
>>>
>>> Item was changed:
>>>   ----- Method: SpurSelectiveCompactor>>selectiveCompaction (in
>>> category 'compaction') -----
>>>   selectiveCompaction
>>>         "Figures out which segments to compact and compact them into
>>> segmentToFill"
>>>         | atLeastOneSegmentToCompact |
>>>         self assertNoSegmentBeingCompacted.
>>>         atLeastOneSegmentToCompact := self computeSegmentsToCompact.
>>>         "If no compaction we don't pay forwarding cost (stack scan,
>>> cache scan, etc.)
>>>          and we don't allocate segmentToFill if none available."
>>>         atLeastOneSegmentToCompact
>>>                 ifTrue:
>>>                         [self assert: segmentToFill ~~ nil.
>>> +                        self compactSegmentsToCompact].
>>> -                        self compactSegmentsToCompact]
>>> -               ifFalse:
>>> -                       [segmentToFill ifNotNil: [self freeSegment:
>>> segmentToFill]].
>>>         manager checkFreeSpace: GCModeFull.!
>>>
>>> Item was added:
>>> + ----- Method: SpurSelectiveCompactorSimulator>>selectiveCompaction
>>> (in category 'compaction') -----
>>> + selectiveCompaction
>>> +       super selectiveCompaction.
>>> +       manager allFreeObjectsDo: [:objOop | manager
>>> assertValidFreeObject: objOop]!
>>>
>>>
>>
>>
>> --
>> Clément Béra
>> https://clementbera.github.io/
>> https://clementbera.wordpress.com/
>>
>
>
>
> --
> Clément Béra
> https://clementbera.github.io/
> https://clementbera.wordpress.com/
>
>
>


-- 
Clément Béra
https://clementbera.github.io/
https://clementbera.wordpress.com/