[Vm-dev] Is it possible to suspend the garbage collector?
Max Leske
maxleske at gmail.com
Mon Jan 11 11:21:47 UTC 2016
> On 10 Jan 2016, at 20:22, vm-dev-request at lists.squeakfoundation.org wrote:
>
> Hi Max,
>
> pre-Spur to avoid GC one has to a) grow memory by enough to do all the
> processing you're going to do and b) change the shrinkage parameter so the
> Vm won't shrink the heap back down before the processing is complete. To
> do b) I suggest you modify setGCParameters. VM parameter 24 sets the
> shrinkage threshold; see vmParameterAt:put:: "24 memory threshold above
> which to shrink object memory (read-write)". To grow memory: hmm, I had
> thought that there's a growMemoryBy: primitive in V3, but it appears there
> isn't. So simply allocate a ByteArray of the desired size and then GC to
> get rid of it. That should leave that much free space and then your load
> should proceed without needing to GC.
>
> Anyway, it's worth a try.
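
Eliot's two-step recipe could be sketched like this (an untested sketch, assuming the pre-Spur VM parameter numbering where parameter 24 is the shrink threshold; the 200 MB headroom figure is a made-up placeholder, not from the thread):

```smalltalk
| headroom oldThreshold |
headroom := 200 * 1024 * 1024. "hypothetical: enough free space for the whole load"

"b) raise the shrink threshold first, so the VM won't give the grown space back"
oldThreshold := Smalltalk vmParameterAt: 24.
Smalltalk vmParameterAt: 24 put: oldThreshold + headroom.

"a) grow the heap: allocate a throwaway ByteArray of the needed size,
then GC so the space is freed but the heap stays grown"
ByteArray new: headroom.
Smalltalk garbageCollect
```

After this, the free space should be large enough that the subsequent load proceeds without triggering a full GC.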
Thanks Eliot.
Setting the memory threshold helped. I’m still seeing one full GC, which I’m trying to avoid. I’ve experimented with #setGCBiasToGrow: and #setGCBiasToGrowGCLimit:, but I don’t fully understand what they do.
#setGCBiasToGrow: seems to turn memory growth on and off. But if this is turned off, how can the VM then allocate more memory?
#setGCBiasToGrowGCLimit: seems to control if the growth should trigger a full GC, which seems pretty much like what I need.
Unfortunately, while setting these options seems to have an influence, I can’t quite see the pattern, and that one full GC is still there. Maybe you could explain how these options work exactly?
One other question: the MessageTally output seems to be missing about 50% of the running time. Summing full GC + incremental GC + the time spent in the tree leaves about 500ms unaccounted for. Do you have any idea where that half second goes missing?
Here’s the code I use experimentally:
MessageTally spyOn: [
	| size shrinkThreshold |
	size := (self settings segmentsDirectory fileNamed: 'snapshot.bin') size.
	shrinkThreshold := Smalltalk vmParameterAt: 24.
	Smalltalk vmParameterAt: 24 put: shrinkThreshold + (size * 2). "8MB + twice the file size"
	Smalltalk setGCBiasToGrowGCLimit: shrinkThreshold + (size * 2).
	Smalltalk setGCBiasToGrow: 1. "enable growth??"
	ByteArray new: size * 2.
	"The incremental GC should take care of collecting the ByteArray,
	so I'm not doing anything manually here."
	<load snapshot> ].
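
One refinement worth considering (my own addition, not something suggested in the thread): restore the shrink threshold in an ensure: block, so the heap can shrink back to normal once loading is done even if the load fails:

```smalltalk
| oldThreshold extra |
oldThreshold := Smalltalk vmParameterAt: 24.
extra := 180 * 1024 * 1024. "hypothetical: roughly twice the ~90MB segment"
[Smalltalk vmParameterAt: 24 put: oldThreshold + extra.
 "... load the snapshot here ..."]
	ensure: [Smalltalk vmParameterAt: 24 put: oldThreshold]
```

Without the ensure:, an exception during the load would leave the image permanently holding the grown heap.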
Cheers,
Max
Output from current MessageTally:
- 1123 tallies, 1125 msec.
**Tree**
--------------------------------
Process: (40s) 123994112: nil
--------------------------------
12.7% {143ms} CBImageSegment class(NSImageSegment class)>>basicSnapshot:from:do:
12.6% {141ms} CBImageSegment class(NSImageSegment class)>>installSegmentFrom:andDo:
12.6% {141ms} CBImageSegment class(NSImageSegment class)>>readSegmentFrom:
12.6% {141ms} NSSegmentStream>>readObject
12.6% {141ms} SmartRefStream>>nextAndClose
12.6% {141ms} SmartRefStream>>next
12.3% {138ms} SmartRefStream(ReferenceStream)>>next
12.3% {138ms} SmartRefStream(DataStream)>>next
10.6% {119ms} CBImageSegment(ImageSegment)>>comeFullyUpOnReload:
|10.6% {119ms} CBImageSegment(NSImageSegment)>>restoreEndiannessAndRehash
| 5.5% {62ms} Dictionary>>rehash
| |2.8% {31ms} Dictionary>>associationsDo:
| | |2.2% {25ms} Array(SequenceableCollection)>>do:
| |1.7% {19ms} Dictionary>>noCheckAdd:
| | 1.7% {19ms} Dictionary(HashedCollection)>>findElementOrNil:
| | 1.2% {13ms} Dictionary>>scanFor:
| 4.5% {51ms} primitives
1.2% {13ms} SmartRefStream(DataStream)>>readArray
1.2% {13ms} SmartRefStream>>next
1.2% {13ms} SmartRefStream(ReferenceStream)>>next
1.2% {13ms} SmartRefStream(DataStream)>>next
**Leaves**
**Memory**
old +94,031,228 bytes
young -9,207,660 bytes
used +84,823,568 bytes
free +90,024,824 bytes
**GCs**
full 1 totalling 85ms (8.0% uptime), avg 85.0ms
incr 15 totalling 271ms (24.0% uptime), avg 18.0ms
tenures 10 (avg 1 GCs/tenure)
root table 0 overflows
>
> On Sat, Jan 9, 2016 at 3:03 AM, Max Leske wrote:
>
>>
>> Hi,
>>
>> I have a rather annoying problem. I’m running a time critical piece of
>> code that reads a big (~90MB) image segment from a file. I’ve optimized
>> loading as far as possible and now GC takes far longer than the loading
>> itself (see the MessageTally output below).
>> I’m wondering if there’s any possibility to defer garbage collection
>> during the load.
>>
>> For completeness, here’s the use case: the process is socket activated,
>> which means that the first request coming in will start the process. When
>> the image starts it will load the segment to restore the last state of the
>> application and, once that’s done, serve the request. The critical time
>> includes vm startup, image startup, starting the server in the image and
>> loading the snapshot. With a big snapshot the loading time of the snapshot
>> is the most significant contributor.
>>
>> Maybe I could preallocate the needed memory to prevent the garbage
>> collector from running?
>>
>> I’d appreciate any ideas you have.
>>
>>
>> Cheers,
>> Max
>>
>>
>> PS: This needs to run on a Squeak 4.0.3 VM (no JIT)
>>
>>
>>
>>
>> Output from MessageTally:
>>
>> - 1624 tallies, 1624 msec.
>>
>> **Tree**
>> --------------------------------
>> Process: (40s) 592969728: nil
>> --------------------------------
>> 4.4% {72ms} CBImageSegment class(NSImageSegment class)>>basicSnapshot:from:do:
>> 4.4% {72ms} CBImageSegment class(NSImageSegment class)>>installSegmentFrom:andDo:
>> 4.4% {72ms} CBImageSegment class(NSImageSegment class)>>readSegmentFrom:
>> 4.4% {72ms} NSSegmentStream>>readObject
>> 4.4% {72ms} SmartRefStream>>nextAndClose
>> 4.4% {72ms} SmartRefStream>>next
>> 4.3% {70ms} SmartRefStream(ReferenceStream)>>next
>> 4.3% {70ms} SmartRefStream(DataStream)>>next
>> 3.2% {52ms} NSImageSegment(ImageSegment)>>comeFullyUpOnReload:
>> 3.2% {52ms} NSImageSegment>>restoreEndiannessAndRehash
>> **Leaves**
>> 3.2% {52ms} NSImageSegment>>restoreEndiannessAndRehash
>>
>> **Memory**
>> old +92,704,656 bytes
>> young -8,008,252 bytes
>> used +84,696,404 bytes
>> free +1,287,768 bytes
>>
>> **GCs**
>> full 2 totalling 954ms (59.0% uptime), avg 477.0ms
>> incr 5 totalling 165ms (10.0% uptime), avg 33.0ms
>> tenures 1 (avg 5 GCs/tenure)
>> root table 0 overflows
>