[Vm-dev] Maximum value of -stackpages VM parameter?

Phil B pbpublist at gmail.com
Thu Jun 15 20:10:35 UTC 2017


Thanks for the tip, I'll give that a shot.  Also, is it possible to check
the amount of stack usage from the image? (I.e., just to get a rough,
reasonably fast idea of where things stand.)


On Jun 12, 2017 7:24 PM, "Eliot Miranda" <eliot.miranda at gmail.com> wrote:

Hi Phil,

On Jun 12, 2017, at 2:25 PM, Phil B <pbpublist at gmail.com> wrote:


Thanks for the info, that's good to know.  I probably should have been
explicit that I am only bumping it up this high to troubleshoot a rather
annoying startup bug in my code.  When it crashes as a result of the stack
overflow, the trace is pretty useless (IIRC, about half a page of INVALID
REFERENCE lines), so I'm mostly flying blind.  Bumping up the limit lets me
get a better view of where things are going wrong, and I plan to drop back
once I've resolved it.

A better way to debug this would be to set a breakpoint in the scavenger
and print the stack on every GC.  Stack overflow in a language like
Smalltalk, where activations are objects, means that the heap grows as the
stack grows.  (The stack pages in the stack zone can be seen as an
allocation cache for the most recent activations, reducing pressure on the
GC.)  So if you run under gdb (lldb on the Mac) and print the stack in each
GC, you should at least be able to see where the infinite recursion is
coming from before the system runs out of memory:

(gdb) b doScavenge
Breakpoint 1 at NNNN
(gdb) commands 1
> call printStackCallStackOf(framePointer)
> continue
> end
(gdb) run myimage.image

You can use
(gdb) call pushOutputFile("stack.log")
to get the VM to send subsequent output to a file, and
(gdb) call popOutputFile()
to close the log.
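The steps above can be combined into a single gdb command file so that
every scavenge appends the current call stack automatically.  This is only
a sketch: it reuses the function names from the transcript above
(doScavenge, printStackCallStackOf), and the VM binary name "squeak" is an
assumption.

```gdb
# trace-scavenges.gdb -- hypothetical gdb command file; load with:
#   gdb -x trace-scavenges.gdb ./squeak
break doScavenge
commands
  # Print the Smalltalk call stack at every scavenge, then keep going.
  call printStackCallStackOf(framePointer)
  continue
end
run myimage.image
```

Once the process is running you can still interrupt it (Ctrl-C) and issue
the pushOutputFile("stack.log") / popOutputFile() calls by hand to redirect
the trace to a file.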


On Jun 12, 2017 4:43 PM, "Eliot Miranda" <eliot.miranda at gmail.com> wrote:

Hi Phil,

> On Jun 12, 2017, at 12:50 PM, Phil B <pbpublist at gmail.com> wrote:
>
> In trying to troubleshoot an issue, I needed to bump up the stackpages
> parameter.  On 64-bit Linux, a value of 600 worked but 1000 segfaulted, so
> I was just wondering what the limit(s) are for it?

There are no explicit limits.  The segfault you're seeing is a result of
the stack pages being allocated on the C stack.  When the number is too
high, the C stack overflows and boom.

A word to the wise: with too high a value, scavenging performance falls
(stack pages are implicitly roots into new space), and "become" performance
falls (all activations in the stack zone are scanned after each become to
avoid a read barrier on inst var fetch).

The default value was 192, a value chosen to exceed Qwaq server process
usage, but both at Cadence and in Spur profiling we found that it was not a
good value and pulled it back to 64 (IIRC).

I'm curious as to why you are exploring such high values.
