Context Stack Speedup

Bryce Kampjes bryce at kampjes.demon.co.uk
Fri Apr 18 12:06:42 UTC 2003



I like your design a lot.

It really reminds me of optimized contexts and dynamic deoptimisation
needed for adaptive compilation.

Anthony Hannan writes:
 > First let me say that this design adds a new optional kernel class to
 > the image making images that use it incompatible with older VMs, but the
 > new VM will still be able to run older images and load old projects
 > (image segments).  So it is basically a backwards-compatible image
 > format change.  The reason I'm sacrificing forward compatibility is to
 > make the design simpler and more object-oriented from my point of view
 > of no distinction between the image and the VM.

How is this different to requiring a plugin for the module to work?
OK, the changes are inside the VM but both require changes outside of
the image. Requiring a plugin is fairly standard for low level
additions to Squeak. Code that requires a new plugin is forward
compatible only.

 > New Context Class
 > 
 > A ContextStack is a sequence of method contexts embedded in its
 > indexable fields (its stack).  ContextStacks, MethodContexts, and
 > BlockContexts are chained together forming a full execution stack.  When
 > the top context within a ContextStack is accessed it is popped out into
 > its own MethodContext and kept in the chain at its original position.
 > 
 > For example, suppose context A is a suspended context with B, C and D as
 > its senders in that order, and B, C and D all reside within the same
 > ContextStack Z.  Sending #sender to A will cause B to be popped out into
 > its own context and Z would only have C and D remaining in it.  If we
 > continue sending #sender until we reach the end, all will be in their
 > own context and no ContextStack will remain.
 > 
 > So any time we need an object reference to a context, it is separated out
 > into its own context.  So, the debugger, etc. will never see
 > ContextStacks since their frames are converted to contexts as they are
 > accessed (through #sender).
 > 
 > The VM will maintain contexts inside a single ContextStack, unless it
 > executes pushThisContext, in which case it will separate out the
 > current context into its own context and start a new ContextStack
 > after it.  Block contexts, and home contexts of blocks that return to
 > them (^), also
 > execute pushThisContext so they will have their own contexts as well.
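The lazy frame-to-context conversion described above can be modeled with a
small sketch. This is illustrative Python, not Squeak VM code; the class
names mirror the proposal but everything here is a stand-in:

```python
# Illustrative model of the lazy frame-to-context conversion: frames
# live packed inside a ContextStack until #sender is asked for, at which
# point the top frame is popped out into its own MethodContext.

class MethodContext:
    """A fully reified context with an explicit sender link."""
    def __init__(self, label, sender=None):
        self.label = label
        self.sender = sender

class ContextStack:
    """Several frames packed into one object, top frame first."""
    def __init__(self, frames, sender=None):
        self.frames = list(frames)          # e.g. ['B', 'C', 'D']
        self.sender = sender

def sender_of(ctx):
    """Answer ctx's sender, popping one frame out of a ContextStack
    into its own MethodContext when necessary (the #sender behavior)."""
    s = ctx.sender
    if isinstance(s, ContextStack):
        top = s.frames.pop(0)               # pop the top frame out
        rest = s if s.frames else s.sender  # Z keeps the rest, or vanishes
        s = MethodContext(top, sender=rest)
        ctx.sender = s                      # keep it at its chain position
    return s

# The example above: A is suspended; B, C and D live inside ContextStack Z.
z = ContextStack(['B', 'C', 'D'])
a = MethodContext('A', sender=z)

chain, ctx = [], a
while ctx is not None:
    chain.append(ctx.label)
    ctx = sender_of(ctx)
print(chain)                                # ['A', 'B', 'C', 'D']
```

After the walk every frame is a MethodContext and no ContextStack remains,
matching the description.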

How does the VM know which contexts require their own contexts? Are you
adding a flag to MethodContext? For block creation, separate bytecodes
could work, but a send doesn't know which method will be invoked.
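For what it's worth, one mechanism consistent with that constraint would be
a per-method flag set by the compiler and consulted after method lookup
rather than at the send bytecode itself. A hypothetical sketch, where the
flag name and the whole mechanism are guesses, not part of the proposal:

```python
# Hypothetical: a compiler-set flag on the compiled method, checked only
# after method lookup (so the send bytecode needn't know the target),
# decides whether the new frame joins the running ContextStack or is
# reified into its own context.

class CompiledMethod:
    def __init__(self, selector, needs_own_context=False):
        self.selector = selector
        # Set by the compiler, e.g. for methods whose blocks do ^-returns.
        self.needs_own_context = needs_own_context

def activate(method):
    """Decide where the frame for a freshly looked-up method lives."""
    if method.needs_own_context:
        return 'own context'      # separate MethodContext, new ContextStack
    return 'context stack'        # frame embedded in the current ContextStack

print(activate(CompiledMethod('printString')))                  # context stack
print(activate(CompiledMethod('do:', needs_own_context=True)))  # own context
```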

I'm interested for a few reasons. One, the design looks very similar to
inlined contexts in a system with dynamic compilation, and the debugger
interface screams dynamic deoptimisation to me. Two, to keep things
simple for those of us messing around in the interpreter, it would be
nice to be able to disable ContextStacks. Three, the incrementalist in
me really likes the idea of being able to switch off the optimization
selectively if it's buggy, though that probably isn't necessary here.

How different does the ContextStack look to the interpreter? Or how
widespread are the changes? Are they all isolated in the send and
return bytecodes or do all the stack bytecodes need to be modified?

Is it possible for a ContextStack to ever end up in a debugger? Say a
context switch happens while execution is in a ContextStack, and a
debugger is then placed on the suspended process. Am I missing something?


What was slowing down block closures? Are there a few common cases that
could be easily optimized? Blocks that don't access their enclosing
contexts come to mind.

Bryce


