[Vm-dev] Re: [Pharo-dev] Copy-on-write for a multithreaded VM

Levente Uzonyi leves at elte.hu
Sat Jul 18 00:09:14 UTC 2015


Now that the VM has segmented memory (Spur), it should be a lot easier to 
implement Erlang-style multiprocessing[1]. Something like what HydraVM 
did, but with part of the image implemented as read-only segments shared 
among all processes.
That way each process can have its own segments and its own (super fast) 
GC. Communication between processes can be done via channels (like in 
Erlang and HydraVM).
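
To make that concrete, here is a minimal OS-level sketch in plain C (not 
HydraVM's or Spur's actual code; the segment contents and the 
pipe-as-channel are just placeholders): a parent maps a segment, makes it 
read-only, forks a child "image", and the two talk over a channel.

  #define _DEFAULT_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <sys/wait.h>

  int main(void)
  {
      /* Shared segment: writable while the parent fills it... */
      size_t size = 4096;
      char *segment = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      strcpy(segment, "shared classes and methods");
      mprotect(segment, size, PROT_READ);  /* ...read-only from here on */

      int channel[2];                      /* a pipe as a simple channel */
      pipe(channel);

      if (fork() == 0) {                   /* child "image" process */
          char msg[64];
          snprintf(msg, sizeof msg, "child read: %s", segment);
          write(channel[1], msg, strlen(msg) + 1);
          _exit(0);
      }

      char reply[64];
      read(channel[0], reply, sizeof reply);  /* receive over the channel */
      printf("%s\n", reply);
      wait(NULL);
      return 0;
  }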

Levente

[1] http://lists.squeakfoundation.org/pipermail/vm-dev/2010-January/003789.html

On Fri, 17 Jul 2015, Ben Coman wrote:

> I am curious what issues are holding us back from a multithreaded
> VM becoming mainstream, since I had a passing thought about a strategy
> for a partially multi-threaded VM.
>
> I see multi-threading has been prototyped before with RoarVM [1] &
> HydraVM [2], and the CogBlog project page [3] says "While multi-threading
> seems like an obvious and important direction to take the system in,
> making the VM multi-threaded per-se **does not provide any benefit
> before the Smalltalk image is made thread-safe** and that is
> probably more work than providing a multi-threaded VM.  Hence a
> potentially more profitable approach is to concentrate on federating
> multiple VMs running multiple images, communicating through the
> threaded FFI."
>
> So I agree it would be a big job to make the whole Image thread-safe,
> but I wonder whether a useful subset would be threads that are unable
> to disturb the system state.  This might cover a lot of the need for
> parallelism, for example:
> * Complex force layouts with Roassal
> * Web server worker threads
> * Screen rendering by #fullDraw: & #drawOn:
>
> A copy-on-write facility might provide this, such that any required
> system state change must be coded manually in the parent-thread, which
> processes a result object returned from the child-thread when it exits
> (a sketch of that hand-off follows this list).  This might be
> implemented by combining Spur's new features for:
> * immutability
> * lazy become forwarding
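>
> Something like this minimal C/pthreads sketch (hypothetical names, no
> relation to any existing VM code): the child works only on private
> data and hands a result object back for the parent to apply.
>
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <pthread.h>
>
>   typedef struct { int layoutVersion; double energy; } LayoutResult;
>
>   /* e.g. a force-layout pass over the child's private copy of a graph */
>   static void *childWorker(void *arg)
>   {
>       (void)arg;                       /* unused in this sketch */
>       LayoutResult *result = malloc(sizeof *result);
>       result->layoutVersion = 2;
>       result->energy = 0.125;
>       return result;     /* handed back, never written into shared state */
>   }
>
>   int main(void)
>   {
>       pthread_t child;
>       void *returned;
>
>       pthread_create(&child, NULL, childWorker, NULL);
>       pthread_join(child, &returned);    /* wait for the result object */
>
>       /* only the parent mutates system state, using the child's result */
>       LayoutResult *r = returned;
>       printf("apply layout v%d, energy %g\n", r->layoutVersion, r->energy);
>       free(r);
>       return 0;
>   }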
>
> Implementation might require an extra object header bit, WriteCopied,
> and a new class, CopyOnWriteThread, known to the VM.
>
> Whenever a child-COWT reads an object, if WriteCopied is not set, set
> both it and the Immutable bit.  Any thread writing to that object
> will trigger the existing immutability handler, which now
> additionally, when WriteCopied is set:
>  1. Creates a new WriteCopiedIndirection object holding
>         a. the old object
>         b. the newly written object
>         c. the thread that performed the write.
>  2. Sets the old object's forward-pointer to that indirection.
>
> Later the forward-following code observes the WriteCopied bit and, by
> matching the current thread against the one stored in 1c, unrolls the
> correct object (a combined sketch of the write and read paths follows
> below).  The child-COWT leaves the WriteCopied bit set so as to avoid
> immutability being set again.  Actually that precludes using real
> immutability in a COWT, so my description above is not ideal - but
> hopefully it is enough to show the intent.
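>
> To show that intent in code, here is a self-contained C model of the
> mechanism (hypothetical names and layout; real object headers, forward
> pointers and thread identity would live inside the VM): a write to a
> WriteCopied object builds a WriteCopiedIndirection holding 1a-1c and
> installs it as the forward pointer; reads unroll it only for the
> thread that performed the write.
>
>   #include <stdio.h>
>   #include <stdlib.h>
>
>   typedef struct Object Object;
>
>   typedef struct {
>       Object *oldObject;    /* 1a. the object as other threads see it */
>       Object *newObject;    /* 1b. the newly written copy             */
>       int     writerThread; /* 1c. the thread that did the write      */
>   } WriteCopiedIndirection;
>
>   struct Object {
>       int writeCopied;                 /* the extra header bit        */
>       int immutable;                   /* Spur's immutability bit     */
>       WriteCopiedIndirection *forward; /* lazy-become forward pointer */
>       int value;                       /* stand-in for object state   */
>   };
>
>   /* Child-COWT read: set both bits before first use. */
>   static void cowtMarkOnRead(Object *obj)
>   {
>       obj->writeCopied = 1;
>       obj->immutable = 1;
>   }
>
>   /* Immutability handler extended for WriteCopied objects (steps 1-2). */
>   static void writeBarrier(Object *obj, int newValue, int thread)
>   {
>       if (!obj->writeCopied) {
>           obj->value = newValue;           /* ordinary write, no COW  */
>       } else if (obj->forward == NULL) {
>           Object *copy = malloc(sizeof *copy);
>           *copy = *obj;
>           copy->value = newValue;
>           WriteCopiedIndirection *ind = malloc(sizeof *ind);
>           ind->oldObject = obj;
>           ind->newObject = copy;
>           ind->writerThread = thread;
>           obj->forward = ind;              /* step 2: forward to it   */
>       } else if (obj->forward->writerThread == thread) {
>           obj->forward->newObject->value = newValue; /* update copy   */
>       } /* writes by other threads after forwarding: out of scope here */
>   }
>
>   /* Forward-following code: unroll only for the recorded thread. */
>   static Object *resolve(Object *obj, int thread)
>   {
>       if (obj->forward && obj->forward->writerThread == thread)
>           return obj->forward->newObject;
>       return obj;
>   }
>
>   int main(void)
>   {
>       Object shared = { 0, 0, NULL, 42 };
>       int parent = 1, child = 2;
>
>       cowtMarkOnRead(&shared);           /* child COWT reads the object */
>       writeBarrier(&shared, 99, child);  /* its write gets redirected   */
>
>       printf("parent sees %d, child sees %d\n",
>              resolve(&shared, parent)->value,   /* still 42 */
>              resolve(&shared, child)->value);   /* 99       */
>       return 0;
>   }
>
> (As noted, a real design would still have to reconcile this with
> genuine immutability, since the sketch reuses the Immutable bit purely
> as a write-barrier trigger.)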
>
> Could this avoid the need to make the whole image thread-safe, and
> thus facilitate running Smalltalk worker threads across multiple CPUs?
>
>
> Actually, this might even be useful without running across multiple
> CPUs, since several intermittent Red-Screens-Of-Death were caused by
> using multiple green-threads to improve UI interactivity.  (For
> example, forking the update of Monticello lists to allow typing into
> search boxes.  Here #drawOn: had "aList size" returning different
> values halfway through the algorithm.  The place where execution
> forked to provide this useful feature was the best place for code
> readability and intuitive understanding, but it was a long path to
> trace through execution to discover the coupling between there and
> the rendering code.  Certainly the fix is less readable than the
> original.)
>
> [1] https://github.com/smarr/RoarVM
> [2] http://squeakvm.org/~sig/hydravm/devnotes.html
> [3] http://www.mirandabanda.org/cogblog/cog-projects/
>
> cheers -ben
>
>

