<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Eliot, you wrote this post on your phone? That's non-trivial. I'm
hoping you used a Bluetooth keyboard.<br>
<br>
You asked me to join the VM development effort. Thank you so much
for the welcome, I'd love to help where I can.<br>
<br>
Best,<br>
Robert<br>
<br>
<div class="moz-cite-prefix">On 10/22/2015 03:05 AM, stepharo wrote:<br>
</div>
<blockquote cite="mid:56288AB9.2070504@free.fr" type="cite">
+1 <br>
(I'm amazed that you can type all that on an iPhone). <br>
<br>
Eliot, I hope that the blog and the lecture Clément is designing
will help people get closer to the VM. <br>
I would love for Inria to hire a VM researcher so that the RMoD
group gains exposure and can push the VM forward. It did not work
out with Stefan Marr,<br>
who could have gotten a position but was homesick :). <br>
If you know people who have a good CV and want to live in France,
we can fight for a permanent research position. <br>
<br>
Stef<br>
<div class="moz-cite-prefix">Le 19/10/15 16:10, Eliot Miranda a
écrit :<br>
</div>
<blockquote
cite="mid:6FD285AF-D7D9-43F3-B662-122B29B2E8D6@gmail.com"
type="cite">
<div>Hi Robert,<br>
<br>
<span style="background-color: rgba(255, 255, 255, 0);">_,,,^..^,,,_
(phone)</span></div>
<div><br>
On Oct 19, 2015, at 5:03 AM, Robert Withers <<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:robert.w.withers@gmail.com">robert.w.withers@gmail.com</a>>
wrote:<br>
<br>
</div>
<blockquote type="cite">
<div><span>Hi Esteban,</span><br>
<span></span><br>
<span></span><br>
<span>On 10/19/2015 05:10 AM, Esteban Lorenzano wrote:</span><br>
<blockquote type="cite"><span></span><br>
</blockquote>
<blockquote type="cite"><span>Hi,</span><br>
</blockquote>
<blockquote type="cite"><span></span><br>
</blockquote>
<blockquote type="cite"><span>just to be clear.</span><br>
</blockquote>
<blockquote type="cite"><span>When we talk about MTVM, we
talk about a MT-FFI, *not* a MTVM in general.</span><br>
</blockquote>
<blockquote type="cite"><span>In general, a “common”
approach to MT cannot be applied in Pharo (or Smalltalk
in general), and getting a VM *and* an image working
properly is an effort that makes what I called “massive”
some mails above look like a small stone compared to a
mountain.</span><br>
</blockquote>
<span></span><br>
<span>Could you please help me by talking further about the
different models and scopes of what is meant by MT?</span><br>
<span></span><br>
<span>a) MT-FFI I believe gives the developer a way to call
and be invoked on callback, asynchronously. Is it so?</span><br>
</div>
</blockquote>
<div><br>
</div>
It's a bit more than that. It is the sharing of the VM between
different threads, with only one thread owning the VM at any
one time, ownership changing at call-out or at Smalltalk
process-switch time. This approach provides interleaved
concurrency, but not parallelism, in your Smalltalk code, and it
means the Smalltalk class library doesn't have to be made
thread-safe, which, as Esteban said, is a huge task.
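The ownership discipline described above can be sketched in a few lines. This is an illustrative sketch only, not Cog's actual implementation; the names (`vm_lock`, `ffi_callout`, `smalltalk_thread`) and the choice of Python are mine for the example. The point is that a blocking foreign call gives up VM ownership so another thread can run interleaved code in the meantime:

```python
import threading
import time

# One lock stands in for VM ownership: only the thread holding it
# may execute Smalltalk code or touch VM state.
vm_lock = threading.Lock()

results = []

def ffi_callout(foreign_fn, *args):
    """Release VM ownership around a (possibly blocking) foreign call,
    then reacquire it before returning to Smalltalk code."""
    vm_lock.release()              # give up the VM before going foreign
    try:
        return foreign_fn(*args)   # other threads may own the VM now
    finally:
        vm_lock.acquire()          # take the VM back before continuing

def smalltalk_thread(name):
    with vm_lock:                  # acquire VM ownership
        results.append((name, 'in-VM'))
        # A blocking call-out no longer freezes the whole VM:
        ffi_callout(time.sleep, 0.05)
        results.append((name, 'back-in-VM'))

threads = [threading.Thread(target=smalltalk_thread, args=(n,))
           for n in ('A', 'B')]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This is the same discipline as a global interpreter lock: concurrency comes only from interleaving at well-defined points (call-outs and process switches), so the class library never sees two threads mutating image state at once.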
<div><br>
</div>
<div>See
<div><a moz-do-not-send="true"
href="http://lists.gforge.inria.fr/pipermail/pharo-project/2011-January/038943.html">http://lists.gforge.inria.fr/pipermail/pharo-project/2011-January/038943.html</a></div>
<div><br>
</div>
<div>and google "eliot Miranda Simmons own thread" to find
more messages.</div>
<div><br>
</div>
<div><br>
<blockquote type="cite">
<div><span>b) General MTVM means other system services are
threaded, like I/O events and scheduling and
heartbeat.</span><br>
</div>
</blockquote>
<div><br>
</div>
No; at least not in my opinion. In the standard
single-threaded VM the heartbeat is ideally a thread (it can
be an interval timer, but that's problematic; system calls
get interrupted), and maybe an incremental global GC could
be in its own thread.</div>
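To make the interval-timer problem concrete: SIGALRM delivery causes blocking system calls to fail with EINTR, which every call site then has to handle, whereas a heartbeat thread only sets a flag that the interpreter polls at safe points. Below is a minimal sketch of the thread-based approach; the names (`check_flag`, `heartbeat`) are invented for illustration:

```python
import threading
import time

# Flag the interpreter tests at safe points; the heartbeat thread
# sets it periodically without ever touching VM state itself.
check_flag = threading.Event()
stop = threading.Event()

def heartbeat(period_s):
    while not stop.is_set():
        time.sleep(period_s)
        check_flag.set()           # ask the interpreter to check events

threading.Thread(target=heartbeat, args=(0.01,), daemon=True).start()

# Stand-in for the interpreter's main loop.
ticks = 0
deadline = time.monotonic() + 0.1
while time.monotonic() < deadline:
    if check_flag.is_set():        # the safe-point check
        check_flag.clear()
        ticks += 1                 # here: poll I/O events, timers, etc.
stop.set()
```

In a real VM the analogous check sits at well-defined interrupt points in the interpreter or JIT-compiled code, so the heartbeat is noticed within a bounded amount of execution.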
<div><br>
</div>
<div>So I'm defining the MTVM to be the sharing of the VM
between threads, and /not/ just the use of threads to
implement non-Smalltalk subtasks of the VM, and /not/ a
full-blown multithreaded Smalltalk VM providing concurrent
execution of Smalltalk processes in parallel.</div>
<div><br>
<blockquote type="cite">
<div><span>I think that the right model (my
stack/priQueue/pool intuition?) will change a
Herculean task into a fairly straightforward and
achievable one. Change the problem to get better answers.</span><br>
</div>
</blockquote>
<div><br>
</div>
This has been well thought through and discussed. The
definition above is very useful. It provides a system that
can interoperate with concurrent code without having to
implement a system that provides parallelism. It is used in
David Simmons' VMs for S# etc., and a similar (but less
performant) scheme is used in Python VMs.</div>
<div><br>
</div>
<div>Please, let's get this scheme working first. I'm not at
all happy (read: extremely unhappy) that there is not much
focus on working together to get our current VM to an
excellent state, while lots of work goes into other VMs that
are speculative and a long way from being production
ready. We have a huge amount of work to do on Cog:</div>
<div><br>
</div>
<div>- event-driven VM (that hence costs 0% processor time at
idle)</div>
<div>- 64-bits (x64 and ARM and...?)</div>
<div>- Sista adaptive optimizer</div>
<div>- FFI via dynamic generation of marshaling code, as
required for efficient and correct call outs on x64</div>
<div><span style="background-color: rgba(255, 255, 255, 0);">-
MTVM as defined above</span></div>
<div><span style="background-color: rgba(255, 255, 255, 0);">-
an incremental global mark-sweep GC for Spur</span></div>
<div>- running on Xen/Unikernels/containers</div>
<div>- a JavaScript plugin that provides rendering and
events so we can run an efficient VM in a web browser</div>
<div>- a port of the Interpreter/Context VM to Spur</div>
<div><br>
</div>
<div>IMO, things that can /and should/ wait are</div>
<div>- throwing away Slang and providing a true
written-in-pure-Smalltalk VM that is self-bootstrapped a la
Gerardo Richarte and Xavier Burroni</div>
<div>- a truly parallel multi/threaded VM</div>
<div><br>
</div>
<div>and things we shouldn't go anywhere near are</div>
<div>- using libffi</div>
<div>- targeting JavaScript, Java or any other dynamic
language du jour that happens to run in a web browser but
either provides abysmal performance or doesn't support full
Smalltalk semantics</div>
<div>- implementing the VM in other VM frameworks such as PyPy
which simply strengthens that community and weakens our own</div>
<div><br>
</div>
<div>Right now there are only a handful of people who make
commits to the VM and three who are "full time", and we're
all overloaded. But the VM is the base of the pillar, and if
we want to provide high-quality solutions that people will
pay money to use, we have to have a high-quality VM. In Spur
we have a VM that is significantly faster than VW, and very
reliable. In Sista we will have a system that is much
faster, that can be improved upon for years to come, that
can migrate to future VMs (because it is mostly Smalltalk),
and that gives useful support for a high-quality FFI.
People have stepped up and made significant contributions
to give us a respectable VM that is on an arc to becoming a
really high-quality production Smalltalk VM written in
Smalltalk, produced by a very small community. But it is now
2015 and Cog started 7 years ago. All the work on other VMs,
deployment platforms etc., IMO, dilutes and delays delivering
to our community a truly world-class VM that can compete with
Java HotSpot, node.js's V8, LuaJIT, Factor, Swift et al.
Please get on board. We'd love the help, and we can
guarantee you'll have fun and you can guarantee you'll have
an impact.</div>
<div><br>
<blockquote type="cite">
<div><span></span><br>
<span>I appreciate you and this MT discussion.</span><br>
<span></span><br>
<span></span><br>
<blockquote type="cite"><span>Said that:</span><br>
</blockquote>
<blockquote type="cite"><span></span><br>
</blockquote>
<blockquote type="cite"><span>- What is in plans is
MT-FFI, and that will be available eventually.</span><br>
</blockquote>
<blockquote type="cite"><span>- There is an approach I
want to re-work that would allow us to profit from
multicores without going multithreaded: the “hydra”
experiment made some years ago by Igor creates a
good basis for this. But it is also a lot of work
(though a lot less than a complete MT), and not a real
priority for now… I hope to resume work on that area
some day… just not anytime soon.</span><br>
</blockquote>
<span></span><br>
<span>Yes, please. I recall those discussions. Hydra is
cosmological.</span><br>
<span></span><br>
<span>Regards,</span><br>
<span>Robert</span><br>
<span></span><br>
<blockquote type="cite"><span></span><br>
</blockquote>
<blockquote type="cite"><span>Esteban</span><br>
</blockquote>
<blockquote type="cite"><span></span><br>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>On 18 Oct 2015, at
17:56, Ben Coman <<a moz-do-not-send="true"
href="mailto:btc@openinworld.com">btc@openInWorld.com</a>>
wrote:</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>On Sat, Oct 17, 2015 at
2:25 AM, Robert Withers</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span><<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:robert.w.withers@gmail.com">robert.w.withers@gmail.com</a>>
wrote:</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>Yes, exactly. I do
realize I was consciously changing that effort</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>synchronization order.</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>I see 64-bit being
higher priority than multi-threaded for the wider</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>community. Dealing with
larger in-Image data opens the door to more</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>corporate
project/funding opportunities. Also simplifying
the install</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>on modern Linux
platforms without requiring additional 386
libraries</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>will help acceptance
there.</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>It is my humble
opinion, without really knowing, that 64-bit
would have to be redone after the MTVM
completes.</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>I would assume it was
the other way around. Presuming that Eliot has</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>sponsors influencing his
priorities, it seems given that 64-bits will</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>happen first. I would
guess any MTVM development on the old vm would</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>then need to be
reworked.</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>I was doing so with
the idea in mind that I and others</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>might dig into working
on the VM, for threading support, while Eliot</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>maintains focus on
64-bits...a tall order, I know.</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>The usual downside of
splitting resources applies. There are not that</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>many "others" and maybe
they would be drawn away from helping with the</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>64-bit vm. If the
64-bit vm goes slower for lack of resources then</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>your footing for MTVM
will be shifting for a longer time. You may</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>ultimately get where you
want to go faster by helping with the 64-bit</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>vm. The rapport built
with other vm devs from working on 64-bit</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>could then be applied to
MTVM. (Of course, it's your free time, so you</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>should pursue what
interests you.)</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>I was barely familiar
with the VM, slang, interpreter, it years ago...</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>I'm totally unfamiliar
with cog.</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>The experience you gain
from working beside Esteban and Eliot on</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>64-bit Cog/Spur could
then be applied to a MTVM.</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>btw, you may find these
threads interesting...</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>* <a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://lists.pharo.org/pipermail/pharo-dev_lists.pharo.org/2015-April/108648.html">http://lists.pharo.org/pipermail/pharo-dev_lists.pharo.org/2015-April/108648.html</a></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>* <a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://forum.world.st/Copy-on-write-for-a-multithreaded-VM-td4837905.html">http://forum.world.st/Copy-on-write-for-a-multithreaded-VM-td4837905.html</a></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span>cheers -ben</span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>I believe another item
on that list ought to be modernizing slang. So</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>many big items!</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>Robert</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>On 10/16/2015 12:48
PM, Stephan Eggermont wrote:</span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>On 16-10-15 14:05,
Robert Withers wrote:</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>Because of that
assumption I've made and without the
responsibilities</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>you have, Esteban,
but recognizing modernizing NB to FFI, my
desired</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>list is:</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>I would expect the
least total effort to be needed by keeping the
work</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>of Esteban and Eliot
as much as possible aligned. That is what
Esteban's</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>list achieves.</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span>Stephan</span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><span></span><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite"><span></span><br>
</blockquote>
</div>
</blockquote>
</div>
</div>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>