[Vm-dev] [Pharo-dev] Pony for Pharo VM

Shaping shaping at uurda.org
Sat Apr 18 09:15:54 UTC 2020


Just to get in the right frame of mind, consider that because of the Blub Paradox (http://www.paulgraham.com/avg.html) 
you are going to have a hard time convincing people to "change to this language because of Feature X"
just by saying so.  You need to dig deeper.  

 

The Pony compiler and runtime need to be studied.

What better way than to bring the Pony compiler into Squeak? Build a Pony runtime inside Squeak, with the VM simulator. Build a VM. Then people will learn Pony and it will be great!

 

Yes, that is one way.  Then we can simulate the new collector with Smalltalk in the usual way, whilst also integrating ref-caps and dynamic types (the main challenge).  We already know that Orca works in Pony, in high-performance production (not an experiment or toy).  Still, there will be bugs and perhaps room for improvement.  Smalltalk simulation would help greatly there.  The simulated Pony-Orca (the term used in the Orca paper), or simulated Smalltalk-Orca if we can tag classes with ref-caps and keep Orca working, will run even more slowly in simulation mode with all that message-passing added to the mix.  

 

I’m starting to study the Pharo VM.  Can someone suggest what to read?  I see what appears to be outdated VM-related material.  I’m not sure what to study (besides the source code) and what to ignore.  I’m especially interested to know what not to read.



I’m not trying to convince; I’m presenting facts, observations, and resources for study of the problem and its solution.  Hardware is now intensely multicore, and everyone knows this.  The change in programming paradigm is apparent.  Hardware structure is forcing that change.  Convincing yourself will not be difficult when you have the facts.  You likely do already, at least on the problem side.  

The solution is easy.

 

The problem is easy to understand.  It reduces to stop-the-world (StW) GCing in a large heap, and how instead to make many small, well-managed heaps, one per actor.  Orca does that already and demonstrates very high performance.  That’s what the Orca paper is about.

 

The solution for Smalltalk is more complicated, and will involve a concurrent collector.  The best one I can find now is Orca.  If you know a better one, please share your facts.

 

As different event loops on different cores will use the same 

 

externalizing remote interface

 

This idea is not clear.  Is there a description of it?

 

to reach other event loops, we do not need a runtime that can run on all of those cores. We just need to start the minimal image on the CogVM with remote capabilities

 

Pony doesn’t yet have machine-node remoteness.  The networked version is being planned, but is still a ways off.  By remote, do you mean another machine, or another OS/CogVM process on the same machine?  I think the Pony runtime still creates by default just one OS process per app and as many scheduler threads as needed, with each actor having only one thread of execution at a time, by definition of what an actor is (single-threaded, very simple, very small).  A scheduler keeps all cores busy, running and interleaving all the current actor threads.  Message tracing maintains ref counts.  A cycle detector keeps things tidy.  Do Squeak and Pharo have those abilities?
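
For concreteness, here is a minimal Pony sketch (my own illustration with made-up names, not code from the Pony sources or the Orca paper) of what that model looks like on the programmer’s side: actors and behaviours are declared, messages are sent by calling behaviours, and no threads are created or managed explicitly; the runtime’s scheduler maps actors onto its threads.

actor Counter
  var _count: U64 = 0

  be increment() =>
    // A behaviour call just enqueues a message; the body runs later, and an
    // actor handles only one message at a time, so _count needs no lock.
    _count = _count + 1

  be report(env: Env) =>
    env.out.print(_count.string())

actor Main
  new create(env: Env) =>
    let counter = Counter
    counter.increment()
    counter.increment()
    counter.report(env)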

 

to share workload.

 

With Pony-Orca, sharing of the workload doesn’t need to be managed by the programmer.  That’s one of the basic reasons for the existence of Pony-Orca.  The Pony-Orca dev writes his actors, and they are load-balanced automatically, via the actor-thread scheduler and work-stealing, across all the cores when possible.  Making Smalltalk work with Orca is, at this early stage, about understanding how Orca works (study the C++ and program in Pony) and how to implement it, if possible, in a Smalltalk simulator.  Concerning Orca in particular, if you notice at the end of the paper, they tested Orca against the Erlang VM, C4, and G1, and it performed much better than all of them.  
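
To make the “no programmer-managed workload sharing” point concrete, here is a hedged sketch (illustrative names only): Main fans work out to many Squarer actors and collects results in a Collector, with no thread pool, locks, or core assignment anywhere; the scheduler and work-stealing decide where each actor runs.

actor Collector
  let _env: Env
  var _pending: U64
  var _total: U64 = 0

  new create(env: Env, pending: U64) =>
    _env = env
    _pending = pending

  be add(n: U64) =>
    // Runs whenever some Squarer's result message is handled.
    _total = _total + n
    _pending = _pending - 1
    if _pending == 0 then
      _env.out.print("total: " + _total.string())
    end

actor Squarer
  be square(n: U64, out: Collector) =>
    out.add(n * n)

actor Main
  new create(env: Env) =>
    let jobs: U64 = 100
    let collector = Collector(env, jobs)
    var i: U64 = 0
    while i < jobs do
      let s = Squarer
      s.square(i, collector)
      i = i + 1
    end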

 

 

The biggest challenge, I think you would agree, is the system/application design that provides the opportunities to take advantage of parallelism. It kinda fits the microservices arch. So, we would run 64 instances of Squeak to take the multicore to town.

 

No, that’s much slower.  Squeak/Pharo still has the basic threading handicap:  a single large heap.

 

Here’s the gist of the problem again:  the big heap will not work and must go away, if we are to have extreme speed and a generalized multithreading programming solution.  

 

My current understanding is that Pony-Orca (or Smalltalk-Orca) starts one OS process and then spawns threads as new actors begin working.  You don’t need to do anything special as a programmer to make that happen.  You just write the actors, keep them small, use the ref-caps correctly so that the program compiles (the ref-caps must also be applied to Smalltalk classes), and organize your synchronous code into classes, as usual.  Functions run synchronous code.  Behaviours run asynchronous code.
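
A hedged sketch of that split, with illustrative names (nothing here is from an actual code base): _apply is a function (synchronous, can return a value), deposit is a behaviour (asynchronous, returns nothing, and runs when the scheduler next runs the actor).

actor Account
  var _balance: I64 = 0

  fun ref _apply(amount: I64): I64 =>
    // A function: synchronous code; it runs immediately in the caller and
    // can return a value.
    _balance = _balance + amount
    _balance

  be deposit(amount: I64, env: Env) =>
    // A behaviour: asynchronous; calling it only enqueues a message, the
    // body runs later, and nothing is returned to the caller.
    env.out.print("balance: " + _apply(amount).string())

actor Main
  new create(env: Env) =>
    let account = Account
    account.deposit(100, env)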





The issue is not whether to use Pony.  I don’t like Pony, the language; it’s okay, even very good, but it’s not Smalltalk.  I like Smalltalk, whose concurrency model is painfully lame. 

Squeak concurrency model.

Installer ss
    project: 'Cryptography';
    install: 'CapabilitiesLocal'

What abilities does the above install give Squeak?

 

I like Orca because it works on many cores (as many as 64, currently) without a synchronization step for GC, and has wonderful concurrency abilities.  Pony and Orca were co-designed.  The deferred reference counts managed by Orca are maintained by tracing the messages sent between the actors (send/receive tracing).  GCs happen in Pony/Orca when each actor finishes its response to the last received message and goes idle.  The actor then GCs all of its objects no longer referenced by other actors.  The runtime scheduler takes the time needed for each actor’s GCing into account.  No actor waits to GC objects.  An actor’s allocated objects’ ref counts are checked at idle time, and unreferenced objects are GCed in an ongoing, fluid way, in small, high-frequency bursts, with very small, predictable tail latencies as a result.  That’s very interesting if you need smoothly running apps (graphics), design/program real-time control systems, or process data at high rates, as in financial applications at banks and exchanges.

So your use of Pony is purely to access the Orca vm?

 

Orca is not a VM; it’s a garbage collection protocol for actor-based systems.  

 

I suggest using Pony-Orca to learn how Orca works, and then replacing the Pony part of Pony-Orca with Smalltalk (dynamic typing), keeping the ref-caps (because they provide the guarantees).  I realize that this is a big undertaking.  Or:  write a new implementation of Orca in Smalltalk for the VM.  This is currently the second choice, but that could change.

 

I think you will find the CogVM quite interesting and performant. 

 

--Not with its current architecture.

 

If the CogVM is not able to:

1) dynamically schedule unlimited actor-threads on all cores

2) automatically load-balance

3) support actor-based programs innately

4) guarantee no data-races

 

then, no, it is definitely not as interesting as the best concurrent collectors, like Orca, with an integrated type system and language.  Orca has been applied successfully to Pony.  Orca was also applied to the language Encore.  If CogVM can be changed to implement a concurrent collector, then CogVM is interesting.  That’s a big change.  The main value of CogVM now seems to be as a possible building/rebuilding tool for the VM itself.  

 

Did you study the Wallaroo learning experience concerning performance?  

 

I’ve no interest in coding custom, one-off, multi-core apps (or settling for a much slower general solution, as in the Erlang-like concurrency model in Squeak).  Custom-coded multithreading is too costly and too error-prone.  It’s not fun, productive, or even needed, unless you really do need an extremely optimized concurrent solution for a specific domain.  I don’t want inter-process communication before inter-thread communication (much faster) has been exhausted.  The concurrent collector, Orca in this case, in conjunction with the ref-caps generalizes the multicore solution efficiently (that’s the point of it) for any actor-based program, and the zero-copy message passing gives much more speed than IPC.  The tiny heaps cause tiny pauses on async collection.  Runtime message-tracing costs decrease as use of mutable types decreases.  Message tracing happens only because there are mutable types to track and eventually collect; none of that applies to immutable types.  See the test results in the paper for details.   
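
To illustrate the zero-copy point with ref-caps (again a hedged sketch with made-up names, not the paper’s code): an iso argument is handed over with consume, so ownership moves between actors without copying and without locks, while a val argument is immutable and is simply shared by reference.

actor Sink
  be take_mutable(buf: Array[U8] iso, env: Env) =>
    // The iso cap guarantees the sender consumed its alias, so this actor
    // now holds the only reference: the buffer moved without being copied.
    let local: Array[U8] ref = consume buf
    local.push(42)
    env.out.print("buffer size: " + local.size().string())

  be take_shared(text: String val, env: Env) =>
    // Immutable (val) data can be read concurrently by any number of actors.
    env.out.print(text)

actor Main
  new create(env: Env) =>
    let sink = Sink
    let buf: Array[U8] iso = recover Array[U8] end
    sink.take_mutable(consume buf, env)
    sink.take_shared("immutable, shared by reference", env)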

 

The issue is how most efficiently to use Orca, which happens to be working in Pony.  Pony is in production in two internal, speed-demanding, banking apps and in Wallaroo Labs’ high-rate streaming product.  Pony is a convenient way to study and use a working implementation of Orca.  Ergo, use Pony, even if we only study it as a good example of how to use Orca.  Some tweaks (probably a lot of them) could allow use of dynamic types.  We could roll our own implementation of Orca for the current Pharo VM, but that seems like more work than tweaking a working Pony compiler and runtime.  I’m not sure about that.  You know the VM better than I.  (I was beginning my study of the Pharo/OpenSmalltalkVM when I found Pony.)

Sounds like you might regret your choice and have taken the wrong path. 

I don’t see how you form that conclusion.  I’ve not chosen yet.

I seek the easiest integration/mutation path for a concurrent collector and ref-cap system.  

I can start with Pony or a Smalltalk VM simulator.  Either direction may be chosen.  Squeak/Pharo’s current architecture (it has one big heap) is not suitable for general, automatic, fast multithreading.  If all the VM C code can be simulated in Smalltalk before compiling it to an exe, then simulation may be the better path.

Come back to Squeak! ^,^

I see the Actors for Squeak page.  That is not a suitable implementation.  

I’ve not used Squeak since 2004, and don’t know its current state.  I assume that it does not have the four concurrency-related abilities listed above.  Does it?  

If you know, please share the current facts about Squeak’s concurrency abilities.  I prefer to skip the work needed to adapt Smalltalk to a concurrent collector like Orca, if those abilities already exist in Squeak/Pharo.

If most of what Squeak/Pharo offers is pleasant/productive VM simulation, much work still remains to achieve even a basic actor system and collector, but writing VM code in Smalltalk and compiling it to C may be much more productive than writing C++.  The C++ for the Pony compiler and runtime, however, already compiles and works well.  Thus, starting the work in C++ is somewhat tempting.  Can someone explain the limits of how the VM simulator can be used?  How much VM core C is not part of what can be compiled from Smalltalk?  Can all VM C code be compiled from Smalltalk?

 

Shaping


