[Vm-dev] [Pharo-dev] Pony for Pharo VM

Robert robert.withers at pm.me
Sat Apr 18 13:25:45 UTC 2020


Hi Shaping,

On 4/18/20 5:15 AM, Shaping wrote:

> Just to get in the right frame of mind, consider that because of the Blub Paradox (http://www.paulgraham.com/avg.html)
> you are going to have a hard time convincing people to "change to this language because of Feature X"
> just by saying so.  You need to dig deeper.
>
> The Pony compiler and runtime need to be studied.
>
> What better way than to bring the Pony compiler into Squeak? Build a Pony runtime inside Squeak, with the vm simulator. Build a VM. Then people will learn Pony and it would be great!
>
> Yes, that is one way.  Then we can simulate the new collector with Smalltalk in the usual way, whilst also integrating ref-caps and dynamic types (the main challenge).  We already know that Orca works in Pony (in high-performance production—not an experiment or toy).  Still there will be bugs and perhaps room for improvements.  Smalltalk simulation would help greatly there.  The simulated Pony-Orca (the term used in the Orca paper) or simulated Smalltalk-Orca, if we can tag classes with ref-caps and keep Orca working, will run even more slowly in simulation-mode with all that message-passing added to the mix.

The cost of message passing is much reduced when using the CogVM JIT; it is indeed somewhat slower when running in the simulator. I think the objective should be to run the Pony bytecodes on the jitting CogVM. This VM allows you to install your own bytecode set, via a BytecodeEncoder subclass. Note that I was definitely promoting a solution of running Pony on the CogVM, not Orca.
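A quick workspace check of send cost under the JIT (a sketch; #timeToRun is standard Squeak and answers milliseconds, and the absolute numbers vary by machine):

> "ten million unary sends; under Cog's JIT this completes in tens of
> milliseconds, far faster than under the pure interpreter"
> [1 to: 10000000 do: [:i | i yourself]] timeToRun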

>
>
> I’m starting to study the Pharo VM.  Can someone suggest what to read?  I see what appears to be outdated VM-related material.  I’m not sure what to study (besides the source code) and what to ignore.  I’m especially interested to know what not to read.

I would suggest sticking to Squeak, instead of Pharo, as that is where the VM is designed & developed. Here are a couple of interesting blogs covering the CogVM and its documentation [1][2].

>>  I’m not trying to convince; I’m presenting facts, observations, and resources for study of the problem and its solution.  Hardware constraints now are intensely multicore, and everyone knows this.  The changing programming paradigm is apparent.  Hardware structure is forcing that change.  Convincing yourself will not be difficult when you have the facts.  You likely do already, at least on the problem side.
>
> The solution is easy.
>
> The problem is easy to understand.  It reduces to StW GCing in a large heap, and how instead to make many small, well-managed heaps, one per actor.  Orca does that already and demonstrates very high performance.  That’s what the Orca paper is about.

The CogVM has a single heap, divided into what I believe are called "segments", which let it grow dynamically to gain new heap space. The performance of the GC in the CogVM is demonstrated by the following profiling result from running all the Cryptography tests. Load Cryptography with this script, open the Test Runner, select the Cryptography tests, and click 'Run Profiled':

> Installer ss
>     project: 'Cryptography';
>     install: 'ProCrypto-1-1-1';
>     install: 'ProCryptoTests-1-1-1'.
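For reference, the same profile can be produced without the Test Runner UI, using MessageTally directly (a sketch; it assumes the test classes live in system categories beginning with 'ProCryptoTests', per the install script above):

> MessageTally spyOn: [
>     (TestCase allSubclasses select: [:each |
>         (each category ifNil: ['']) asString beginsWith: 'ProCryptoTests'])
>         do: [:testClass | testClass buildSuite run]]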

Here are the profiling results.

>  - 12467 tallies, 12696 msec.
>
> **Leaves**
> 13.8% {1752ms} RGSixtyFourBitRegister64>>loadFrom:
> 8.7% {1099ms} RGSixtyFourBitRegister64>>bitXor:
> 7.2% {911ms} RGSixtyFourBitRegister64>>+=
> 6.0% {763ms} SHA256Inlined64>>processBuffer
> 5.9% {751ms} RGThirtyTwoBitRegister64>>loadFrom:
> 4.2% {535ms} RGThirtyTwoBitRegister64>>+=
> 3.9% {496ms} Random>>nextBytes:into:startingAt:
> 3.5% {450ms} RGThirtyTwoBitRegister64>>bitXor:
> 3.4% {429ms} LargePositiveInteger(Integer)>>bitShift:
> 3.3% {413ms} [] SystemProgressMorph(Morph)>>updateDropShadowCache
> 3.0% {382ms} RGSixtyFourBitRegister64>>leftRotateBy:
> 2.2% {280ms} RGThirtyTwoBitRegister64>>leftRotateBy:
> 1.6% {201ms} Random>>generateStates
> 1.5% {188ms} SHA512p256(SHA512)>>processBuffer
> 1.5% {184ms} SHA256Test(TestCase)>>timeout:after:
> 1.4% {179ms} SHA1Inlined64>>processBuffer
> 1.4% {173ms} RGSixtyFourBitRegister64>>bitAnd:
>
> **Memory**
>     old            -16,777,216 bytes
>     young        +18,039,800 bytes
>     used        +1,262,584 bytes
>     free        -18,039,800 bytes
>
> **GCs**
>     full            1 totalling 86 ms (0.68% uptime), avg 86 ms
>     incr            307 totalling 81 ms (0.6% uptime), avg 0.3 ms
>     tenures        7,249 (avg 0 GCs/tenure)
>     root table    0 overflows

As shown, one full GC took 86 ms and 307 incremental GCs took a total of 81 ms, i.e. 167 ms of GC activity within a profile run lasting 12,696 ms. The total GC time is just 1.31% of the run time. Very fast.

>
>
> The solution for Smalltalk is more complicated, and will involve a concurrent collector.  The best one I can find now is Orca.  If you know a better one, please share your facts.
>
> As different event loops on different cores will use the same
>
> externalizing remote interface
>
> This idea is not clear.  Is there a description of it?

So I gather that the Orca/Pony solution does not treat inter-actor messages within the same process as remote calls? If each core has a separate thread, and thus a separate event loop, it makes sense to hold references to actors in other event loops as remote actors. Thus the parallelism is well defined.

>
>
> to reach other event loops, we do not need a runtime that can run on all of those cores. We just need to start the minimal image on the CogVM with remote capabilities
>
> Pony doesn’t yet have machine-node remoteness.  The networked version is being planned, but is a ways off still.  By remote, do you mean:  another machine or another OS/CogVM process on the same machine?

Yes, I mean both. I also mean between two event loops within the same process, different threads.

>   I think the Pony runtime is still creating by default just one OS process per app and as many threads as needed, with each actor having only one thread of execution by definition of what an actor is (single-threaded, very simple, very small).  A scheduler keeps all cores busy, running and interleaving all the current actor threads.  Message tracing maintains ref counts.  A cycle-detector keeps things tidy.  Do Squeak and Pharo have those abilities?
>
> to share workload.
>
> With Pony-Orca, sharing of the workload doesn’t need to be managed by the programmer.

When I said sharing of workload is a primary challenge, I did not mean explicitly managing concurrency; the event loop ensures concurrency safety. I meant that the design of a parallelized application as concurrent actors is the challenge, and that challenge exists for Smalltalk capabilities and Pony capabilities alike. In fact, instead of talking about actors and concurrent & parallel applications, I prefer to speak of a capabilities model, inherently on an event loop, which is the focal point for safe concurrency.

>   That’s one of the basic reasons for the existence of Pony-Orca.  The Pony-Orca dev writes his actors, and they run automatically in load-balance, via the actor-thread scheduler and work-stealing, when possible, on all the cores.  Making Smalltalk work with Orca is, at this early stage, about understanding how Orca works (study the C++ and program in Pony) and how to implement it, if possible, in a Smalltalk simulator.  Concerning Orca in particular, if you notice at the end of the paper, they tested Orca against the Erlang VM, C4, and G1, and it performed much better than all of them.

I suppose it should be measured against the CogVM, to know for sure whether the single large heap is a performance bottleneck compared to Pony/Orca's performance with tiny per-actor heaps.

> The biggest challenge, I think you would agree is the system/application design that provides the opportunities to take advantage of parallelism. It kinda fits the microservices arch. So, we would run 64 instances of squeak to take the multicore to town.
>
> No, that’s much slower.  Squeak/Pharo still has the basic threading handicap:  a single large heap.

In my proposal, with 64 separate Squeak processes running across 64 cores, there would be 64 heaps, one per process. There would be a finite number of capability actors in each event loop, and this finite set of actors within one event loop would be GC-able by the ordinary global collector, full & incremental. As all inter-event-loop interaction occurs through remote message passing, the three cases are modeled exactly the same way, as remote event loops: inter-vat communication within one process (create two local Vats), inter-vat communication between event loops in different processes on the same machine, and inter-vat communication between event loops in different processes on different machines. (A vat is an event loop.)
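To make that uniformity concrete, here is a sketch of the intra-process case. The Vat class and every selector below (except #eventual, used later in this mail) are hypothetical, chosen to illustrate the model in [4]; they are not the protocol of an installed package:

> | vatA vatB counter ref |
> vatA := Vat named: 'A'.                  "two local event loops"
> vatB := Vat named: 'B'.
> counter := vatA seed: Counter new.       "actor hosted in vat A"
> ref := vatB referenceTo: counter.        "far reference held by vat B"
> ref eventual increment                   "delivered as an inter-vat message"

Swapping vat B for a vat in another process, or on another machine, would change only how the far reference is obtained, not the sends against it.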

>
>
> Here’s the gist of the problem again:  the big heap will not work and must go away, if we are to have extreme speed and a generalized multithreading programming solution.

I am not convinced of this.

>
>
> My current understanding is that Pony-Orca (or Smalltalk-Orca) starts one OS process, and then spawns threads, as new actors begin working.  You don’t need to do anything special as a programmer to make that happen.  You just write the actors, keep them small, use the ref-caps correctly so that the program compiles (the ref-caps must also be applied to Smalltalk classes), and organize your synchronous code into classes, as usual.  Functions run synchronous code.  Behaviours run asynchronous code.

My point was that "writing the actors" and "organizing your synchronous code into classes" are challenging in the sense of choosing what is asynchronous and what is synchronous. The parallel design space holds primacy.

>>  The issue is not whether to use Pony.  I don’t like Pony, the language; it’s okay, even very good, but it’s not Smalltalk.  I like Smalltalk, whose concurrency model is painfully lame.
>
> Squeak concurrency model.
>
> Installer ss
>     project: 'Cryptography';
>     install: 'CapabilitiesLocal'
>
> What abilities does the above install give Squeak?

This installs a local-only (no remote capabilities) capabilities model that attempts to implement the E-Rights capabilities model [3] in Squeak. This also ensures inter-actor concurrency safety.
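A minimal usage sketch of that model, assuming only the #eventual send shown later in this mail; the BankAccount class and the #whenResolved: selector are illustrative assumptions:

> | account promise |
> account := BankAccount new.                "an ordinary object used as an actor"
> promise := account eventual deposit: 100.  "queued on the event loop; answers a promise"
> promise whenResolved: [:balance |
>     Transcript show: 'new balance: ', balance printString; cr]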

>> I like Orca because it works on many cores (as many as 64, currently) without a synchronization step for GC, and has wonderful concurrency abilities.  Pony and Orca were co-designed.  The deferred reference counts managed by Orca run on the messages between the actors (send/receive tracing).  GCs happen in Pony/Orca when each actor finishes its response to the last received message, and goes idle.  The actor then GCs all objects no longer referenced by other actors.  The runtime scheduler takes this time needed for each actor’s GCing into account.  No actor waits to GC objects.  An actor’s allocated objects’ ref counts are checked at idle-time, and unreferenced objects are GCed in an ongoing, fluid way, in small, high-frequency bursts, with very small, predictable tail latencies, as a result.  That’s very interesting if you need smoothly running apps (graphics), design/program real-time control systems, or process data at high rates, as in financial applications at banks and exchanges.
>
> So your use of Pony is purely to access the Orca vm?
>
> Orca is not a VM; it’s a garbage collection protocol for actor-based systems.
>
> I suggest using Pony-Orca to learn how Orca works, and then replace the Pony part of Pony-Orca with Smalltalk (dynamic typing), keeping the ref-caps (because they provide the guarantees).  I realize that this is a big undertaking.  Or:  write a new implementation of Orca in Smalltalk for the VM.  This is currently second choice, but that could change.
>
> I think you will find the CogVM quite interesting and performant.
>
> --Not with its current architecture.
>
> If the CogVM is not able to:
>
> 1) dynamically schedule unlimited actor-threads on all cores

Why not separate actor event-loop processes on each core, communicating remotely? [4][5]

> 2) automatically load-balance

Use of mobility with actors would allow for automated rebalancing.

> 3) support actor-based programs innately

With this code, asynchronous computation of "number eventual * 100" occurs in an event loop and resolves the promise:

> [:number | number eventual * 100] value: 0.03 "returning an unresolved promise until the async computation completes and resolves the promise"

Am I wrong to state that this model provides innate support for actors? Or were you somehow stating that the VM would need innate support? Why does the VM have to know?

> 4) guarantee no data-races

The issue to watch for is whether computations are long-running and livelock the event loop, preventing it from handling other activations. This is a shared issue; Pony/Orca is also susceptible to it. E-Rights' event loops ensure no data races, as long as actor objects are not accessible from more than one event loop.
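A common mitigation is to slice a long computation into eventual self-sends, so that each slice is a separate activation and other messages can interleave. A sketch, as a method on a hypothetical actor class (#processOne: is assumed; #eventual is the send from the model above):

> processAll: aCollection
>     "handle one element per activation, then requeue the rest"
>     aCollection isEmpty ifTrue: [^ self].
>     self processOne: aCollection first.
>     self eventual processAll: aCollection allButFirst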

>
>
> then, no, it is definitely not as interesting as the best concurrent collectors, like Orca, with an integrated type system and language.  Orca has been applied successfully to Pony.  Orca was also applied to the language Encore.  If CogVM can be changed to implement a concurrent collector, then CogVM is interesting.  That’s a big change.  The main value of CogVM now seems to be as a possible building/rebuilding tool for the VM itself.
>
> Did you study the Wallaroo learning experience concerning performance?
>
> I’ve no interest in coding custom, one-off, multi-core apps (or settling for a much slower general solution, as in the Erlang-like concurrency model in Squeak).  Custom-coded multithreading is too costly and too error-prone.  It’s not fun, productive, or even needed, unless you really do need an extremely optimized concurrent solution for a specific domain.  I don’t want inter-process communication before inter-thread communication (much faster) has been exhausted.

Imagine a cloud-based compute engine that uses inter-machine actors to process events from a massively parallel Cassandra database. Inter-thread communication is not sufficient, as there are hundreds of separate nodes. Design-wise, it makes much sense to treat inter-thread, inter-process, and inter-machine concurrency as the same remote interface.

>  The concurrent collector, Orca in this case, in conjunction with the ref-caps generalize the multicore solution, efficiently (that’s the point of it) for any actor-based program, and the zero-copy message passing gives much more speed than IPC.  The tiny heaps cause tiny pauses on async collection.  Runtime message tracing costs decrease as use of mutable types does.  Message tracing happens only because there are mutable types to track and eventually collect; none of that applies to immutable types.  See the test results in the paper for details.
>
>>
>>
>> The issue is how most efficiently to use Orca, which happens to be working in Pony.  Pony is in production in two internal, speed-demanding, banking apps and in Wallaroo Labs’ high-rate streaming product.  Pony is a convenient way to study and use a working implementation of Orca.  Ergo, use Pony, even if we only study it as a good example of how to use Orca.  Some tweaks (probably a lot of them) could allow use of dynamic types.  We could roll our own implementation of Orca for the current Pharo VM, but that seems like more work than tweaking a working Pony compiler and runtime.  I’m not sure about that.  You know the VM better than I.  (I was beginning my study of the Pharo/OpenSmalltalkVM when I found Pony.)
>
> Sounds like you might regret your choice and took the wrong path.
>
> I don’t see how you form that conclusion.  I’ve not chosen yet.

You stated you are not thrilled with using Pony.

> I seek the easiest integration/mutation path for a concurrent collector and ref-cap system.
>
> I can start with Pony or a Smalltalk VM simulator.  Either direction may be chosen.  Squeak/Pharo’s current architecture (it has one big heap) is not suitable for general, automatic, fast multithreading.  If all the VM C code can be simulated in Smalltalk before compiling it to an exe, then simulation may be the better path.
>
> Come back to Squeak! ^,^
>
> I see the Actors for Squeak page.  That is not a suitable implementation.
>
> I’ve not used Squeak since 2004, and don’t know its current state.  I assume that it does not have the four concurrency-related abilities listed above.  Does it?
>
> If you know, please share the current facts about Squeak’s concurrency abilities.  I prefer to skip the work needed to adapt Smalltalk to a concurrent collector like Orca, if those abilities already exist in Squeak/Pharo.
>
> If most of what Squeak/Pharo offers is pleasant/productive VM simulation, much work still remains to achieve even a basic actor system and collector, but the writing of VM code in Smalltalk and compiling it to C may be much more productive than writing C++.  The C++ for the Pony compiler and runtime, however, already compiles and works well.  Thus, starting the work in C++ is somewhat tempting.  Can someone explain the limits of how the VM simulator can be used?  How much VM core C is not a part of what can be compiled from Smalltalk?  Can all VM C code be compiled from Smalltalk?
>
> Shaping

--
Kindly,
Robert

[1] Cog Blog - http://www.mirandabanda.org/cogblog/
[2] Smalltalk, Tips 'n Tricks - https://clementbera.wordpress.com/
[3] Capability Computation - http://erights.org/elib/capability/index.html
[4] Concurrency (Event Loops) - http://erights.org/elib/concurrency/index.html
[5] Distributed Programming - http://erights.org/elib/distrib/index.html