[Vm-dev] Fwd: Memory mapped files and Pharo (aka Unreal Engine 4 integration with Pharo)

Eliot Miranda eliot.miranda at gmail.com
Sun Jan 17 21:54:07 UTC 2016


On Sun, Jan 17, 2016 at 1:12 PM, Dimitris Chloupis <kilon.alios at gmail.com>
wrote:

>
> Eliot, as long as you integrate this and make it a standard part of the
> VM I am perfectly OK with it, because my knowledge of the VM is close to
> zero. I don't care about an optimal solution, just a solution that works.
>

Right.  It would become a standard facility.


> All I want is the ability to do shared memory management from inside
> Pharo. I am very new to all this, so I am not aware of the problems and
> limitations I will face. But I know that you are far better placed than
> me to provide a good implementation.
>
> I don't kid myself: this will be a long process for me. It will take
> time to get familiar with shared memory management, sort out potential
> bugs and dust off my knowledge of C++. So I am not in any hurry, but I
> am willing to learn how to do this in practice, which means I will make
> some test C++ apps to really get the hang of it.
>
> That means I can wait for the threaded VM and a potential implementation
> for shared memory access.
>

Good.  But remember this and be prepared to nag when the threaded FFI
arrives :-).  And if there are volunteers out there looking for an
interesting project that's not too big, this is a candidate.


>
> So yeah, your idea sounds great. The ability to GC shared memory sounds
> awesome, though I am a bit worried about how I will be able to alert my
> C++ code about this GC, so maybe a specific primitive would be better?
>

Good point.


>
> On Sun, Jan 17, 2016 at 10:18 PM Eliot Miranda <eliot.miranda at gmail.com>
> wrote:
>
>>
>> Hi Dimitris,
>>
>> On Sun, Jan 17, 2016 at 8:57 AM, Dimitris Chloupis <kilon.alios at gmail.com
>> > wrote:
>>
>>>
>>> I attach the post I forwarded from pharo-users, but if you want the
>>> summary, it's a collection of questions about whether it's possible to
>>> use memory-mapped-file shared memory with the Pharo VM for IPC. I
>>> thought I would forward it here too, since it may be VM related. Any
>>> help is appreciated.
>>>
>>> ---------- Forwarded message ---------
>>> From: Dimitris Chloupis <kilon.alios at gmail.com>
>>> Date: Sun, Jan 17, 2016 at 6:41 PM
>>> Subject: Memory mapped files and Pharo (aka Unreal Engine 4 integration
>>> with Pharo)
>>> To: Any question about pharo is welcome <pharo-users at lists.pharo.org>
>>>
>>>
>>> Apologies for the long post, but this is not a simple issue and hence
>>> not a simple question.
>>>
>>> So I am looking into ways to integrate Pharo with Unreal, to bring
>>> some nuclear-powered graphics to Pharo and make us first-class
>>> citizens of the 2D and 3D world.
>>>
>>> For those not aware, this is Unreal:
>>>
>>> https://www.unrealengine.com/what-is-unreal-engine-4
>>>
>>> As surprising as it may sound, Unreal and Pharo have a lot in common:
>>>
>>> 1) Both are integrated languages with IDEs
>>> 2) Both promote live coding and live manipulation of data
>>> 3) Both promote visual coding
>>> 4) Both promote ease of use and wizard-based development (aka RAD)
>>>
>>> Unreal is indeed a C++-based project made to be used by C++ coders,
>>> but it comes with a very powerful visual coding language that can map
>>> visual nodes to any C++ function/method. Another piece of common
>>> ground with Pharo is the very strict OO nature of both projects, but
>>> then OOP is very big in game development anyway.
>>>
>>> Of course my goal is not to force Pharoers to learn C++, but rather to
>>> make a Pharo API so Pharo can be used to do some simple graphics
>>> development at the start, and then we can continue importing more and
>>> more functionality. Unreal is generally a heavy engine that requires
>>> quite a powerful GPU, but its rendering is just amazing.
>>>
>>> So how I make Pharo talk to Unreal is the million-dollar question.
>>>
>>> The road of compiling Unreal as a set of DLLs to be loaded by Pharo
>>> via FFI is a road full of thorns: it's quite an undertaking because
>>> Unreal is HUGE, and it would be a nightmare to maintain since Unreal
>>> moves forward very fast.
>>>
>>> So we come to the subject of IPC, or Inter-Process Communication. I am
>>> not new to this: as you know, I have built a socket bridge between
>>> Pharo and Python that allows Pharo to use Python libraries, and in my
>>> implementation I focus on the Blender Python API.
>>>
>>> But sockets are not exactly blazing fast. Calling functions is fine,
>>> because you can even get responses below 1 millisecond, but if you try
>>> to run some heavy loops you will be in trouble. There are workarounds
>>> of course, like sending the whole loop over the socket and so on, but
>>> they overcomplicate something that, at least in my opinion, should
>>> remain simple.
>>>
>>> Another IPC method is shared memory. It is what it says: basically,
>>> processes share a region of memory where they can access the same
>>> data. It is extremely fast, but it has its own traps, for example how
>>> to make sure the processes don't access the same data at the same
>>> time.
>>>
>>> So after some reading I came across a shared memory model called
>>> memory-mapped files. Basically, what that means is essentially a
>>> virtual file that resides in memory (it may also reside on the hard
>>> disk or other permanent physical storage, but that is not necessary)
>>> that different processes can access.
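>>>
>>> As a first experiment on the C++ side, here is a minimal sketch of the
>>> idea, assuming plain POSIX mmap on a regular file; the file name, size
>>> and the toy "command" string are placeholders, not part of any real
>>> protocol:
>>>
>>> // Minimal sketch: map a regular file so two processes can share bytes.
>>> #include <fcntl.h>
>>> #include <sys/mman.h>
>>> #include <unistd.h>
>>> #include <cstdio>
>>> #include <cstring>
>>>
>>> int main() {
>>>     const char *path = "/tmp/pharo-unreal.ipc";  // placeholder path
>>>     const size_t size = 4096;                    // placeholder size
>>>
>>>     int fd = open(path, O_RDWR | O_CREAT, 0600);
>>>     if (fd < 0) { std::perror("open"); return 1; }
>>>     if (ftruncate(fd, size) != 0) { std::perror("ftruncate"); return 1; }
>>>
>>>     // MAP_SHARED makes writes visible to every process mapping the file.
>>>     void *base = mmap(nullptr, size, PROT_READ | PROT_WRITE,
>>>                       MAP_SHARED, fd, 0);
>>>     if (base == MAP_FAILED) { std::perror("mmap"); return 1; }
>>>
>>>     // Write a toy "command" another process can read from its mapping.
>>>     std::strcpy(static_cast<char *>(base), "spawnCube 0 0 100");
>>>
>>>     munmap(base, size);
>>>     close(fd);
>>>     return 0;
>>> }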
>>>
>>> So in our case we can have a memory-mapped file that Unreal, Pharo and
>>> even Blender can access, where we can store commands/messages that
>>> each application must execute, let them share data, and the sky is the
>>> limit.
>>>
>>> Now I know this will not be a walk in the park, especially for someone
>>> like me, a C++ noob, but I am looking to the Pharo wise men and wise
>>> women to guide me at least through the obstacles.
>>>
>>> So the questions are the following:
>>>
>>> 1) Do you think this is possible with the current Pharo?
>>>
>>
>> You should be able to map a file via the FFI by calling mmap, and
>> access it via an ExternalPointer.  But that's crappy.
>>
>>
>>> 2) Will I be limited by the fact that the VM currently does not
>>> multithread or cannot use multithreading libraries? (I have no
>>> intention of using multithreading, but some handling of process access
>>> to the data may be necessary to make sure the data is safe from
>>> concurrent modification.)
>>>
>>
>> As you've noticed we intend to provide a threaded FFI that will allow you
>> to use threaded libraries.  But that's not really the issue.  There must be
>> some handshaking protocol for the two halves to communicate via shared
>> memory without conflict.  The threaded FFI should make it possible to call
>> e.g. pthread_cond_wait to synchronise, but a lower-level test-and-set or
>> conditional move facility would be nicer.  See below.
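>>
>> To make that concrete, here is a rough sketch of the kind of handshake
>> that could live at the start of the shared mapping, using process-shared
>> pthread objects; the struct layout and names are invented purely for
>> illustration:
>>
>> #include <pthread.h>
>>
>> // Invented layout: both processes place this at the start of the mapping.
>> struct SharedChannel {
>>     pthread_mutex_t lock;
>>     pthread_cond_t  ready;
>>     int             hasCommand;   // 0 = empty, 1 = a command is waiting
>>     char            command[256];
>> };
>>
>> // Run once by whichever process creates the file, after mmap'ing it.
>> void initChannel(SharedChannel *ch) {
>>     pthread_mutexattr_t ma;
>>     pthread_condattr_t  ca;
>>     pthread_mutexattr_init(&ma);
>>     pthread_condattr_init(&ca);
>>     // PTHREAD_PROCESS_SHARED lets the mutex/condvar work across
>>     // processes as long as they live in memory both processes map.
>>     pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
>>     pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
>>     pthread_mutex_init(&ch->lock, &ma);
>>     pthread_cond_init(&ch->ready, &ca);
>>     ch->hasCommand = 0;
>> }
>>
>> // Consumer side: block until the producer has published a command.
>> void waitForCommand(SharedChannel *ch) {
>>     pthread_mutex_lock(&ch->lock);
>>     while (!ch->hasCommand)
>>         pthread_cond_wait(&ch->ready, &ch->lock);
>>     // ... read ch->command here ...
>>     ch->hasCommand = 0;
>>     pthread_mutex_unlock(&ch->lock);
>> }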
>>
>>> 3) Is there anything else I should be aware of as common pitfalls for
>>> such an implementation?
>>>
>>> 4) Can the current FFI help in this task, or would I need to implement
>>> this as a C DLL and load it from Pharo?
>>>
>>
>> Forgive me for diving down a bit before I answer your question, but I
>> find being concrete helps.
>>
>> Spur has a segmented memory model.  When one grows the heap the VM
>> allocates memory via mmap and integrates it with the rest of the heap by
>> using "bridge" objects at the end of each segment.  A bridge object is two
>> 64-bit words.  It says "I am a bytes object" so that the GC will never look
>> inside the object, and it lies about its size, saying "I am large enough to
>> span to the start of the next segment".  So when adding a new segment, the
>> bridge in the segment before the new one is "shortened" to point to the
>> start of the new segment, and a new bridge is added to the end of the
>> new segment so that it points to the start of the next segment in
>> memory.  The
>> last segment's bridge has a zero length.  Spur also provides pinning, as
>> simple as a per-object flag that tells the GC not to move an object.  So
>> bridges are pinned, and hence compaction leaves bridges unmolested, but any
>> object in old space can be pinned also.
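>>
>> Purely to illustrate the shape described above (this is not the real
>> Spur header bit encoding, just the idea from the paragraph):
>>
>> #include <cstdint>
>>
>> // Illustration only: a bridge is two 64-bit words at the end of each
>> // segment, pretending to be a pinned byte object whose claimed size
>> // spans the gap to the start of the next segment (zero in the last).
>> struct Bridge {
>>     std::uint64_t header;      // "I am a bytes object"; GC never looks inside
>>     std::uint64_t spanToNext;  // claimed size, reaching the next segment
>> };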
>>
>> So an elegant way of providing shared memory in Spur would be to add a
>> "map a file as a byte array" primitive.  There are issues with this.  The
>> first 16 bytes of the mapped file would have to be used to construct the
>> header for a ByteArray that would comprise the rest of the segment,
>> excepting another 16 bytes at the end that would need to be a bridge.  So
>> if you wanted to share this with another application you'd probably want to
>> allocate a file that was, say, 2k bytes bigger than needed, and use the
>> first and last 1k bytes to hide the header and the bridge.
>>
>> Ah, better still would be to construct three objects in the segment, an
>> initial ByteArray that stretches to 16 bytes before the second page, a
>> ByteArray whose contents start at the beginning of the second page and
>> reach all the way to the penultimate page, and then a ByteArray to reach
>> from the end of the penultimate page to the bridge at the end of the
>> mmapped file segment.  All three objects could be pinned and be prevented
>> from being GC'ed until the entire segment was released.  That would give
>> you a ByteArray (the middle of the three) whose contents were all but the
>> first and last pages of the mmapped file, aligned on a file page boundary
>> and whose length was a multiple of the page size.  C++/C clients could then
>> map the central portion and use e.g. pointers to access it.
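>>
>> To make the C++/C side concrete, here is a sketch of how a client might
>> map just that central, page-aligned portion; the file name is invented
>> for illustration and error handling is minimal:
>>
>> #include <fcntl.h>
>> #include <sys/mman.h>
>> #include <sys/stat.h>
>> #include <unistd.h>
>> #include <cstdio>
>>
>> int main() {
>>     const char *path = "/tmp/pharo-shared.seg";      // placeholder name
>>     int fd = open(path, O_RDWR);
>>     if (fd < 0) { std::perror("open"); return 1; }
>>
>>     struct stat st;
>>     if (fstat(fd, &st) != 0) { std::perror("fstat"); return 1; }
>>
>>     long pageSize = sysconf(_SC_PAGESIZE);
>>     // Skip the first page (object header) and the last page (bridge);
>>     // what remains is page-aligned and a multiple of the page size.
>>     off_t offset = pageSize;
>>     size_t length = (size_t)st.st_size - 2 * (size_t)pageSize;
>>
>>     void *shared = mmap(nullptr, length, PROT_READ | PROT_WRITE,
>>                         MAP_SHARED, fd, offset);
>>     if (shared == MAP_FAILED) { std::perror("mmap"); return 1; }
>>
>>     // 'shared' now aliases the middle ByteArray's contents on the Pharo
>>     // side; plain pointers into it work as usual.
>>
>>     munmap(shared, length);
>>     close(fd);
>>     return 0;
>> }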
>>
>> To implement this there would be an mmap-file-as-segment primitive and
>> perhaps a release-segment primitive, or some magic in the GC to release the
>> segment when the ByteArrays were no longer accessed.
>>
>> So then I could imagine test-and-set or conditional-move primitives on
>> ByteArray that supported manipulating locks in a ByteArray, and by
>> extension, in the shared file.  I think this kind of approach would give
>> you the fastest, most direct access to shared memory I can think of.  Does
>> this appeal?
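>>
>> By way of illustration, the sort of thing such a primitive would boil
>> down to on the C++ side: a test-and-set spinlock kept in one word of the
>> shared region.  The names are invented, and it assumes
>> std::atomic<std::uint32_t> is lock-free, as it is on mainstream
>> platforms:
>>
>> #include <atomic>
>> #include <cstdint>
>> #include <new>
>>
>> struct SharedLock {
>>     std::atomic<std::uint32_t> word;   // 0 = free, 1 = held
>> };
>>
>> // Run once, by the process that initialises the shared region, at some
>> // agreed offset into the mapping.
>> SharedLock *initLock(void *sharedBase) {
>>     return new (sharedBase) SharedLock{ {0} };
>> }
>>
>> void acquire(SharedLock *l) {
>>     // Atomic test-and-set: loop until we swap 0 -> 1.
>>     while (l->word.exchange(1, std::memory_order_acquire) != 0)
>>         ;   // spin; a real implementation would back off or block
>> }
>>
>> void release(SharedLock *l) {
>>     l->word.store(0, std::memory_order_release);
>> }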
>>
>>> PS: I have no intention of messing with the Pharo VM, and I also want
>>> to avoid the use of plugins, as I want this to work with standard
>>> Pharo distributions.
>>>
>>
>> _,,,^..^,,,_
>> best, Eliot
>>
>
>


-- 
_,,,^..^,,,_
best, Eliot

