[Vm-dev] Re: [Pharo-dev] [NB] Trying to implement a ReadStream on NBExternalAddress

Eliot Miranda eliot.miranda at gmail.com
Tue Sep 24 19:58:18 UTC 2013


On Mon, Sep 23, 2013 at 3:10 PM, Nicolas Cellier <nicolas.cellier.aka.nice at gmail.com> wrote:

> Isn't Eliot just implementing this feature, having a segment of
> non-relocatable objects?
>

Sort of.  Spur will allow any object in old space to stay still, and
through become it can move any object to old space very simply.  So instead
of having a special fixed-space segment it has a best-fit compaction
algorithm that doesn't slide objects, but moves them into holes.  With this
kind of compaction it is very easy to leave objects put.
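
In pseudocode the idea is roughly this (an illustrative sketch, not Spur's
actual code; heap, freeChunks and the selectors on them are invented for
the example):

    "Best-fit compaction: move each mobile object into the smallest free
     hole that fits; pinned objects are simply skipped, so they stay put."
    holes := freeChunks asSortedCollection: [:a :b | a size < b size].
    heap mobileObjectsDo: [:obj | | hole |
        hole := holes detect: [:h | h size >= obj size] ifNone: [nil].
        hole ifNotNil: [
            holes remove: hole.
            heap move: obj into: hole.
            "return any unused tail of the hole to the free list"
            hole size > obj size ifTrue: [
                holes add: (hole remainderAfter: obj size)]]].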

A further advantage of Spur is that objects have 64-bit alignment, so
passing arrays to, e.g., code using SSE instructions won't cause potential
alignment faults.

But for NB, see below for the hack that ThreadedFFI uses.



>
> 2013/9/23 Igor Stasenko <siguctua at gmail.com>
>
>>
>>
>>
>> On 23 September 2013 21:40, Igor Stasenko <siguctua at gmail.com> wrote:
>>
>>>
>>>
>>>
>>> On 23 September 2013 16:23, Camillo Bruni <camillobruni at gmail.com> wrote:
>>>
>>>> Hi Jan,
>>>>
>>>> I think I will add the ByteArray accessor to NBExternalAddress today
>>>> or tomorrow since I need it as well for another project.
>>>>
>>> hmm, reading from memory into a ByteArray can be done with a memory
>>> copy:
>>>
>>> inputs: address, offset, size to read
>>>
>>> newAddress := NBExternalAddress value: address value + offset.
>>> buffer := ByteArray new: size.
>>> NativeBoost memCopy: newAddress to: buffer size: size.
>>>
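>>> Given the thread's subject, that read pattern is enough to back a
>>> ReadStream; a minimal sketch (the class name and the 'address' and
>>> 'position' instance variables are illustrative, not an existing API):
>>>
>>> NBExternalReadStream >> next: size
>>>     "Copy the next size bytes from external memory into a fresh
>>>      ByteArray and advance the position."
>>>     | chunk |
>>>     chunk := ByteArray new: size.
>>>     NativeBoost
>>>         memCopy: (NBExternalAddress value: address value + position)
>>>         to: chunk
>>>         size: size.
>>>     position := position + size.
>>>     ^chunk
>>>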
>>> writing works the same way, just swap the source and destination:
>>>
>>> newAddress := NBExternalAddress value: address value + offset.
>>> buffer "is given from somewhere".
>>> NativeBoost memCopy: buffer  to: newAddress size: size.
>>>
>>> but as Jan noted, you cannot ask it to read or write starting at a
>>> specified offset within the ByteArray, e.g.:
>>>
>>> copy from: address to: buffer + someOffset
>>> nor:
>>> copy from: buffer + someOffset to: someAddress
>>>
>>> this is where we need to introduce a special 'field address' type, so
>>> you can construct it like this:
>>>
>>> offsetAddress := buffer nbAddressAt: offset.
>>>
>>> so then you can pass it to any function which expects an address, like
>>> memory copy or any foreign function.
>>>
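>>> For example, to write size bytes from the middle of the buffer out to
>>> external memory (a sketch assuming the proposed nbAddressAt: existed):
>>>
>>> NativeBoost
>>>     memCopy: (buffer nbAddressAt: someOffset)
>>>     to: someAddress
>>>     size: size.
>>>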
>>> Since objects move in memory, we cannot calculate the address of a
>>> field beforehand:
>>>
>>> address := NBExternalAddress value:  someObject address + offset.
>>>
>>> because if a GC happens, after computing such an address and its actual
>>> use, you will read/write to the wrong location.
>>>
>>
>> ** after computing and *before* actual use **
>>
>>
>>> Thus we should keep the oop + offset pair up to the point of passing it
>>> to the external function, under controlled conditions that guarantee no
>>> GC is possible.
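>>>
>>> One way to carry that pair around (a sketch; the class and every
>>> selector here are hypothetical) is a value object which only the
>>> marshalling code resolves to a raw address, past the last point at
>>> which a GC can occur:
>>>
>>> Object subclass: #NBFieldAddress
>>>     instanceVariableNames: 'object offset'
>>>     classVariableNames: ''
>>>     category: 'NB-Sketch'.
>>>
>>> NBFieldAddress >> resolveDuringCall
>>>     "Called by the marshaller itself, once the object can no longer
>>>      move, to produce the real address (nbAddress is hypothetical)."
>>>     ^NBExternalAddress value: object nbAddress value + offset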
>>>
>>>
>>> Things would be much simpler if we could have pinning, isn't it? :)
>>
>
Yes, but for the moment there is a hack one can use, a neat hack invented
by Andreas Raab.  The Squeak GC is a two-space GC, with old space
(collected by fullGC) and new space (collected by incrementalGC).  An
incrementalGC will move objects in new space but leave objects in old
space alone.  A tenuringIncrementalGC will compact new space and then make
new space part of old space.  Therefore one way of nearly pinning objects
is to do a young GC that tenures objects into old space via
tenuringIncrementalGC, and then take the fullGC lock, preventing fullGC
from running until the external call is finished.  All arguments to the
call become old, and they won't be moved until the fullGCLock is released.
This doesn't help with passing a buffer that will be used after the call
returns, but it does help with a buffer being passed to code that might
call back.
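
Image-side, the pattern looks roughly like this (a sketch: the primitive
and module names follow the FFI convention, and the tenuring selector and
error symbol are illustrative rather than the exact code):

    callout: args
        "If any argument may still move, the primitive fails with
         PrimErrObjectMayMove; tenure everything old and retry.  The
         plugin holds the fullGC lock for the duration of the call."
        <primitive: 'primitiveCallout' module: 'SqueakFFIPrims' error: ec>
        ec == #'object may move' ifTrue: [
            Smalltalk tenuringIncrementalGC.  "hypothetical selector"
            ^self callout: args].
        ^self primitiveFailed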

See uses of PrimErrObjectMayMove, e.g. ThreadedFFIPlugin>>primitiveCallout
and
platforms/Cross/plugins/FilePlugin/sqFilePluginBasicPrims.c>>sqFileReadIntoAt

HTH
eliot