[Vm-dev] Informing the VM that the display bits have changed via primitiveBeDisplay

tim Rowledge tim at rowledge.org
Tue May 2 17:27:09 UTC 2017


> On 02-05-2017, at 7:47 AM, Bert Freudenberg <bert at freudenbergs.de> wrote:
> 
> 
> 
>> On Tue, May 2, 2017 at 12:50 AM, Eliot Miranda <eliot.miranda at gmail.com> wrote:
> 
>> [snip]
> 
>> Surely a better approach is to inform the VM of the location, depth and extent of the bits via some ioSetDisplayBits function and then caching those values somewhere in the VM. Thoughts?
>> 
> Wouldn't the display bits still be moved around during GC? Nothing the image can do about that.
> 
> You could cache the values in the beDisplay primitive, but I don't see how that would change anything.
> 
> The original VM was single-threaded so this was not an issue. Are you trying to make the GC concurrent? I bet there are many many places that would break ... Maybe you need to temporarily pin the involved objects?

This is mostly a problem when the OS window drawing code runs in a separate thread: if the VM is in the middle of a GC and the window is moved on-screen, the OS will send an event about the move, the drawing thread will try to read the partly-processed Display values, those values will be … odd … and strange things will happen.
The assorted sizes are going to be SmallInts (unless we have mind-bogglingly big displays in our near future) so they’re trivial to cache; it’s only the bitmap we have to worry about. There’s a fairly tiny window of time when it might be getting moved, so maybe we could detect that and simply block use of it for that period? Or recognise that the danger is after the old bitmap has been copied and we are overwriting that space, so as soon as the copy of the bitmap is done we need to update the cached pointer. Err, it gets a bit more complex if the bitmap is being moved only a small distance, such that the new copy overwrites part of the old one.
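
Roughly what I have in mind, as a sketch in C — the locking scheme and all names other than Eliot's suggested ioSetDisplayBits are invented for illustration, and whether a pthread mutex is the right tool is an open question:

#include <pthread.h>
#include <string.h>

/* Sketch only: cache the display parameters on the VM side so the
   OS drawing thread never reads object memory directly. The GC (or
   primitiveBeDisplay) re-publishes the bits pointer whenever the
   bitmap moves, and the lock covers the tiny window during the move. */
static void *cachedDisplayBits = 0;
static int cachedWidth, cachedHeight, cachedDepth;
static pthread_mutex_t displayLock = PTHREAD_MUTEX_INITIALIZER;

/* Called from primitiveBeDisplay, and again after any GC that
   relocates the bitmap, so the cached pointer never dangles for long. */
void ioSetDisplayBits(void *bits, int width, int height, int depth)
{
    pthread_mutex_lock(&displayLock);
    cachedDisplayBits = bits;
    cachedWidth = width;
    cachedHeight = height;
    cachedDepth = depth;
    pthread_mutex_unlock(&displayLock);
}

/* The drawing thread copies out of the cache under the same lock;
   if the GC is mid-move it simply blocks for the duration.
   Assumes 32bpp for brevity. */
void ioCopyDisplayRow(unsigned *out, int y)
{
    pthread_mutex_lock(&displayLock);
    if (cachedDisplayBits && cachedDepth == 32)
        memcpy(out, (unsigned *)cachedDisplayBits + y * cachedWidth,
               cachedWidth * sizeof(unsigned));
    pthread_mutex_unlock(&displayLock);
}

The point being that the only shared mutable state is the cached pointer, and publishing it the moment the copy completes is exactly the "update the cached pointer as soon as the copy is done" step above.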

For RISC OS there’s a) no problem anyway because of co-operative multi-tasking, and b) I keep a separate pixmap for the OS to read anyway because the damn pixel format is 0BGR rather than ARGB. I was under the impression that macOS magically double-buffers anyway, thus (probably) avoiding this, but I can see iOS not doing that to save memory. After all, they only have several GB of RAM, and we all know that even a trivial text editor needs 64GB of free space to open a 5-character document these days.
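
For what it’s worth, the 0BGR shuffle for that separate pixmap is just a per-word byte rearrangement; something like this sketch, assuming 32bpp and that 0BGR means zero in the top byte, then blue, green, red (the function name is made up):

#include <stdint.h>
#include <stddef.h>

/* Convert 32bpp ARGB pixels to 0BGR: swap the red and blue channels
   and drop alpha, leaving the top byte zero. Illustrative only. */
static void copyARGBto0BGR(const uint32_t *src, uint32_t *dst, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        uint32_t argb = src[i];
        uint32_t r = (argb >> 16) & 0xFF;
        uint32_t g = (argb >> 8) & 0xFF;
        uint32_t b = argb & 0xFF;
        dst[i] = (b << 16) | (g << 8) | r;   /* top byte stays 0 */
    }
}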

tim
--
tim Rowledge; tim at rowledge.org; http://www.rowledge.org/tim
Death is a nonmaskable interrupt. For now...



