[Vm-dev] Re: [Pharo-dev] blog post

Florin Mateoc florin.mateoc at gmail.com
Mon Sep 9 14:58:14 UTC 2013


On 9/8/2013 1:34 PM, Eliot Miranda wrote:
>
>
> Well, VW's permSpace is a bit less than a dedicated class space.  It is merely a 3rd generation of the oldest objects
> which just happens to contain classes and methods (and method dictionaries and Smalltalk and globals et al.).  Yes this
> could be a good idea and yes this is potentially doable in Spur.  *But* it doesn't have to be done now.  The most
> important thing is to get a version of Spur that is much faster than the current system asap.  Don't let the perfect
> be the enemy of the good, etc.
>

Sure, no pressure :)
I certainly did not mean that we should try to make it perfect from the start, only that, if it might influence
design considerations, we should keep it in mind.

>     I also assume that a dedicated space for pinned memory might also simplify the garbage collector and even help performance a bit.
>
>
> Indeed.  But... what do you think of this idea (again from the SpurMemoryManager class comment):
> "A segmented oldSpace is useful.  It allows growing oldSpace incrementally, adding a segment at a time, and freeing
> empty segments.  But such a scheme is likely to introduce complexity in object enumeration and compaction
> (enumeration would appear to require visiting each segment, compaction must be within a segment, etc.).  One idea that
> might fly to allow a segmented old space that appears to be a single contiguous space is to use fake pinned objects to
> bridge the gaps between segments.  The last two words of each segment can be used to hold the header of a pinned
> object whose size is the distance to the next segment.  The pinned object's classIndex can be one of the puns so that
> it doesn't show up in allInstances; this can perhaps also indicate to the incremental collector that it is not to
> reclaim the object, etc.  However, free objects would need a large enough size field to stretch across large gaps in
> the address space.  The current design limits the overflow size field to a 32-bit slot count, which wouldn't be large
> enough in a 64-bit implementation.  The overflow size field is at most 7 bytes since the overflow size word also
> contains a maxed-out 8-bit slot count (for object parsing).  A byte can be stolen from e.g. the identityHash field to
> beef up the size to a full 64-bits."
>
> So while the system might indeed keep a segment that it puts pinned objects in preferentially, this wouldn't be
> visible to the image level.  It would only be an optimization in the VM.  Does that make sense?
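
If I understand the bridge idea correctly, the following C-style sketch is roughly what writing such a fake pinned
object into the last two words of a segment might look like. The names, tag values, bit positions and field widths
are purely my own guesses for illustration, not the actual Spur header layout:

#include <stdint.h>

/* All names, bit positions and widths below are guesses, purely to
   illustrate the bridge idea. */
enum {
    SlotCountShift      = 56,   /* assumed position of the 8-bit slot count */
    MaxSlotCount        = 0xFF, /* saturated => the real size is in the overflow word */
    PinnedBit           = 1u << 30,
    BridgeClassIndexPun = 3     /* assumed reserved index, hidden from allInstances */
};

/* Write the two-word bridge header into the last two words of a segment.
   gapInSlots is the number of 64-bit slots between this segment's limit
   and the start of the next segment. */
static void writeBridge(uint64_t *segmentLimit, uint64_t gapInSlots)
{
    uint64_t *overflowWord = segmentLimit - 2;
    uint64_t *headerWord   = segmentLimit - 1;

    /* Overflow word: a maxed-out 8-bit slot count (for object parsing) plus
       the real slot count.  With 7 bytes this tops out at 2^56 - 1 slots;
       a byte stolen from the identityHash field would stretch it to a full
       64 bits. */
    *overflowWord = ((uint64_t)MaxSlotCount << SlotCountShift)
                  | (gapInSlots & 0x00FFFFFFFFFFFFFFull);

    /* Base header: saturated slot count again, pinned, and the punned
       classIndex so the bridge never shows up in allInstances. */
    *headerWord = ((uint64_t)MaxSlotCount << SlotCountShift)
                | PinnedBit
                | BridgeClassIndexPun;
}

Presumably the compactor and the incremental collector would then just step over these bridges like any other pinned
object, which is what would make the segmented old space appear contiguous from enumeration's point of view.
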
>
>     To me it appears that this kind of specialization of memory (splitting it into dedicated spaces) would almost always be a win.
>     While it increases the complexity by increasing the number of components to manage, it also simplifies the components, since they need to handle fewer special cases.
>     But I have never implemented a vm :)
>
>
> Yes it makes sense.  But supporting it at the image level is work.  VW's ObjectMemory/MemoryPolicy is quite
> sophisticated and I don't want to make implementing something similar for Squeak/Pharo a gating issue for the release
> of Cog+Spur.  Again, don't let the perfect be the enemy of the good.  The first cut of the design, the first release
> of the design, is not the final release.  Evolution is a constant.

I did not mean supporting it at the image level. This is also in response to your point about not having the
pinned-object segment visible at the image level, which I think is perfectly fine.
Tempting as image-level support may sound, I think the image should offer as high a level of abstraction over the
machine as possible, only breaking it when a specific and compelling reason is identified. In this case, pinning
itself breaks the abstraction, but it is justified by the need to interoperate with the rest of the world.

I think exposing low-level details is a losing battle, because the hosts are becoming more and more complex. If one
really wanted to be in control, one would have to interact with the OS's paging/swapping, even with caches and
scheduling/processor affinity, let alone the fact that the OS itself might be virtualized. Or even the hardware. So
where do we stop? These external aspects can dominate actual performance so much as to render our knobs useless. We
should count ourselves lucky if the VM/garbage collector can itself keep up with all these mechanisms and cooperate
with them. Speaking of which: would it be possible for the VM/garbage collector to handle its own paging/swapping, so
that the OS (which has much less specific information) does not interfere?
This is also in response to Tim's reminiscences of the good old days. I did have a lot of fun in my youth counting
processor cycles of the individual instructions to make sure my interrupt routines, running directly on the metal, with
no OS, could be serviced in the allotted time. It was a great feeling to be in such full control. I think those days are
gone.

I think you only mentioned tagging for the 32-bit world in your blog.
Can you please add some comments about 64-bit tagging?

>
> Sure, I'll add stuff to the design sketch soon.  But the idea is to provide the 64-bit system with immediate 60-bit
> or 61-bit floats that implement a subset of 64-bit IEEE double precision in the centre of the range (actually with
> the same range as single-precision 32-bit floats), the same as in 64-bit VW.
>
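
To check my understanding, below is roughly how I naively imagine a 61-bit immediate float could be encoded under a
3-bit tag; the tag value, the rotation and the exponent window are just guesses of mine for illustration, not the
actual Spur (or VW) encoding:

#include <stdint.h>
#include <string.h>

/* Illustrative 3-bit tagging of immediate floats: only doubles whose
   exponent falls inside the single-precision window are encoded immediately;
   everything else would stay a boxed Float.  Tag value and layout are
   guesses. */
enum {
    TagBits           = 3,
    ImmediateFloatTag = 4,            /* assumed tag value */
    ExponentOffset    = 1023 - 127    /* double bias - single bias = 896 */
};

/* Nonzero if d fits the immediate window: zero, or an exponent within the
   single-precision range. */
static int isImmediateFloat(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    unsigned exponent = (unsigned)((bits >> 52) & 0x7FF);
    return d == 0.0
        || (exponent >= ExponentOffset && exponent <= ExponentOffset + 0xFF);
}

/* Rotate the sign bit down next to the mantissa, rebias the exponent into
   8 bits, then shift left to make room for the tag and or it in. */
static uint64_t encodeImmediateFloat(double d)
{
    uint64_t bits, rotated, rebased;
    memcpy(&bits, &d, sizeof bits);
    rotated = (bits << 1) | (bits >> 63);         /* sign to bit 0 */
    rebased = rotated <= 1 ? rotated              /* +0.0 and -0.0 */
                           : rotated - ((uint64_t)ExponentOffset << 53);
    return (rebased << TagBits) | ImmediateFloatTag;
}

Decoding would just reverse these steps, and doubles whose exponent falls outside the window would fall back to
ordinary boxed Float objects on the heap.
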

Thank you,
Florin