lowspace signalling and handling issues
John M McIntosh
johnmci at smalltalkconsulting.com
Mon May 2 05:56:01 UTC 2005
On May 1, 2005, at 11:07 AM, Andreas Raab wrote:
>> If you take another look at what I wrote, I think you'll see that
>> that is exactly what I was saying; with many processes running,
>> simply interrupting the one that happened to push the allocator over
>> the limit isn't a sufficient response.
>
> *Phew* Thanks, I'm relieved (I was trying to get to the server but I
> can't get to it right now).
>
>> So we're in agreement about the problem, let's try to find a good
>> solution.
>
> You know, sometimes I wish we'd have swap space to really utilize. One
> of the nice things about swap space is that degradation is continuous
> so it's not the sudden "boom - you're out of memory" situation but
> rather a graceful "starting to get tight ... getting tighter ... now
> we're really running into trouble" situation. And most times you're
> running out of patience and interrupt whatever was going on long
> before you ran out of swap space.
>
You could tag each process with an instance variable that counts memory
allocations (or tracks the allocation rate); then, in a low-space
condition, you slow down the fastest consumer. If you recall, in the
past I had some code to record dispatch time, since there is only one
place in the VM where the process switch occurs. The same thought
applies here: in the image, the low-space logic could consider the
fastest memory allocation consumers.
Perhaps tagging object allocation by process owner would also be
interesting; after a full GC you could then know how much memory is
allocated per process...
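To make the idea concrete, here is a minimal sketch (in Python, purely
illustrative, not actual Squeak VM or image code) of what per-process
allocation tracking might look like. All names here (Process, Scheduler,
on_process_switch, lowspace_suspect) are hypothetical; the one hook point
mirrors the single place in the VM where the process switch occurs:

```python
class Process:
    """Hypothetical process with an allocation-counting instance var."""
    def __init__(self, name):
        self.name = name
        self.bytes_allocated = 0   # running total of allocations
        self.last_sample = 0       # total at the previous switch
        self.alloc_rate = 0        # bytes allocated during last slice

class Scheduler:
    """Sketch of a scheduler that samples rates at the switch point."""
    def __init__(self, processes):
        self.processes = processes

    def on_process_switch(self, proc):
        # Single hook point, analogous to the one place in the VM
        # where process switches occur: update the allocation rate.
        proc.alloc_rate = proc.bytes_allocated - proc.last_sample
        proc.last_sample = proc.bytes_allocated

    def lowspace_suspect(self):
        # In a low-space condition, pick the fastest memory consumer
        # as the process to slow down or interrupt.
        return max(self.processes, key=lambda p: p.alloc_rate)

# Usage: the worker allocates far more than the UI process, so the
# low-space logic singles it out rather than whichever process
# happened to trip the limit.
ui, worker = Process("ui"), Process("worker")
sched = Scheduler([ui, worker])
ui.bytes_allocated += 1_000
worker.bytes_allocated += 50_000
for p in (ui, worker):
    sched.on_process_switch(p)
print(sched.lowspace_suspect().name)  # → worker
```

The key design point is that the rate sampling piggybacks on an
existing, unavoidable event (the process switch), so it adds no new
hooks to the hot allocation path.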
>> Right now I think I'll find a good solution of aqueous caffeine
>> compounds in
>> elevated enthalpy dihydrogen monoxide.
>
> *grin*
>
> Cheers,
> - Andreas
>
>
--
===========================================================================
John M. McIntosh <johnmci at smalltalkconsulting.com> 1-800-477-2659
Corporate Smalltalk Consulting Ltd. http://www.smalltalkconsulting.com
===========================================================================