Squid plan

Jecel Assumpcao Jr jecel at merlintec.com
Sat May 17 02:22:31 UTC 2003


On Friday 16 May 2003 14:35, Anthony Hannan wrote:
> Squid segments.  Maybe MMU segments can be used to implement them if
> it would help performance or design (I don't know), but they may be
> too machine dependent.

The x86 is the only segmented architecture left on the market, I think. 
It does have a significant fraction of installed machines, but you are 
right that it is not a good idea to depend on this.

> What if I use msync?  It forces a write to disk of the address
> ranges you specify.

If you sync often enough, the probability of a disaster becomes very 
small (but never zero). Your performance goes out the window, however, 
so you might as well do it right.
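
To make the tradeoff concrete, here is a minimal C sketch of the msync 
approach on a POSIX system (the file name and segment size are invented 
for illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SEGMENT_SIZE (1 << 20)          /* 1 MB, illustrative only */

    int main(void)
    {
        int fd = open("segment.img", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the segment file so ordinary stores mutate it in place. */
        char *seg = mmap(NULL, SEGMENT_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (seg == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... mutate objects inside the mapped segment ... */

        /* Force the dirty range to disk.  MS_SYNC blocks until the
           write completes, which is exactly where the performance
           goes if you do this often. */
        if (msync(seg, SEGMENT_SIZE, MS_SYNC) < 0)
            perror("msync");

        munmap(seg, SEGMENT_SIZE);
        close(fd);
        return 0;
    }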

> But I am open to trying your scheme as well (using a write log).  We
> should keep the implementation decision hidden in the Segment module.

Yes - there is no need to expose the whole system to this kind of 
implementation detail.
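
For comparison, a rough C sketch of the write-log alternative (all 
names here are invented, not code from any actual Squeak VM): each 
store into a persistent segment also appends an offset/value pair to a 
sequential log, and only the log has to be forced to disk at a commit 
point:

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* One entry per mutated slot: where the store happened and the
       new value.  Redo-only logging, for simplicity. */
    struct log_entry {
        uint32_t offset;        /* slot offset within the segment */
        uint32_t value;         /* new contents of the slot */
    };

    /* Record a store.  The segment itself can be written back
       lazily, since replaying the log reconstructs it. */
    static void log_store(FILE *log, uint32_t offset, uint32_t value)
    {
        struct log_entry e = { offset, value };
        fwrite(&e, sizeof e, 1, log);
    }

    /* At a commit point, force only the log to disk.  The log is
       written sequentially, which is much cheaper than msync'ing
       scattered dirty pages. */
    static void log_commit(FILE *log)
    {
        fflush(log);
        fsync(fileno(log));
    }

Either way, only the Segment module ever sees this.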

> I like it.  But I'm leaning towards replicating an object and its
> free children, not whole segments.  I know an object and its free
> children could actually define a segment, but I am defining a segment
> more broadly as any cut/region/subgraph of the global object graph. 
> Each segment resides on a specific machine.  Instead of segments
> being magically distributed/replicated, objects are. 

So I can get a replica of an object while its segment officially resides 
in another node? Where would it live in my node? It seems I would have 
to create a local segment just for it.

> This allows us
> to implement the magic in the object domain instead of at a
> lower-level (another example of moving things out of the VM and into
> the image). 

You could also consider a proper reflective architecture. Then these 
things would live both in (meta-)objects and at a lower level.

> If we still want to group objects, we can add owner
> fields to the objects like you are suggesting.

I want as few fields as possible.

> Instead of a single roots array, there is one per outside segment
> that references it.  So each segment knows every other segment that
> has references to its objects.  When an object is no longer
> referenced locally but is still referenced from the outside, it is
> moved to one of these segments and the others are updated to point to
> it.  If there are no more outside references then it is simply
> garbage collected.  Hence we have distributed garbage collection.

What if someone has a reference to segment A, which has a reference to 
segment B? Can they use that to get a reference to B without B knowing 
about it? Or will A create an indirect reference to B and pass that on, 
so that all accesses have to go through A? See 
http://www-sor.inria.fr/projects/sspc/
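
One possible shape for these structures, in C (every name below is 
invented, and this is only one reading of the proposal): each segment 
keeps a root table per remote segment that references it, and it hands 
out (segment, index) pairs rather than raw pointers, so every access 
comes back through the owner:

    #include <stddef.h>

    struct object;                      /* opaque heap object */

    /* Roots held on behalf of ONE remote segment.  Objects only
       reachable through such tables are not local garbage.  No
       bounds checking in this sketch. */
    struct root_table {
        int            remote_segment;  /* who holds these references */
        struct object *roots[64];
        size_t         count;
    };

    struct segment {
        int                id;
        struct root_table *incoming;    /* one per referencing segment */
        size_t             n_incoming;
    };

    /* What a remote segment actually gets: not a pointer, but a
       (segment, index) pair that must be resolved by the owner. */
    struct remote_ref {
        int    segment;
        size_t index;
    };

    /* Export a reference: pin the object in the root table kept for
       that remote segment and hand back an indirect reference.  All
       later accesses come back through the owner, so nobody reaches
       B behind its back, and the owner can move or collect the
       object once the table entry is dropped. */
    static struct remote_ref export_ref(struct segment *s,
                                        struct root_table *t,
                                        struct object *o)
    {
        t->roots[t->count] = o;
        return (struct remote_ref){ s->id, t->count++ };
    }

Passing raw pointers around instead would be the case where B never 
finds out who can reach it.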

> > I keep all objects around forever.
>
> With no garbage collection, how do you free memory?

Just dump the least recently used segments. But I was exaggerating - 
there is GC for non-persistent objects.
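
A minimal sketch of that eviction policy in C (invented names; a real 
system would also have to write dirty segments back first):

    #define MAX_SEGMENTS 256

    struct seg_slot {
        int           id;
        unsigned long last_used;        /* logical clock of last access */
        int           resident;         /* currently in RAM? */
    };

    static struct seg_slot slots[MAX_SEGMENTS];
    static unsigned long lru_clock;

    /* Touch a segment whenever one of its objects is accessed. */
    static void seg_touch(struct seg_slot *s)
    {
        s->last_used = ++lru_clock;
    }

    /* Under memory pressure, drop the least recently used resident
       segment.  Persistent segments are already safe on disk, so
       "dumping" one just means releasing its memory. */
    static struct seg_slot *seg_evict(void)
    {
        struct seg_slot *victim = NULL;
        for (int i = 0; i < MAX_SEGMENTS; i++) {
            if (!slots[i].resident)
                continue;
            if (victim == NULL || slots[i].last_used < victim->last_used)
                victim = &slots[i];
        }
        if (victim)
            victim->resident = 0;
        return victim;
    }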

> I checked it out.  Very interesting.  But it seems a little complex
> to me. Maybe you're combining too many concepts (viewpoints,
> replication, versions) into one (segments).  My approach is to do it
> in layers. Segments first, then replication, then versions, then
> modules, etc. All are independent concepts.  (I have not talked
> about replication and versions yet but I will).

Both alternatives have their advantages. Look at Xanadu vs 
html/http/tcp/ip. The integrated approach can easily fail, but when it 
works the resulting systems are smaller.

Michael van der Gulik wrote:
> Interesting. Don't know if compressing segments is worth the effort 
> though. Computers have lots of memory and big hard disks; if you're
> not being overly wasteful then a bit of overhead doesn't hurt anybody.

I am interested in $30 computers, so it does hurt me. And see 
http://www.cs.utexas.edu/users/oops/compressed-caching/index.html

I agree with most of the other stuff you wrote. My idea is not to fix 
policy, but to create mechanism and use it to build a default policy as 
an example.

-- Jecel


