"Lex Spoon" lex@cc.gatech.edu wrote: [SNIP]
A small clarification, here: what I believe Lex and I were talking about was not "multiple connected servers", but multiple independent servers. The client should be able to integrate the information from these various servers, but I don't believe that we need or want the servers themselves to be aware of each other.
Sure, but the problems are mostly the same - how to manage a complex object model in a distributed manner. :)
Yes, I was thinking that you probably have to have the servers be independent, at least in one direction. You probably can't have the local server being *required* to make round trips to the global server. Only probably, though - I have not thought about it enough to close the door on that design possibility entirely. Mostly, it just seems right that the local repository should be entirely under its own control, and that we should be very careful about what requirements we place on it.
I agree, one design I have somewhere in the back of my head goes like this:
- Add code to SMAccount so that it can disconnect itself from the rest of the model (replacing the referenced objects with proxies that hold only their UUIDs) and later reconnect by resolving those UUIDs back into real objects. This means a disconnected SMAccount can be serialized and sent to a server or downloaded from it.
- Skip the current ImageSegment code (well, we could keep it as a good mechanism to get a full map of course) and instead save each account on its own on disk.
- A server can optionally know a "parent server". If it doesn't then it is on its own totally. If it does, it can regularly connect and download any modified accounts since last time.
- When you "commit" your changes you simply send your full SMAccount to a selected server. This means you make your modifications (creating releases, whatever) in your own image and then using a simple HTTP post or whatever, you smack your full account up on the server.
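To make the disconnect/reconnect step in the list above concrete, here is a minimal sketch in Python (the real code would be Smalltalk; the class names `Proxy`, `SMObjectLike` and `Account`, and their shapes, are my own illustrative inventions, not the actual SqueakMap classes). Disconnecting swaps every outgoing reference for a UUID-only proxy so the account can be serialized on its own; reconnecting resolves each proxy against a UUID-to-object registry:

```python
import uuid

class Proxy:
    """Stand-in for a model object, holding only its UUID."""
    def __init__(self, id):
        self.id = id

class SMObjectLike:
    """Toy model object with a UUID and references to other objects."""
    def __init__(self):
        self.id = uuid.uuid4()
        self.refs = []

class Account:
    """Toy account that can detach from and reattach to a shared model."""
    def __init__(self, objects):
        self.objects = objects  # the objects owned by this account

    def disconnect(self):
        # Replace outgoing references with UUID-only proxies so the
        # account can be serialized and shipped to a server by itself.
        for obj in self.objects:
            obj.refs = [Proxy(r.id) for r in obj.refs]

    def reconnect(self, registry):
        # Resolve each proxy against a map of UUID -> real object.
        # A "hard" variant would raise if a UUID is missing from the map.
        for obj in self.objects:
            obj.refs = [registry[p.id] for p in obj.refs]
```

After `disconnect`, the account holds no direct object references, which is exactly what lets it travel over a plain HTTP POST.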
The above is pretty simple to do - we could start with a "hard" requirement that the SMAccount must be able to resolve all its proxies, or else the operation aborts. One downside is that you would typically need one account for each server you want to publish on. A likely scenario is 3-4 levels (global, department/company, local server, localhost), which would mean having a separate account for each of these. I have been toying with going more fine-grained and letting each SMObject know which server it "belongs" to, so you could get away with a single SMAccount - when you commit it, it filters itself appropriately and sends only a partial copy, without any SMObjects that don't belong on the given server.
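The fine-grained filtering idea could look something like this sketch (Python for illustration; the dict shape and the `server` field are hypothetical stand-ins for an SMObject knowing which server it belongs to):

```python
# Toy account: each object records the server it "belongs" to.
# The shapes here are illustrative, not the actual SqueakMap model.
account = [
    {"name": "PkgA", "server": "global"},
    {"name": "PkgB", "server": "localhost"},
    {"name": "PkgC", "server": "global"},
]

def partial_copy_for(server, objects):
    """Return a partial copy of the account holding only the objects
    that belong on the given server; everything else is filtered out
    at commit time."""
    return [o for o in objects if o["server"] == server]
```

So a single account could be committed to several servers, each one receiving only its own slice.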
Anyway, something like the above seems pretty doable.
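The "parent server" polling mentioned above can also be sketched briefly (again Python for illustration; `server` and `parent` are plain dicts standing in for real servers, and a real implementation would fetch over HTTP rather than read a dict):

```python
def sync_from_parent(server, parent):
    """Pull the accounts modified on the parent since our last sync."""
    if parent is None:
        # No parent: this server is entirely on its own.
        return []
    since = server.get("last_sync", 0.0)
    # Download only the accounts modified since the last sync.
    changed = [a for a in parent["accounts"] if a["modified"] > since]
    for acct in changed:
        server.setdefault("accounts", {})[acct["name"]] = acct
    if changed:
        server["last_sync"] = max(a["modified"] for a in changed)
    return changed
```

Run regularly, this keeps a child server eventually consistent with its parent without the child ever being *required* to make a round trip at lookup time.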
Being disconnected does change the design problems in one way: unique identifiers can no longer be guaranteed. One general strategy for approaching this is that if the local guy defines an identifier identical to one the global server already has, then the local guy is *overriding* the global server -- just like with class inheritance.
Well, I think the model doesn't need that - all SMObjects have UUIDs. But sure, there are a few ways to "clash" today - for example, package names are verified to be unique. I was thinking that any such conflicts should be noticed at "commit time": if there is a conflict we simply abort and tell the user.
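The commit-time check could be as simple as this sketch (Python for illustration; `registered`, the `incoming` dict shape, and `CommitConflict` are all hypothetical names, not SqueakMap code):

```python
class CommitConflict(Exception):
    """Raised when an incoming account clashes with server state."""

def check_commit(registered, incoming):
    """Abort (raise) if any incoming package name is already
    registered to a *different* account on the server.

    `registered` maps package name -> owning account id; `incoming`
    is a dict with an "id" and a list of "packages".
    """
    for pkg in incoming["packages"]:
        owner = registered.get(pkg["name"])
        if owner is not None and owner != incoming["id"]:
            raise CommitConflict(
                "package name %r already taken by %s" % (pkg["name"], owner))
```

Re-committing your own packages passes; colliding with someone else's package name aborts the whole commit, and the user is told why.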
Lex
regards, Göran