On 18 March 2010 07:26, James Foster <Smalltalk@jgfoster.net> wrote:
On Mar 17, 2010, at 8:45 PM, Igor Stasenko wrote:
On 18 March 2010 03:35, Chris Muller <asqueaker@gmail.com> wrote:
This is where I start to get confused. If "immutable" is a global attribute of each object, wouldn't there tend to need to be only one universal user of the bit at a time? Otherwise, the database might be setting the bit on an object for its purposes, while [your other favorite framework], having its own purposes for the bit, might not want it set at that time...?
Good point. Really, what would you do if you have two frameworks sharing the same object, and both want to use (set/reset) the immutable bit for their own purposes? This creates a conflict.
Yes it does, so you need to make a decision about which of the two frameworks you will use, or you need to look at using them in such a way that they don't interfere with each other. Consider the two most likely candidates for use of immutability (in my experience): compiler and database proxy.
- With immutability, the compiler can share literals (strings, arrays, etc.).
- With immutability, a database framework can recognize modifications to objects copied from a remote system.
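To make the compiler case concrete, here is a sketch of the hazard that immutable literals prevent (Example and #greeting are made up for illustration):

    "Suppose the compiler shares one 'Hello' literal among
     all methods that mention that string."
    greeting
        ^ 'Hello'

    "Mutating the answered string then corrupts every later sender:"
    Example new greeting at: 1 put: $J.
    Example new greeting.    "would now answer 'Jello'"

    "With the literal marked immutable, the at:put: above raises an
     error instead, so the sharing is safe."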
An immutable bit is not the only way to address the immutability of literals. Take symbols, for instance: the implementation inherently maintains a set of symbols and makes sure they are unique and not mutable. This means that we could do the same for any other kind of literal: arrays, strings, characters, and share them without the need to introduce an immutable bit.
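A minimal sketch of that kind of interning, assuming a hypothetical LiteralPool class (safety then relies, as with Symbol, on not exposing mutators on pooled objects):

    Object subclass: #LiteralPool
        instanceVariableNames: 'literals'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch'

    initialize
        literals := Set new

    intern: aLiteral
        "Answer the canonical copy of aLiteral, adding it on first
         sight, the same way the Symbol table keeps symbols unique."
        ^ literals
            detect: [:each | each = aLiteral]
            ifNone: [ literals add: aLiteral copy ]

    "usage:"
    pool := LiteralPool new.
    (pool intern: 'abc') == (pool intern: 'abc' copy)    "true: one shared instance"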
For the most part, these two uses will not conflict. The potential conflict would be where a compiler literal is passed to the database framework and written to a remote store. By default (say) the database framework would mark the object immutable; if a modification were attempted, it would log the object as 'dirty' (needing to be written at the next commit) and then change it to be mutable. This would corrupt the compiler's usage.
The solution is to have the database framework notice on its first write that the object is already immutable and keep it in its cache with an 'immutable' flag, so that it would not accept an attempt to modify it.
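In pseudo-Smalltalk, that write path might look roughly like this (the names are illustrative, not GemStone's actual API; cache is an IdentityDictionary and dirtySet an IdentitySet held by the framework):

    writeObject: anObject
        "On first write, remember whether somebody else already made
         the object immutable; if so, never flip the bit ourselves."
        anObject isImmutable
            ifTrue: [ cache at: anObject put: #externallyImmutable ]
            ifFalse: [
                cache at: anObject put: #tracked.
                anObject beImmutable ].
        self store: anObject

    handleWriteAttemptTo: anObject
        "Invoked when something tries to modify a tracked immutable object."
        (cache at: anObject) == #externallyImmutable
            ifTrue: [ self error: 'immutable outside the database; modification refused' ]
            ifFalse: [
                dirtySet add: anObject.    "write it at the next commit"
                anObject beMutable ]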
Do you mean like this:
object beImmutable.
database commit: [ object ].
object beMutable.
object setFoo: bar.
object beImmutable.
database commit: [ object ].
The above is an example where an object, recorded as an immutable one, is then mutated outside a DB transaction, so the DB can't capture the attempt to modify it. What does GemStone do to handle this?
Another thing, I think, is how to allow nesting:
object beImmutable.
[ [ object setFoo: bar ]
    on: AttemptToModifyImmutable
    do: [:ex | framework1 handleException: ex ] ]
  on: AttemptToModifyImmutable
  do: [:ex | framework2 handleException: ex ]
In this way, if framework1 were the one who set the immutable bit, it should handle the exception and continue as normal. But if framework1 had already seen this object as immutable, it should pass the exception to the outer layer, where framework2 may handle it. But then it could be problematic to have changes made by framework2 be seen by framework1.
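With the standard exception protocol, that discipline could be written roughly as follows (#pass and #retry are the ordinary Exception methods; it is assumed here that the exception carries the failing object in #object, and #owns: is hypothetical bookkeeping on the framework):

    [ object setFoo: bar ]
        on: AttemptToModifyImmutable
        do: [:ex |
            (framework1 owns: ex object)
                ifTrue: [
                    "we set the bit ourselves: record the change and try again"
                    framework1 markDirty: ex object.
                    ex object beMutable.
                    ex retry ]
                ifFalse: [ ex pass ] ]    "not ours: let the outer handler decide"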
The general rule is that one can only make mutable something that one has earlier made immutable. With that, the risk of conflicts is much reduced.
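A framework can enforce that rule with simple bookkeeping, e.g. (a sketch; 'frozen' would be an IdentitySet owned by the framework):

    freeze: anObject
        "Only claim objects whose bit we actually set."
        anObject isImmutable ifTrue: [ ^ self ].    "someone else's bit; leave it"
        frozen add: anObject.
        anObject beImmutable

    thaw: anObject
        "Refuse to clear a bit we did not set."
        (frozen includes: anObject)
            ifFalse: [ ^ self error: 'not made immutable by this framework' ].
        frozen remove: anObject.
        anObject beMutable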
In any case, the potential for a conflict is not a reason to deny the feature. One simply has to use it carefully and with knowledge of what the frameworks expect.
I'd say that while immutability helps make databases faster, it actually has much wider usability. All that OODBs need is to track which objects were modified during a transaction, and immutability is not the only way to achieve that.
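For instance, a transaction could track modified objects with an explicit write barrier instead of the bit (a sketch; the markDirty: send would have to appear in, or be compiled into, every mutating method, with 'transaction' being however the current transaction is reached):

    Object subclass: #Transaction
        instanceVariableNames: 'dirty'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch'

    initialize
        dirty := IdentitySet new

    markDirty: anObject
        dirty add: anObject

    commit
        "Write only what was registered as modified, then reset."
        dirty do: [:each | self write: each].
        dirty := IdentitySet new

    "a mutator cooperating with the barrier:"
    setFoo: aValue
        transaction markDirty: self.
        foo := aValue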
James