Hi Chris,
Thanks yet again for your clarifications. I really appreciate it.
I tried the solution, using MagmaSessionRequest signalCommit: in my Repository's store and save methods, and signalNoteOldKeysFor: in the mutators that generate the keywords.
I only had to adapt my MagmaIntegrationTest and add a WARequestFilter to my Seaside app to bootstrap the MagmaSessionRequest.
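For anyone following along, the filter setup might look roughly like the sketch below. Only WARequestFilter and MagmaSessionRequest are names from this thread; the filter class, its instance variable, and the #complete step are assumptions, so check the actual MagmaSessionRequest protocol in your image before copying anything.

```smalltalk
"Sketch only -- MyMagmaFilter and #complete are assumptions."
WARequestFilter subclass: #MyMagmaFilter
	instanceVariableNames: 'request'
	classVariableNames: ''
	package: 'MyApp-Seaside'

MyMagmaFilter >> handleFiltered: aRequestContext
	"Bootstrap one MagmaSessionRequest per Seaside request, so that
	 #signalCommit: and #signalNoteOldKeysFor: have a request to
	 report to."
	request := MagmaSessionRequest new.
	[ self next handleFiltered: aRequestContext ]
		ensure: [ request complete ]   "hypothetical finish step"
```

The filter would then be registered on the Seaside application (in Seaside 3, something like #addFilter:); the registration details depend on your Seaside version.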
Using this Repository abstraction, I even have a GUI that works with both the in-memory version and the Magma-backed version. Since the GUI only knows the Repository interface (the store:, save:, and findxxx methods), it just works in both cases. Which is nice.
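The protocol that makes this swap possible could be sketched as follows; all class names and the findByKeyword: selector are hypothetical, only store:, save:, and the findxxx pattern come from the text above.

```smalltalk
"Abstract protocol the GUI codes against."
Repository >> store: anObject
	^ self subclassResponsibility

Repository >> save: anObject
	^ self subclassResponsibility

Repository >> findByKeyword: aString
	^ self subclassResponsibility

"In-memory version, handy for the GUI and for tests."
InMemoryRepository >> store: anObject
	contents add: anObject

"Magma-backed version."
MagmaRepository >> store: anObject
	rootCollection add: anObject
	"plus the MagmaSessionRequest signalCommit: described above"
```

Because the GUI only ever sends the abstract protocol, swapping the concrete subclass is invisible to it.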
Incidentally, there is a parallel "lack of transparency" in standard Smalltalk and even, if I am not mistaken, in Java. If you use a standard Dictionary / HashTable to look up objects and something causes an object's #hash / hashCode() to change, the Dictionary must be rehashed; I actually cannot remember whether this is true in Java, but it is in Smalltalk.
You are absolutely right; it is the same in the Java world. Updating a field that influences the hash key does not automatically update any collection in which that hash happens to be used. I was again thinking in SQL terms, where if you update a column that has an index defined on it, the index gets updated automatically.
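A minimal Smalltalk illustration of the problem (the exact behavior varies by dialect, since a collection's #hash is implementation-defined, but in Squeak/Pharo it changes when the size changes):

```smalltalk
| key dict |
key := OrderedCollection with: 'magma'.
dict := Dictionary new.
dict at: key put: 42.

key add: 'seaside'.   "mutating the key changes its #hash"

"The entry is still stored, but lookup can now miss it, because it
 sits in the bucket of the *old* hash value:"
dict at: key ifAbsent: [ 'not found' ].

"Rehashing repairs the dictionary after such mutations:"
dict rehash.
dict at: key ifAbsent: [ 'not found' ].   "finds the entry again"
```

This is exactly the manual bookkeeping that a SQL index spares you, and that #signalNoteOldKeysFor: addresses on the Magma side.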
That's error prone, which is the reason I tried to make it completely automatic: you set it up correctly once and move on, and you don't have to think about it again.
It is an unfortunate piercing of the transparency. For me its impact has been very limited and isolated; but then, I mostly only use MagmaCollections for keyword searching my domain models. I put a #signalNoteOldKeysFor: in my #description: setter and, done.
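Concretely, the pattern described here is just one extra line in the setter. A sketch, assuming a hypothetical MyArticle class; the thread suggests signalNoteOldKeysFor: is sent to MagmaSessionRequest, but double-check the receiver in your image:

```smalltalk
MyArticle >> description: aString
	"Let Magma note the old index keys *before* the mutation,
	 so the keywords index can be updated on commit."
	MagmaSessionRequest signalNoteOldKeysFor: self.
	description := aString
```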
I'm doing the same. If that is the only gap and all the rest of Magma stays transparent, it is still very impressive: you just do some initial setup and, from then on, just expand the domain.
The index I'm trying to create makes this more error prone: I have a root object which is in a MagmaCollection, with a keywords index. The keywords are made up of a list that combines multiple fields of the root object with fields of the root's sub-objects. Using this, I can implement a kind of Google search box in which you can find keywords dispersed over my entire domain and let Magma retrieve the correct aggregate root, and I would not have to integrate any kind of full-text search technology to do this. The reason I wanted it to be transparent is that if I, for instance, add a new field to this keywords list, I now have to remember the #signalNoteOldKeysFor: in its setter (which is error prone ...).
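In other words, something like the following sketch, where every contributing setter has to remember the signal. All names except the selectors already mentioned in this thread are hypothetical:

```smalltalk
"Keywords combine the root's own fields with the sub-objects' fields."
MyRoot >> keywords
	| words |
	words := OrderedCollection new.
	words addAll: self title substrings.
	words addAll: self description substrings.
	self subObjects do: [ :each |
		words addAll: each name substrings ].
	^ words

"Fragile part: *every* setter that feeds into #keywords -- on the
 root *and* on each sub-object -- must remember this line:"
MySubObject >> name: aString
	MagmaSessionRequest signalNoteOldKeysFor: self root.
	name := aString
```

Forget the signal in just one such setter and the index silently goes stale for that field, which is the fragility being described.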
Ah, thank you for the explanation. May I offer a couple of suggestions?
Option 1) Can the "sub-objects" know their parent / "root" object? If so, instead of indexing your root object by its own keywords plus all of its sub-objects' keywords, make the root object respond with its *own* keywords, and make each sub-object respond with its *own* keywords too. Add the sub-objects to the MagmaCollection as well as the root object. When a Reader of matching objects is obtained, display all of them (a heterogeneous list) or, if the sub-objects are not wanted, traverse up their parents to the "roots" and put those into a Set (to avoid presenting duplicates to the user). This way, you only need #signalNoteOldKeysFor: in the keywords setter of each type of object that can be searched.
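A sketch of that root traversal; the parent accessor and variable names are hypothetical:

```smalltalk
"Each object can climb to its aggregate root."
MySubObject >> root
	^ parent isNil
		ifTrue: [ self ]
		ifFalse: [ parent root ]

"After a search: collapse the matches to unique roots for display."
uniqueRoots := (matches collect: [ :each | each root ]) asSet.
```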
I will try this option.
Option 2) Throw the MagmaCollection, the indexes, and all the calls to #signalNoteOldKeysFor: out the window. Implement #maContextKeywordsDo: on all domain objects you want to have keyword-search capability (otherwise, the printString of the object will act as its keywords!). As in Option 1, each domain object only values the passed-in Block with its *own* keywords.
You can then send #maNewSearchContext to any object in order to search its "sub-objects". This search object runs in the background, provides progress indication, delivers results as they are found (so reviewing them can begin concurrently), and even orders the results according to how well they matched (e.g., whole match first, then front match, finally substring match). The results themselves are a searchable context, and multiple contexts can even be grouped into "CompositeContexts" so that one keyword can search a number of sources.
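For concreteness, a sketch of both selectors in use. Only #maContextKeywordsDo: and #maNewSearchContext come from the description above; the domain class, its accessors, and how a query is actually run against the context are assumptions — the real protocol lives in the MaAbstractContext hierarchy:

```smalltalk
"Each domain object values the block with its *own* keywords only."
MyArticle >> maContextKeywordsDo: aBlock
	aBlock value: self title.
	aBlock value: self description

"Obtain a background-searching context rooted at a domain object;
 submitting a keyword and enumerating the ranked, streamed results
 is then done through the context's own protocol."
context := myDomainRoot maNewSearchContext.
```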
To learn more about this, start at the class MaAbstractContext and its class comment. This framework is pretty easy to use, and it works.
I have a question about this. I looked into it, and I'm a bit concerned about scalability. I've done some testing, and with a MagmaCollection and an index on it, Magma can easily search through a million records and find the first 20 matching ones in 250 milliseconds. That's actually pretty fast!! And since Magma also scales in the number of nodes you can add, you can more or less guarantee that every user gets a response time in that range. Which is very nice!
So now the question. I noticed that this context framework uses a collection reader, enumerates all elements in the database, and checks whether the keywords match (with a match percentage). So it can happen that the one you're actually looking for is just not found, because there is a timeout on the search time (5 minutes; I don't know anybody who is willing to wait that long for a response). Is what I'm saying correct, or did I not completely understand it?
Man, I sure wish more folks had your good patience! Someone else recently said Magma is "dog-slow". It may be, but I like dogs. :) Hopefully Cog will help someday!
Thank you again for your never-ending effort explaining and improving Magma!
Kind Regards,
Bart