Some questions

Norberto Manzanos nmanzanos at gmail.com
Thu Jul 6 20:41:33 UTC 2006


Hi Chris and Magma people.

Finally I am uploading the data from my old database to Magma. I found some
errors in my objects, but there was a critical question about the way you
commit when using a large collection. If I perform a single commit for each
element, it takes too long. With one commit for the whole collection an error
is raised: 'couldnt serialize more than 10 megabytes'. So I had to use a mixed
solution: one commit every x elements of the collection. I'm uploading about
150,000 objects, and grouping them in batches of 500 worked fine. The question
is: is there a way to know how much space each object will occupy in the
serializer, so I can tune these groups of commits to take the least possible time?

Another question. I was wondering what would happen if I change the class
definition of my objects while the objects are already live in the database.
I'm running some tests to find out, but I'd like to know how Magma is expected
to behave if a) I add an instance variable, b) I remove an instance variable,
c) I change the hierarchy of the class, d) I change the class of a collaborator,
etc.

And finally, we need the method #asOrderedCollection to convert a
MagmaCollection. It's not implemented there, so the definition in Object is
used instead, and I'm not sure whether Magma relies on that implementation
(Object >> #asOrderedCollection). Can we implement this method as a normal
collect over the elements?
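To make sure I'm asking the right thing, this is roughly what I have in mind,
just a sketch, assuming MagmaCollection understands #do: (or can otherwise be
enumerated):

    asOrderedCollection
        "Answer an OrderedCollection holding the elements of this MagmaCollection.
         Sketch only; assumes #do: enumerates the elements."
        | result |
        result := OrderedCollection new.
        self do: [:each | result add: each].
        ^ result

That would go as a method on MagmaCollection, instead of falling back to the
definition inherited from Object.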

Thanks in advance.
Norberto Manzanos