Hi Martin, it's hard to say why your image might be growing. So that we may work on the problem from a common perspective, I created three test scripts that attempt to recreate it.
"Create script"
| path mc |
path _ 'c:\temp\memLeakCheck'.
mc _ MagmaCollection new
	addIndex: (MaUUIDIndex attribute: #key);
	yourself.
MagmaRepositoryController
	create: path
	root: (Dictionary new at: 'mc' put: mc; yourself)
"start server in one image"
| path |
path _ 'c:\temp\memLeakCheck'.
MagmaServerConsole new
	open: path;
	processOn: 1010;
	inspect
"load from client in another image"
| mc session |
session _ MagmaSession hostAddress: #(127 0 0 1) asByteArray port: 1010.
session connectAs: 'loader'.
mc _ session root at: 'mc'.
session begin.
1 to: 25000 do: [ :n |
	mc add: UUID new -> nil.
	n \\ 1000 = 0 ifTrue: [ | timeToRun |
		Transcript cr; show: 'committing ', n printString, '... '.
		timeToRun _ [session commitAndBegin] durationToRun.
		Transcript show: timeToRun printString ] ].
session commitAndBegin; disconnect
Running both images on my laptop, here is the output from the client:
committing 1000... 0:00:00:12.616
committing 2000... 0:00:00:17.148
committing 3000... 0:00:00:14.84
committing 4000... 0:00:00:16.814
committing 5000... 0:00:00:15.773
committing 6000... 0:00:00:13.942
committing 7000... 0:00:00:17.519
committing 8000... 0:00:00:15.909
committing 9000... 0:00:00:14.148
committing 10000... 0:00:00:19.148
committing 11000... 0:00:00:13.307
committing 12000... 0:00:00:17.941
committing 13000... 0:00:00:15.093
committing 14000... 0:00:00:15.174
committing 15000... 0:00:00:19.248
committing 16000... 0:00:00:14.536
committing 17000... 0:00:00:17.181
committing 18000... 0:00:00:16.068
committing 19000... 0:00:00:14.807
committing 20000... 0:00:00:20.409
committing 21000... 0:00:00:14.279
committing 22000... 0:00:00:20.054
committing 23000... 0:00:00:16.961
committing 24000... 0:00:00:14.754
committing 25000... 0:00:00:17.994
I also observed the memory consumption of both images. It remained within about 1 MB of the original starting allocation through the entire test.
I, too, am running on Windows XP, albeit with just 768 MB RAM.
Do the above scripts work for you too? Or, can you modify the scripts in some way that exposes the apparent memory leak?
I will be happy to fix whatever bugs we can find through this process. Or, if you have more detailed information about your tests I will do my best to interpret what the problem may be.
Regards, Chris
PS - On the server, you may also run this script to see which "Ma" classes may be staying referenced and therefore causing image growth.
((MaObject allSubclasses collect: [ : each | each -> each instanceCount ]) asSortedCollection: [ : a : b | a value > b value ])
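Note that instance counts include objects that are dead but not yet reclaimed, so it may help to force a full garbage collection first. A small sketch (using only `Smalltalk garbageCollect` and the same `instanceCount` message as above; `MaHashIndexRecord` is just one example class to check):

	"Force a full GC so the counts reflect only live objects."
	Smalltalk garbageCollect.
	"Then count instances of a suspect class, e.g.:"
	MaHashIndexRecord instanceCount

If the count drops dramatically after the collect, the instances were ordinary garbage rather than a leak.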
Also, there is a Swiki page about memory utilization.
----- Original Message ---- From: Martin Beck martin.beck@hpi.uni-potsdam.de To: magma@lists.squeakfoundation.org Sent: Friday, January 19, 2007 6:19:19 AM Subject: How to optimise memory consumption
Hey, we're running around 100 tests which work with the database (e.g., creating the whole database for testing, etc.). Even though we don't use our session pooling and we clean up all MagmaSessions on the client afterwards, the server image grows to around 250 MB, while it started at 30 MB...
Any hints for keeping memory consumption down? Running on Windows XP with 2 GB RAM...
Regards, Martin _______________________________________________ Magma mailing list Magma@lists.squeakfoundation.org http://lists.squeakfoundation.org/mailman/listinfo/magma
Chris Muller wrote:
Hi Martin, it's hard to say why your image might be growing. So that we may work on the problem from a common perspective, I created three test scripts that attempt to recreate it.
Running both images on my laptop, here is the output from the client:
My output is essentially the same, although I'm running on a P4 at 3.2 GHz...
committing 1000... 0:00:00:10.363
committing 2000... 0:00:00:14.522
committing 3000... 0:00:00:13.064
committing 4000... 0:00:00:13.465
committing 5000... 0:00:00:14.436
committing 6000... 0:00:00:14.197
committing 7000... 0:00:00:11.316
committing 8000... 0:00:00:18.198
committing 9000... 0:00:00:10.854
committing 10000... 0:00:00:17.714
committing 11000... 0:00:00:12.008
committing 12000... 0:00:00:14.227
committing 13000... 0:00:00:15.063
committing 14000... 0:00:00:14.646
committing 15000... 0:00:00:13.657
committing 16000... 0:00:00:16.76
committing 17000... 0:00:00:10.266
committing 18000... 0:00:00:19.322
committing 19000... 0:00:00:10.586
committing 20000... 0:00:00:16.461
committing 21000... 0:00:00:12.833
committing 22000... 0:00:00:14.886
committing 23000... 0:00:00:14.461
committing 24000... 0:00:00:15.305
committing 25000... 0:00:00:12.278
I also observed the memory consumption of both images. It remained within about 1 MB of the original starting allocation through the entire test.
The script works and gives me a memory-consumption increase of about 5 MB if I repeat the client-side code ten times. However, it seems to remain there.
However, in our test image, after running 100 tests that insert and delete data, the server has around 31,000 instances of MaHashIndexRecord (found with the script you provided). Does that give you any hint? We'll have to dig further into the code tomorrow to provide you with a modified version of your scripts so you can test it... :)
Another question: Is it possible to limit the answers of a MagmaCollectionReader, just like with "limit" in SQL? The problem is that I want only the first 10 results of a query, but it would return about 1500. If I use MagmaCollectionReader>>asArray:, it will call lastKnownSize, which in turn executes the query on the server side (if I am right).
Happy coding, Martin
Hi Martin,
However, in our test image, after running 100 tests that insert and delete data, the server has around 31,000 instances of MaHashIndexRecord (found with the script you provided). Does that give you any hint?
That's possibly high, but it depends on how many MagmaCollections and indexes you have and, in particular, on whether you are using a large key such as 256 bits. It also depends on whether a garbage collection has occurred (or am I to assume that number is *post* "Smalltalk garbageCollect"?). Instances of this class are managed solely by MaHashIndex; they cache the path of MaHashIndexRecords from the last access through the tree of records. There is a very detailed description of MaHashIndex, and even a detailed PDF of its format and operation, on the Swiki pages. This code has been rock-solid for years, so I am doubtful there is a memory leak here.
We'll have to further dig into some code tomorrow, to provide you with a modified version of your scripts so you can test it... :)
Great; if I can see it, I can probably find the cause very quickly.
Another question: Is it possible to limit the answers of a MagmaCollectionReader just like with "limit" in sql?
myReader pageSize: 10
But I would use at least 50.
The problem is that I want only the first 10 results of a query, but it would return about 1500.
Yeah, I wish I knew a way to integrate where-clauses with terms that are not part of the available indices. It requires faulting the object to the client anyway, so I leave it to the Magma user.
But if I use MagmaCollectionReader>>asArray:, it will call lastKnownSize, which in turn executes the query on the server side (if I am right).
Yes, if the query uses "or" it will require a full enumeration of the result set to determine the size. You just want the first 10 and don't care about the size. Yeah, that would be nice; I'll add it to my list to investigate.
I hope r39 is a little better for you; you can now have MagmaSessions referenced in your domain.
Regards, Chris