How to optimise memory consumption

Martin Beck martin.beck at hpi.uni-potsdam.de
Tue Jan 23 16:07:30 UTC 2007


Chris Muller schrieb:
> Hi Martin, it's hard to say why your image might be growing.  So that we may try to work on the problem from a common perspective, I created three test scripts that attempt to recreate the problem.

> Running both images on my laptop, here is the output from the client:

My output is essentially the same, although I'm running on a P4 at 3.2 GHz...

committing 1000... 0:00:00:10.363
committing 2000... 0:00:00:14.522
committing 3000... 0:00:00:13.064
committing 4000... 0:00:00:13.465
committing 5000... 0:00:00:14.436
committing 6000... 0:00:00:14.197
committing 7000... 0:00:00:11.316
committing 8000... 0:00:00:18.198
committing 9000... 0:00:00:10.854
committing 10000... 0:00:00:17.714
committing 11000... 0:00:00:12.008
committing 12000... 0:00:00:14.227
committing 13000... 0:00:00:15.063
committing 14000... 0:00:00:14.646
committing 15000... 0:00:00:13.657
committing 16000... 0:00:00:16.76
committing 17000... 0:00:00:10.266
committing 18000... 0:00:00:19.322
committing 19000... 0:00:00:10.586
committing 20000... 0:00:00:16.461
committing 21000... 0:00:00:12.833
committing 22000... 0:00:00:14.886
committing 23000... 0:00:00:14.461
committing 24000... 0:00:00:15.305
committing 25000... 0:00:00:12.278

> I also observed the memory consumption of both images.  It remained within about 1MB of their original starting allocation through the entire test.

The script works and gives me a memory consumption of about 5 MB if I
repeat the client-side code ten times. However, it seems to remain at
that level.
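For what it's worth, here is roughly how I'm checking from inside the
image (a minimal sketch in plain Squeak, nothing Magma-specific;
Smalltalk garbageCollect answers the bytes of free space after a full
collection):

    "Force a full GC and print the free space it reports; a crude way
     to watch whether the image keeps growing between runs."
    Transcript show: 'bytes free after GC: ',
        Smalltalk garbageCollect printString; cr.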

However, in our test image, after running 100 tests that insert and
delete data, the server has around 31,000 instances of
MaHashIndexRecord (found with the script you provided). Does that give
you any hint? We'll have to dig further into the code tomorrow to
provide you with a modified version of your scripts so you can test
it... :)
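I assume your script does something along these lines (a plain-Squeak
sketch, not your actual code):

    "Collect garbage first so only live instances are counted,
     then count the surviving MaHashIndexRecord instances."
    Smalltalk garbageCollect.
    Transcript show: 'MaHashIndexRecord instances: ',
        MaHashIndexRecord allInstances size printString; cr.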

Another question: is it possible to limit the answers of a
MagmaCollectionReader, just like with LIMIT in SQL? The problem is that
I want only the first 10 results of a query, but it would return about
1500. If I use MagmaCollectionReader>>asArray:, it will call
lastKnownSize, which in turn executes the query on the server side (if
I understand correctly).
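To illustrate what I'm after, a hypothetical sketch (the collection
'people' and the query block are made up, and I'm assuming the reader
supports #do: with lazy, page-wise fetching, which may well be wrong):

    "Hypothetical: stop enumerating after the first 10 results instead
     of forcing the whole result set via asArray:. This only helps if
     the reader fetches lazily; otherwise it may still run the full
     query on the server."
    | reader firstTen |
    reader := people where: [:each | each age > 30].
    firstTen := OrderedCollection new.
    reader do: [:each |
        firstTen add: each.
        firstTen size = 10 ifTrue: [^firstTen]].
    ^firstTen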


Happy coding,
Martin
