My first Magma experience (Now including GOODS)
Chris Muller
chris at funkyobjects.org
Sun Apr 3 20:52:06 UTC 2005
Daniel, you have repeated a classic performance comparison between relational
and object databases. Using an ultra-simple model and a
"crunch-through-a-big-table" style test, all the benefits granted by an ODBMS
are minimized, while the performance benefits of slamming data, not objects,
into a table are emphasized.
With ODBMSs, "transparency" is granted in the sense of not having to think in,
or map to, tables; but you still need to account for the database when it comes
to high-volume processing. One thing is traded for another; it's not a free
lunch.
> What I tried to do here was simply to iterate through all the
> FHKC834Entry objects already stored in the database and updated and
> committed one attribute every 100th object. It took 29 minutes on the
> same computer. This is even worse timing as the other one. I guess I
> could live with a slow bulk load process, but random updates to a large
> collection shouldn't take this long.
This particular test, and the way it was run, exemplifies what I would consider
very close to worst-possible-case performance with Magma. You're materializing
nearly the entire dataset, object by object, surely building up a huge readSet,
and then making periodic commits without stubbing (or, in the GOODS test,
without using WriteBarrier).
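For what it's worth, the usual way to keep the readSet from growing unbounded
during a bulk pass is to stub objects back out after you're done with them and
keep each commit small. Here is a minimal sketch of that shape; it assumes a
connected MagmaSession in `session`, and the collection, attribute, and exact
stubbing selector are placeholders to check against the tuning page, not a
verbatim API:

```smalltalk
"Hedged sketch: update every 100th object in a large collection
 while releasing materialized objects as we go."
| count |
count := 0.
entries do: [:each |
	count := count + 1.
	(count \\ 100) = 0 ifTrue: [
		"Keep each commit small; only the touched object is dirty."
		session commit: [each status: #processed]].
	"Assumed stubbing call: let Magma reclaim the materialized object
	 so the readSet stays small across the whole pass."
	session stubOut: each]
```

The point is the combination: small commits bound the write side, and stubbing
as you go bounds the read side.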
ODBMSs give you more transparency than relational databases, so they have to do
a little more work. High-volume processing requires profiling; I didn't see
that in your post, nor any consideration of the performance-tuning tools
documented at http://minnow.cc.gatech.edu/squeak/2985.
Still, while I'm sure what you've done can be improved, I don't know whether
your performance requirements can be met given the apparent volume you are
dealing with. Magma is definitely in an experimental state. Its performance
will improve with time, but more conventional technologies might serve you
better for your high-volume project, as Avi suggested.
One way to help determine this may be to measure the "best-case" performance
and see whether even that is acceptable. If not, you can stop right there;
there's no point in going any further. For Magma, check out MagmaBenchmarker
(part of the Magma tester package).
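Running it is a matter of evaluating a couple of lines in a workspace. The
exact entry point may differ between versions of the tester package, so treat
the selectors below as hypothetical placeholders and browse MagmaBenchmarker
for the real ones:

```smalltalk
"Hypothetical invocation: run the benchmark suite and show the
 report (the output format is what's pasted below).
 Browse the Magma tester package for the actual selectors."
| benchmarker |
benchmarker := MagmaBenchmarker new.
benchmarker run.
Transcript showAll: benchmarker report
```

Comparing your own numbers against a run like the one below tells you quickly
whether the best case is anywhere near your requirements.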
For your convenience, here are my results from a recent run on my 1.3GHz
laptop.
- Chris
Name: Magma tester-cmm.90
Author: cmm
Time: 24 March 2005, 10:44:35 pm
UUID: 6ece4a6c-242f-294e-9a95-2915c421bab6
Ancestors: Magma tester-cmm.89
- Introduction of MagmaArray.
- Performance improvements.
'The date is 24 March 2005 10:35:08 pm
Hardware Details:
computer : IBM R40 laptop
cpu : Pentium-M
speed : 1.3GHz
memory : 768MB
disk : internal HD
OS Details:
osVersion : NT
platformName : Win32
platformSubtype : IX86
vmVersion : Squeak3.7 of ''4 September 2004'' [latest update: #5989]
imageName : C:\Development\Chris\Development\Squeak\current3.7.image
Image Details:
version : Squeak3.7
lastUpdate : 5989
Code Package Details (from Monticello):
Name: Magma tester-cmm.87
Author: cmm
Time: 24 March 2005, 10:29:47 pm
UUID: 9b67b624-a90e-fe47-ae14-fcdca9dbd02c
Ancestors: Magma tester-cmm.86
MagmaSession Details:
isLocal : false
Benchmarker Details:
thousands : 1000
Benchmarks:
---
readTests
peakRefreshRate : 202.5189924030388 per second.
singleObjectRead : 226.3547290541892 per second.
oneThousandElementArrayRead : 106.1575369852059 per second.
oneThousandElementArrayOfObjectsRead : 10.5619768832204 per second.
oneMillionObjectPointersRead : 8 seconds.
oneThousandLevelsDeepRead : 5.97252637865817 per second.
---
writeTests
peakCommitRate : 17.18281718281718 per second.
oneThousandElementArrayCommit : 15.63737133808393 per second.
a1001BufferCommit : 7.48326112642773 per second.
aOneMillionBufferCommit : 53 seconds.
---
magmaArrayTests
getMagmaArraySize : 97.3790124668695 per second.
updateThousandsOfMagmaArrayElements : 157.5054967019788 per second.
---
magmaCollectionTests
addThousandsOfObjectsTenAtATime : 18 seconds.
addThousandsOfObjectsAtOnce : 12 seconds.'