[Seaside] Apache frontend for Squeak, mod_scgi ?

Avi Bryant avi at beta4.com
Sat Sep 27 00:28:45 CEST 2003


> Apache hello25k.html      550 rps   totalCPU 27%
> Komanche hello25k.html    8.5 rps   totalCPU 53%
> A2-mod_python args.py     155 rps   totalCPU 12%
> A2-mod_python mptest.py   348 rps   totalCPU 14%
> Komanche dynamic***1      8.3 rps   totalCPU 48%
> Komanche dynamic***2       29 rps   totalCPU 62%

FWIW: you don't mention what kind of machine you're on.  I'm on an Athlon
1.4GHz, and for a "Hello World" response, I see these kinds of numbers:

Apache (warming up):        ~300 rps
Apache (with file cached): ~1500 rps
Comanche 6:                 ~300 rps
KomServices:               ~1500 rps

The last uses the KomServices package to build a dummy server that always
spits out a "Hello World" HTTP response no matter what you send it - it
reads the request from the socket but doesn't do anything with it.  This
is meant as the theoretically optimal Squeak server - SCGI and the like
will have it as an (unrealistic) upper bound.
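
If you want to reproduce that kind of dummy server yourself, something
like the following should do it (a rough sketch against Squeak's Socket
class, written from memory rather than lifted from KomServices - the
exact selectors may differ in your image):

    | listener client crlf response |
    crlf := String with: Character cr with: Character lf.
    response := 'HTTP/1.0 200 OK', crlf,
        'Content-Type: text/plain', crlf,
        'Content-Length: 11', crlf,
        crlf, 'Hello World'.
    listener := Socket newTCP.
    listener listenOn: 8080 backlogSize: 10.
    [true] whileTrue: [
        client := listener waitForAcceptFor: 60.
        client ifNotNil: [
            client receiveData.        "read the request, then throw it away"
            client sendData: response.
            client closeAndDestroy]]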

I'm not going to draw many conclusions here, because I think these kinds
of micro-benchmarks are pretty pointless - they don't tell you *anything*
about how a real application will perform.  I'll just point out that the
difference between 300 rps and 1500 rps (which coincidentally happen to
be the only two numbers that appeared in my timings) is 2.7 ms per
request.  Those 3 ms are, essentially, the overhead that Comanche itself
imposes (whether over Apache or over some hypothetical super-optimized
SCGI server).
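
Spelled out, that number is just the difference in per-request time:

    1/300 s  = 3.33 ms per request   (warming-up Apache, Comanche 6)
    1/1500 s = 0.67 ms per request   (cached Apache, KomServices)
    3.33 ms - 0.67 ms ~= 2.7 ms of Comanche overhead per request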

So before you do too much more benchmarking, think about how much
difference those 3 ms will or won't make in the big picture of your
application.  It's not an insignificant number - it *could* make a
difference - but it wouldn't take a very complex database query to
overshadow it.

If you're really performance-obsessed, you might also want to benchmark
Squeak and Python on the other 99% of the work your application does when
it's not pushing bytes across sockets.  It at least used to be that Python
had a very, very slow interpreter, and it wouldn't surprise me if Squeak
pulled way ahead on a complex dynamic page, even given mod_python's
apparent 3 ms head start.
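
One quick way to get that kind of number on the Squeak side is to time a
block of representative work - the workload below is a made-up stand-in,
so substitute whatever your pages actually do:

    | ms |
    ms := [1 to: 100000 do: [:i |
        (WriteStream on: String new)
            nextPutAll: 'row '; print: i; cr]] timeToRun.
    Transcript show: 'elapsed: ', ms printString, ' ms'; cr

timeToRun answers milliseconds, so a few runs of this against roughly
equivalent Python code will tell you more about where your time really
goes than any hello-world test.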

But personally I would recommend ditching the benchmarks and starting to
write code.  All you have to do to make things get faster is wait around
for Moore's law, but the app won't write itself.

Cheers,
Avi


