Mac VM and Open Transport and performance numbers.

John M McIntosh johnmci at smalltalkconsulting.com
Sat Jun 10 20:16:52 UTC 2000


For the curious: one of my goals when writing the new software was to
support a high-performance Squeak-based TCP/IP server, and that drove some
of the design decisions in the new C code. Once the code was written, the
question was how to test it, since I didn't actually have a high-performance
TCP/IP server to run on it; the intent is there, just no code.

To solve this I wrote a little application that builds a listening socket
and spins off a socket to handle each incoming request. The handler picks a
response at random from a collection of canned choices (think static HTML),
sends it to the client, and then closes the socket, much like many HTTP
servers do today. Multiple Smalltalk processes and shared queues are used to
do this; the sketch below shows the general shape. (I'll ship the actual
application in a day or so.)
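
This is a minimal sketch, not the actual application. It assumes the Socket
protocol with the new listen/accept support (listenOn:backlogSize: and
waitForAcceptUntil:) plus a SharedQueue feeding a few worker processes:

   | responses queue listener |
   responses := (1 to: 20) collect:
      [:i | String new: 20000 atRandom withAll: $x].
   queue := SharedQueue new.

   "Listener process: accept incoming connections and queue them."
   [listener := Socket newTCP.
    listener listenOn: 8080 backlogSize: 8.
    [true] whileTrue: [
       | client |
       client := listener waitForAcceptUntil: (Socket deadlineSecs: 60).
       client isNil ifFalse: [queue nextPut: client]]]
          forkAt: Processor highIOPriority.

   "Worker processes: send one randomly chosen response, then close."
   4 timesRepeat: [
      [[true] whileTrue: [
         | client |
         client := queue next.
         client sendData: (responses at: responses size atRandom).
         client closeAndDestroy]] fork].

The queue is what keeps the listener from ever blocking on a slow client;
it just keeps accepting while the workers drain the backlog.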

I was interested in how many connections per second we could handle, how
big the backlog queue got, how much data was shipped, how long servicing
each request took, and whether we dropped anything.
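
Timing each request needs nothing fancier than the millisecond clock: stamp
the connection when the listener accepts it, and take the difference after
the worker closes it. A sketch along the lines of the code above (entry,
client, and elapsed are just illustrative names):

   "In the listener, stamp each connection as it is accepted:"
   queue nextPut: (Array with: client with: Time millisecondClockValue).

   "In a worker, after sending the response and closing the socket:"
   entry := queue next.
   client := entry first.
   client sendData: response.
   client closeAndDestroy.
   elapsed := Time millisecondClockValue - entry last.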

I ran this application on a 500MHz PowerBook and used various clients on a
10Mbit Ethernet segment to make continuous requests to it.

OK, the numbers:

I was limited in the number of clients, but I served up 48 connections per
second sustained; that's about 4.1 million connects per day. Peak was 112
connections per second, with 21 requests outstanding at any one point on
average. One of my tests moved 12.59 GB of data, or about 3.815 Mbits per
second, with an average of 9,887 bytes per response (the choices ranged
from a few bytes to 20,000). In some other test runs with more clients I
got 58 connections per second (that's 5 million per day) with an average
bit rate of 4,706,028 bits per second. One case using a 100Mbit LAN pushed
over 12Mbit/sec at peak. You should be aware that I could browse around in
the server image while this was going on without any response-time issues.
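
For anyone checking the arithmetic, the per-day and bit-rate figures work
out as follows (Smalltalk doits; the third line is the sustained rate times
the average response size):

   48 * 86400      "4147200 - about 4.1 million connects per day"
   58 * 86400      "5011200 - about 5 million connects per day"
   48 * 9887 * 8   "3796608 bits/sec - close to the measured 3.815 Mbits"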

Service request times:
I took a random sample of 25,945 connections across 7 hours. The times
below run from the accept on the listening socket to the point where we
have sent the data and closed the socket:

75.00% completed in 100ms or less
84.36% completed in 500ms or less
89.63% completed in 1000ms or less
95.51% completed in 2000ms or less
98.05% completed in 5000ms or less
99.16% completed in 8000ms or less
99.65% completed in 10000ms or less
with one straggler at 17 seconds and one at 24 seconds.

Please note that the client software actually forked off 7 connections per
iteration through its loop. I think some of the clients were CPU-starved,
and some buffer-flow logic in the code tends to skew the times higher, but
the waitForDataUntil: deadline was set to 10 seconds, so nearly all the
clients were serviced within one iteration of their read loop.
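
That read loop is roughly this shape (again a sketch, not the actual client
code; the server address is a placeholder):

   | sock buffer total done |
   sock := Socket newTCP.
   sock connectTo: (NetNameResolver addressFromString: '10.0.1.2')
        port: 8080.
   sock waitForConnectionUntil: (Socket deadlineSecs: 10).
   buffer := String new: 4000.
   total := 0.
   done := false.
   [done] whileFalse: [
      (sock waitForDataUntil: (Socket deadlineSecs: 10))
         ifTrue: [total := total + (sock receiveDataInto: buffer)]
         ifFalse: [done := true]].  "deadline passed, or the server closed"
   sock closeAndDestroy.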

In a 7-hour run I did not drop any data according to the client logs, and I
had one read failure: one of my Linux VMs locked up! Since I don't have
anything to compare this to I can't say how good it is. Mind you, it would
appear a midrange Squeak server could saturate a T1 line.

Oh, and the interesting thing was that this testing pointed out a number of
issues with the external semaphore signaling table, which of course are
fixed in the latest updates.


--
===========================================================================
John M. McIntosh <johnmci at smalltalkconsulting.com> 1-800-477-2659
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
===========================================================================
Custom Macintosh programming & various Smalltalk dialects
PGP Key: DSS/Diff/46FC3BE6
Fingerprint=B22F 7D67 92B7 5D52 72D7  E94A EE69 2D21 46FC 3BE6
===========================================================================




