[Seaside] Squeak async i/o performance, was Re: HTTP Performance

Stephen Pair stephen at pairhome.net
Thu Nov 20 14:36:01 CET 2003

Adjusting the backlog size has almost no effect...keep in mind that:

 requests/second ~= socket connections / second

They are not equivalent because the keep-alive feature of HTTP 1.1 
allows multiple requests to be sent on a single socket.  However, 
adjusting the backlog and turning off keep-alive still doesn't seem to 
affect rps very much.  Also, I've recently seen higher rps than 50, so I 
don't think that 50 rps is somehow magical.  However, as best I can 
tell, there is still a large amount of idle time while a benchmark is 
running against a comanche server.  Outbound socket connections suffer 
as well.   Profiling about 2 seconds worth of ab benchmarking shows that 
around 70% of the time is spent waiting on semaphores...this tells me 
that either those semaphores are not getting signaled in a timely 
fashion, or there is an excessive latency between the time a semaphore 
is signaled and when a waiting process wakes up.  These are semaphores 
being signaled from the VM in OS threads that deal with socket 
activity.  Could it help if we explicitly yield the processor to the 
interpreter thread every time an external semaphore is signaled?  Or, 
perhaps we could make better decisions about which Squeak processes need 
to run after processing the external semaphores that are marked for 
signaling?

- Stephen

Bruce ONeel wrote:

>Avi Bryant <avi at beta4.com> wrote:
>>Like Stephen, I'm intrigued by your 50rps figure for Comanche - that's 
>>the number that I always seem to come up with as well.  There *must* be 
>>something throttling that, and we need to get to the bottom of it.
>I poked at this today and I don't quite get the math to work.
>I looked at unix Squeak 3.4-1 so that might distort things.
>The basis of the clock in interp.c seems to be in millisecs.  I'm
>getting the idea from comments and the definition of ioiMsecs in 
>sqXWindow.c.  So, given that:
>checkForInterrupts (interp.c) is called approx every 1000 byte codes
>executed.  If it hasn't been 3 millisecs since the last call, this
>1000 is scaled up.  If it's been more, it is scaled down.  We seem
>to be trying to check for interrupts every 3 millisecs.  This is
>controlled by interruptChecksEveryNms.
>Once this happens, if it has been 500 millisecs (this seems high)
>since the last time we called checkForInterrupts, we call
>ioProcessEvents().  This comes from the bits in checkForInterrupts():
if (now >= nextPollTick) {
	ioProcessEvents();
	nextPollTick = now + 500;
}
>ioProcessEvents (sqXWindow.c)  calls aioPoll.
>aioPoll (aio.c) calls select and then calls the aio handler for 
>each descriptor which has pending i/o waiting.
>sqSocketListenOnPortBacklogSize has set up the socket with a listen
>and then enabled an aio handler.
>TcpListener has created a socket with a backlog default of 10.
>This implies that every 500 millisecs (ie, 2 times per sec) we can 
>get 10 accepts, for 20 connections per sec while processing
>is happening.
>On top of that events are polled when we call 
>relinquishProcessorForMicroseconds which is called by the idle
>process.  This would be called with a wait of 1 millisec 
>(1000 microsecs).  I'm not sure how often this is called and it
>depends on the load in Squeak as well.
>I guess then that the additional 30 connections per sec
>would be accepted during the idle process running.
>It would seem then that a higher backlog in TcpListener might
>give higher connections per second.  If someone has a test bed
>set up which shows the 50 connections per sec, what happens if
>you doIt
>TcpListener backLogSize: 1000
>do things work?  Do you get more connections per sec?
>I'm obviously not sure the above is correct.  With luck someone
>with a chance of knowing will speak up :-)
>Seaside mailing list
>Seaside at lists.squeakfoundation.org
