[Seaside] Concurrency question

goran at krampe.se
Mon Feb 18 18:40:43 UTC 2008


Hi!

Jeffrey Straszheim <jstraszheim at comcast.net> wrote:
> James Foster wrote:
> > In GemStone each VM is also single-threaded but you can have multiple 
> > VMs attached to the same object space. Here the typical approach is to 
> > use Apache to round-robin requests to separate VMs so that blocking is 
> > less of an issue. See http://seaside.gemstone.com and 
> > http://gemstonesoup.wordpress.com for more info.
> That sounds a little more complex than I want to deal with for my first 
> Seaside app (baby steps for me).  What I want to ensure is that while a 
> long running thread is handling one HTTP request, other arriving HTTP 
> requests will still be honored -- they are not in Rails, for instance, 
> which *requires* you to run multiple instances to have any hope of low 
> lag for clients (and even then, if all of your instances are busy, 
> everyone else waits).

Let me try to give a straight and simple answer for *Squeak*, hopefully
factually correct.

Seaside in Squeak normally runs using KomHttpServer. KomHttpServer
spawns the handling of each HTTP request in a Squeak Process of its own
(a green thread). This means that yes, Seaside is fully concurrent. The
low level Socket code in Squeak is asynchronous, and while it could be
improved (I think Dan Shafer has written quite a lot of details about
that on the Squeak Swiki) it works quite well IMHO.
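
To make that a bit more concrete, here is a minimal sketch of the
fork-per-request idea (the #handleRequest: selector and the listener
variable are placeholders, not the actual KomHttpServer code): each
accepted connection is handed to a freshly forked Squeak Process, so a
slow handler never keeps the listener from accepting the next
connection.

    "Hypothetical accept loop; listener is an already listening Socket
     and #handleRequest: is a made-up selector standing in for the real
     request handling code."
    [[true] whileTrue: [
        | connection |
        connection := listener waitForAcceptFor: 10.
        connection ifNotNil: [
            [self handleRequest: connection]
                forkAt: Processor userBackgroundPriority]]] fork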

BUT... if your response handling code makes calls that take a lot of
time through FFI - say, using the current ODBC binding - then those
calls will block the whole VM. Unless you do that, then yes, it is
concurrent.
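
The difference is easy to see in the image: a pure Smalltalk wait only
suspends its own Process, whereas a blocking external call never hands
control back to the scheduler. A tiny illustration, where the ten
second delay just stands in for "slow work":

    "This only blocks the forked Process; every other Process,
     including other request handlers, keeps running in the meantime."
    [(Delay forSeconds: 10) wait.
     Transcript show: 'slow work done'; cr] fork.

An FFI call that took the same ten seconds would instead keep the whole
VM inside the external function until it returns, so every other green
thread - and thus every other request - waits with it.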

Now, as a side note: setting up multiple images behind HAProxy was
quite simple; I just did that for a customer.
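
For reference, the setup I mean is roughly the following (the ports,
server names and number of images are made up for the example; a real
Seaside setup also wants some form of session affinity, since a session
lives in one particular image):

    # Hypothetical HAProxy sketch: spread incoming HTTP requests
    # round-robin over two Seaside images on local ports.
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend seaside_front
        bind *:8000
        default_backend seaside_images

    backend seaside_images
        balance roundrobin
        server image1 127.0.0.1:8081 check
        server image2 127.0.0.1:8082 check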

And going even more off topic - given Igor's new Hydra VM we could
probably create a more scalable SMP-capable solution. :)

regards, Göran
