[Seaside] Apache Load Balancing with Seaside

jtuchel at objektfabrik.de jtuchel at objektfabrik.de
Sat Nov 12 09:01:11 UTC 2016


Hi there,

we're finally at a stage where load balancing is not only needed but 
also needs to be a bit more fine-grained.

So far, we've used Apache's mod_proxy_balancer with the byrequests 
balancing scheme and a simple sticky-session cookie (Set-Cookie), but 
this is not really distributing load evenly, because it simply does a 
round robin based on requests. The way I understand it, this just sends 
every nth request to another backend, unless the sticky session cookie 
is already there.
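
(For context, the setup is essentially the standard mod_proxy_balancer 
cookie recipe, roughly along these lines; ports, route names and the 
application path are placeholders, and the Header line needs 
mod_headers:)

  # Set a routing cookie whenever the balancer picks or changes a route
  Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

  <Proxy "balancer://seasidecluster">
      BalancerMember "http://127.0.0.1:8081" route=img1
      BalancerMember "http://127.0.0.1:8082" route=img2
      # round robin on the number of requests, sticky per cookie
      ProxySet lbmethod=byrequests stickysession=ROUTEID
  </Proxy>

  ProxyPass        "/myapp" "balancer://seasidecluster/myapp"
  ProxyPassReverse "/myapp" "balancer://seasidecluster/myapp"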

This seems to have multiple problems in the context of Seaside:

1. Each page render consists of dozens of requests, and at least 2 in 
the case of the usual redirect/render cycle.

2. So a request != a new Seaside session, and is therefore not a good 
basis for distributing load.

3. The cookie used as the stickiness marker seems to be browser-wide: 
every new session started in the same browser gets directed to the same 
Seaside image.

This is starting to cause problems in our scenario, because there is 
always one image (the first one in the httpd.conf) that handles most of 
the load, no matter how many images we add.

I think I understand that URL-based stickiness in the load balancer 
would fit Seaside much better. What's even better is that Seaside 
already uses the _s parameter to identify a session.

So at first sight, I think it might be a good idea to use the _s 
parameter provided by Seaside as the URL parameter that is also used for 
session stickiness.

This brings up a few questions:

* Is this a good idea?
* Has anybody tried?
* How to configure this?
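
To make the question concrete, here is a rough, untested sketch of what 
I imagine the configuration could look like (hypothetical ports, routes 
and path again). It assumes mod_proxy_balancer also finds the sticky 
parameter in the query string, and note that the balancer only honours 
stickiness if the sticky value carries a ".<route>" suffix, which a 
stock _s value does not:

  <Proxy "balancer://seasidecluster">
      BalancerMember "http://127.0.0.1:8081" route=img1
      BalancerMember "http://127.0.0.1:8082" route=img2
      # key stickiness on Seaside's _s parameter instead of a cookie;
      # the images (or a rewrite rule) would have to append ".img1" /
      # ".img2" to the value so the balancer can map it back to a member
      ProxySet lbmethod=byrequests stickysession=_s
  </Proxy>

  ProxyPass        "/myapp" "balancer://seasidecluster/myapp"
  ProxyPassReverse "/myapp" "balancer://seasidecluster/myapp"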

The most important question, however, is this: would this be any better 
with respect to workload distribution? I mean, how could we make sure 
the very first request gets directed to a new image? My fear is that in 
the end this will still have the same problem: the initial sessions will 
still mostly be created by the same image all of the time. The only 
difference might be that we use URL parameters instead of cookies, spend 
nights testing, and end up with the same problems...

Before you suggest using Squid or the like, let's think about the basic 
problem: does Squid do things differently? What mechanism does it have 
that is so much better than counting requests, measuring bytes 
transferred, or the like?
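
As far as I can tell, Apache itself already offers exactly those 
alternatives via its lbmethod modules, so maybe the answer is not a 
different proxy but a different method. A sketch (Apache 2.4 module 
names, reusing the hypothetical balancer from above):

  # Apache 2.4 ships each balancing method as its own module
  LoadModule lbmethod_bytraffic_module  modules/mod_lbmethod_bytraffic.so
  LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so

  <Proxy "balancer://seasidecluster">
      BalancerMember "http://127.0.0.1:8081" route=img1
      BalancerMember "http://127.0.0.1:8082" route=img2
      # byrequests = round robin on request count (what we have now)
      # bytraffic  = weight members by transferred bytes
      # bybusyness = send new work to the member with the fewest
      #              requests currently in flight
      ProxySet lbmethod=bybusyness stickysession=ROUTEID
  </Proxy>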

So how do people solve this problem? Any ideas, hints, or experiences 
are greatly appreciated.



Joachim





-- 
-----------------------------------------------------------------------
Objektfabrik Joachim Tuchel          mailto:jtuchel at objektfabrik.de
Fliederweg 1                         http://www.objektfabrik.de
D-71640 Ludwigsburg                  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0         Fax: +49 7141 56 10 86 1


