[Seaside] Re: Concurrent requests from multiple sessions

David Carlos Manuelda stormbyte at gmail.com
Fri Mar 6 10:49:39 UTC 2015


wilwarin wrote:

> Hi all,
> 
> Currently we are developing a web application in Seaside, and it is
> necessary for us to work in VAST. The application runs as a Windows
> service, and for a few days we have been facing the following issue:
> 
> - At one moment there are two users (let's call them 'A' and 'B') logged
> in to the application, so we have two different sessions.
> - 'A' requests a page with a long list of objects obtained from DB2, so it
> takes a number of seconds to get the results.
> - Less than one second after 'A''s request, 'B' requests another page, for
> instance a simple static page.
> - For as long as 'A''s request is being handled, 'B''s browser window
> freezes and waits for the number of seconds mentioned above.
> 
> We searched really a lot, but the results are still not what we would
> expect. This issue makes us confused, because in the future the
> application should serve hundreds of users with very similar combinations
> of requests.
> 
> We didn't know where our problem lay, so we tried a similar test with a
> single page containing a difficult calculation. Then we tried the same in
> Pharo to rule out a problem in VAST. Both gave the same results.
> 
> Is there anything we are missing? What should we do to achieve parallel
> (or at least better) processing of requests?
> 
> Thank you very much for your responses.
> 
> Ondrej
> 
> 
> 
> --
> View this message in context:
> http://forum.world.st/Concurrent-requests-from-multiple-sessions-tp4809929.html
> Sent from the Seaside General mailing list archive at Nabble.com.
Since Pharo is green-threaded (that is, all Smalltalk processes run on a 
single OS thread), a very expensive operation can make the whole image 
unresponsive until it finishes.
Try 9999999999 factorial and you will see. For the same reason, even if you 
run [ 9999999999 factorial ] fork, you will still experience some kind of 
lag.
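As a rough illustration (this is just a sketch; the process name is made up), you can fork the expensive work at a lower priority so that request handling preempts it more often, but everything still shares that one OS thread:

```smalltalk
"fork creates a green thread (a Smalltalk Process), not an OS thread,
 so the whole image still runs on a single CPU core."
[ 9999999999 factorial ]
    forkAt: Processor userBackgroundPriority
    named: 'expensive report'.
"Higher-priority processes (e.g. the Seaside request handler) can
 preempt this one between bytecodes, but a long-running primitive
 will still block everything until it returns."
```

This softens the lag for other users but does not remove it, which is why scaling out to multiple images is the usual answer.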

That said, and as also discussed in another thread of mine, I would suggest 
running several images behind a load balancer (nginx, for example); my 
suggestion is to use one image per virtual core to achieve the best 
performance possible.
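As a sketch (the ports and image count are just examples), a minimal nginx configuration balancing four Seaside images could look like this. Note that Seaside keeps session state inside the image, so you need session affinity; ip_hash is the simplest built-in way to get it:

```nginx
upstream seaside {
    ip_hash;                  # keep each client on the same image,
                              # since Seaside sessions live in-image
    server 127.0.0.1:8081;    # one image per virtual core
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
}

server {
    listen 80;
    location / {
        proxy_pass http://seaside;
    }
}
```

Each image is started on its own port, and nginx spreads new clients across them.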

That way, while one user's expensive operation is taking place, other users 
can be directed to another, more idle image.


