[Seaside] Re: Seaside and Connection Reset by Peer problems
Sven Van Caekenberghe
sven at stfx.eu
Tue Feb 17 18:14:49 UTC 2015
You are benchmarking 'session creation', not 'session use'; think about it. Correctly benchmarking Seaside is hard to do because it is stateful by definition.
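To illustrate the difference, here is a sketch (the URL, port, and application name are illustrative assumptions): hammering the application's entry point makes Seaside create a fresh session per request, while replaying a captured session URL, which Seaside encodes with an `_s` parameter by default, exercises an existing session.

```shell
# Sketch only: assumes a Seaside app registered at /myapp on port 8080.

# Every request below hits the entry point, so Seaside creates a
# brand-new session each time -- this benchmarks 'session creation'.
ab -n 1000 -c 32 http://localhost:8080/myapp

# To benchmark 'session use' instead, first capture the redirect to a
# session-specific URL, then replay requests against that URL.
url=$(curl -s -o /dev/null -w '%{redirect_url}' http://localhost:8080/myapp)
ab -n 1000 -c 32 "$url"
```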
You need to increase #listenBacklogSize from its default of 32 if you want more concurrency.
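A minimal sketch of doing that from the image side; where exactly the setting lives varies by Zinc version, so the receiver and the value 128 here are assumptions:

```smalltalk
"Sketch only: raise the listen backlog so more pending connections
 are queued instead of being reset. The accessor location differs
 between Zinc versions; in some it is a networking-utils setting."
ZnNetworkingUtils default listenBacklogSize: 128.
```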
Normal performance for one Seaside image is 50-100 full dynamic req/s.
A pure Zinc HTTP server can do 10x as much, for example when serving a single byte over reused connections, but throughput drops as the work and payload size per request increase.
Load balancing is the answer, as is offloading the serving of static resources.
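Both points can be sketched in one front-end config; this assumes nginx (as used in the setup quoted below), and the ports and paths are illustrative:

```nginx
# Sketch only: two Pharo image workers behind nginx, with static
# files served directly by nginx so the images only see dynamic work.
upstream seaside_workers {
    ip_hash;                    # simple stickiness: same client IP -> same image
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 80;

    # Offload static resources (path is an assumption).
    location /files/ {
        root /var/www/seaside;
    }

    location / {
        proxy_pass http://seaside_workers;
        proxy_set_header Host $host;
    }
}
```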
> On 17 Feb 2015, at 19:52, David Carlos Manuelda <stormbyte at gmail.com> wrote:
> Sebastian Sastre wrote:
>> To be safe, if you want to go beyond 10 or 15 concurrent connections you
>> put additional Pharo image workers so you scale your application
>> horizontally. It makes good use of CPU too.
>> There is a point in which all stacks have to do it, so yes, I think you
>> are testing the borders of one Pharo worker.
>> PS: when you use more than one, you have to design your app in a "more
>> stateless way" and use sticky sessions
> Thanks for your response.
> Yes, in previous tests I set up an array of 8 Pharo images with nginx as a
> load balancer with sticky sessions, and of course it handled ~1k concurrent
> requests without problems. But it still failed beyond some point, which is
> why I decided to run tests on a single image.
> Isn't there any way to change this behavior, for example with a higher
> timeout or some other setting, so that connections are not rejected so
> soon? Because in my opinion, fewer than 300 requests per second on a single
> image is not such a high load that it should start dropping connections.