[Seaside-dev] Handling of session expiration
dale.henrichs at gemstone.com
Thu Nov 8 17:44:49 UTC 2007
Esteban A. Maringolo wrote:
>On Nov 7, 2007 2:36 PM, Dale Henrichs <dale.henrichs at gemstone.com> wrote:
>>In GemStone, since we are oriented towards multiple vms sharing session
>>state, we've moved the session expiration logic into a separate vm (our
>>maintenance vm) that expires sessions about once a minute. By persisting
>>session state we also avoid the load on the in-memory vm garbage
>>collector, of course we just move the load to the repository garbage
>>collector, but that only needs to be run on the order of hours to keep
>Let me see if I understand... when you say "persisting session state"
>you mean doing it explicitly or just because of the persistent nature
We have chosen to persist the session state (using the persistent nature
of GemStone), so that we can distribute the handling of requests across
multiple vms without using session affinity...
>Do you have any numbers on peak usage before running the GC?
>I mean... how much garbage does Seaside usage generate? And how long
>does that garbage live?
In some of the scaling tests that I've recently run at 15
requests/second (creating a new session for each request), we were
persisting 6k objects/second and about 600k bytes/second (we've found
over time that the average object is 100 bytes). That works out to about
400 objects created per request. We used a session expiry of 10 minutes,
so at steady state about 3.6M objects (360M bytes) need to be kept
around ... In GemStone we have an epoch garbage collector that was
firing every half hour and a full mark-for-collect running every two
hours...
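As a sanity check, the arithmetic behind those figures works out like this (all inputs are taken from the numbers quoted above; nothing here is newly measured):

```javascript
// Back-of-the-envelope check of the scaling figures in the post.
const requestsPerSec = 15;
const objectsPerSec = 6000;       // 6k objects/second persisted
const avgObjectBytes = 100;       // observed average object size
const expirySecs = 10 * 60;       // 10-minute session expiry

const objectsPerRequest = objectsPerSec / requestsPerSec;     // 400
const steadyStateObjects = objectsPerSec * expirySecs;        // 3.6M
const steadyStateBytes = steadyStateObjects * avgObjectBytes; // 360M

console.log(objectsPerRequest, steadyStateObjects, steadyStateBytes);
```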
>>I understand that folks using Seaside in production have a couple of
>>different schemes for managing session state (non-GemStone instances),
>>first of which is to spread the session state load across multiple vms
>>(using session affinity) so that no one vm's garbage collector is slammed
>>to the wall.
>We're considering that option too, to scale as linearly as
>possible, but we want to be able to handle as many concurrent
>sessions per image as possible. Dolphin proved to be very
>stable under heavy load, with almost the maximum number of objects it
>can handle. I just discovered that it's not equally stable with a huge
>number of processes.
Is it the process scheduler that is causing the headache?
>>Secondly, they leave the session expiry at a low value (say
>>ten minutes) and then use a browser-based thread that keeps the session
>>alive as long as the page is visible in the browser.
>How does this browser-based thread work?
I think that Ramon Leon mentioned this in a Seaside post, but I believe
the trick was to use JavaScript in the browser to ping the Seaside
server once every couple of minutes to keep the session alive, so that
users don't have the bad experience of sessions expiring out from
under their noses when they get distracted by something else going on
... It is definitely an application-based trick.
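A minimal sketch of that trick might look like the following (the ping URL, the `_s` session parameter, and the two-minute interval are illustrative assumptions, not details from Ramon's post):

```javascript
// Hypothetical keep-alive: ping the Seaside server periodically while
// the page is open so the server-side session timer keeps resetting.
// Returns a function that stops the pinging (e.g. on page unload).
function startKeepAlive(ping, intervalMs) {
  const id = setInterval(ping, intervalMs);
  return () => clearInterval(id);
}

// In a browser this would be wired up roughly as:
//   const stop = startKeepAlive(
//     () => fetch('/seaside/app?_s=SESSIONKEY&keepalive=1'),
//     2 * 60 * 1000); // every two minutes
//   window.addEventListener('unload', stop);
```

The server side only needs a handler that touches the session's last-access timestamp; the response body can be empty.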
>>If you do look at WARegistry>>shouldCollectHandlers you'll notice that
>>sessions are expired only when the registry grows .... that logic could
>>be adjusted to expire sessions when the vm is under memory pressure as
>>well, so that aged sessions can be expired after a period of intense
>That's another option. But unless a better approach appears, we'll
>have a background process doing the cleanup. Anyway, I'm trying to
>avoid having to do that cleanup at all.
It is interesting that you're not seeing the sessions expiring ...
perhaps it has something to do with the way that dictionaries are sized?
I know that the default mechanism for running the expiration thread
waits for the dictionary to grow by 10 entries ...
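The growth-triggered expiry described above could be sketched like this (the class, method names, and 10-minute expiry are illustrative; this is not Seaside's actual implementation):

```javascript
// Illustrative expire-on-growth: expired sessions are only swept once
// the registry has grown by some number of entries since the last
// sweep (the post mentions a default growth threshold of 10).
class SessionRegistry {
  constructor(growthThreshold = 10) {
    this.handlers = new Map(); // key -> { lastAccess: ms timestamp }
    this.sizeAtLastSweep = 0;
    this.growthThreshold = growthThreshold;
  }

  register(key, handler) {
    this.handlers.set(key, handler);
    if (this.shouldCollectHandlers()) this.expireStale(Date.now());
  }

  // Sweep only when the registry has grown enough since the last sweep;
  // a quiet registry is never swept, which matches the observation that
  // sessions may not expire when traffic stops.
  shouldCollectHandlers() {
    return this.handlers.size - this.sizeAtLastSweep >= this.growthThreshold;
  }

  expireStale(now, maxAgeMs = 10 * 60 * 1000) { // 10-minute expiry
    for (const [key, h] of this.handlers) {
      if (now - h.lastAccess > maxAgeMs) this.handlers.delete(key);
    }
    this.sizeAtLastSweep = this.handlers.size;
  }
}
```

Note the corollary: if no new sessions arrive, `shouldCollectHandlers` never fires and stale sessions linger, which is why memory-pressure-based expiry (as suggested above) can be a useful complement.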