J J wrote:
What I want to do is take total control of the scheduling from a higher-priority process and try out an event-driven style scheduler.
Tweak contains an example of this. Its ScriptScheduler is a custom scheduler which guarantees that each script (process) under its control is scheduled with the semantics that I defined for it (check out the Croquet SDK to see it in action).
Then, if a process yields the CPU in some way, it goes up in priority; if it uses its entire quantum, it is dropped in priority. But for this to work I would need my new scheduler class to get involved any time there is some kind of context switch. It can't happen that when the wait is over my process comes alive to find that a different process is current than the one it expects.
You'd have to do something similar to Tweak's scheduler then. You can't beat the VM into not switching processes, or into putting your process in control of all the other processes. What you can do, however, is instruct *your* processes such that they adhere to your rules within your context.
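The promote-on-yield / demote-on-exhaustion rule described above is essentially a multilevel feedback policy. Here is a rough sketch of just that rule in Python (not Squeak code; `FeedbackScheduler` and its method names are invented for illustration):

```python
class FeedbackScheduler:
    """Toy multilevel feedback rule: priority 0 is highest."""

    def __init__(self, levels=5):
        self.levels = levels

    def after_run(self, priority, used, quantum):
        """Return the new priority after a process has run.

        used    -- CPU time the process actually consumed
        quantum -- the time slice it was given
        """
        if used < quantum:                      # yielded early: promote
            return max(priority - 1, 0)
        return min(priority + 1, self.levels - 1)  # exhausted: demote
```

A real scheduler would of course also have to hook the context switch itself, which is exactly the part the VM won't hand over.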
Well, I see people on the list always saying to use Comanche as little as possible because it is too slow, but I wonder why this is the case. Yaws (the Erlang web server, which spawns one new process per connection) beats Apache quite handily, I wonder why Comanche couldn't do the same.
I'd think because of I/O speed. It would be trivial to fire off a couple thousand processes in Squeak (I have done that). However, web servers are often not limited by how quickly they can switch between processes but rather by how effectively they can pump I/O (files) to the network (which, if done right, doesn't require any user-level code in the middle at all).
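The "no user-level code in the middle" path usually means a zero-copy call such as sendfile(2), where the kernel moves file bytes straight to the socket without copying through a user buffer. A minimal sketch in Python (assuming a Linux-style `os.sendfile`; `serve_file` is a hypothetical name, not anything in Comanche):

```python
import os

def serve_file(sock_fd, path):
    """Send a whole file to a socket via the kernel's zero-copy path."""
    with open(path, 'rb') as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # os.sendfile returns how many bytes the kernel shipped
            sent += os.sendfile(sock_fd, f.fileno(), sent, size - sent)
        return sent
```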
I read on the Wiki that Comanche also forks a new process for each new connection, but from what I could tell the fork is at the same priority as the server. This would mean that if 30 clients connect right after each other, 29 fast ones and one that requires a lot of processing, then the clients will connect and be serviced until the one long request hits, and the rest have to wait for it to finish.
I haven't looked at the code but I somewhat doubt that. It'd be trivial to fix (see below).
With the event-driven scheduler described above, each thread would be started at the highest priority with the lowest quantum (since the server spends the vast majority of its life sleeping, it will quickly be promoted to top normal priority). So the first quick processes connect, make their requests, and yield before using their quantum. The long process would run out its entire 10 or 20ms quantum and get demoted, allowing all the rest of the processes to come in and be processed before it runs again.
Now the long process would actually take longer in this case, but more distinct clients are serviced in the same time, making the server at least appear more responsive.
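The fast-clients-overtake-the-long-one scenario can be checked with a toy multilevel feedback queue. This Python simulation is illustrative only (not Comanche or Squeak internals): every job starts at the top level, and burning a full quantum demotes it, so the short requests complete before the long one.

```python
from collections import deque

def simulate(jobs, quantum=10, levels=3):
    """jobs: dict of name -> total work units. Returns completion order."""
    queues = [deque() for _ in range(levels)]   # index 0 = highest priority
    remaining = dict(jobs)
    for name in jobs:
        queues[0].append(name)                  # everyone starts at the top
    done = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        used = min(quantum, remaining[name])
        remaining[name] -= used
        if remaining[name] == 0:
            done.append(name)
        elif used == quantum:                   # exhausted its slice: demote
            queues[min(level + 1, levels - 1)].append(name)
        else:
            queues[level].append(name)
    return done

# one long request plus three quick ones, long one submitted first:
order = simulate({'long': 100, 'a': 5, 'b': 5, 'c': 5})
# → ['a', 'b', 'c', 'long']
```

Even though the long job arrived first, its first quantum exhaustion pushes it below the quick jobs, which is exactly the "appear more responsive" effect.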
Yes, prioritizing the quick requests is certainly a good idea for something like a web server. A simpler version, though, would be to run a high-priority process that simply yields every 50 msecs or so to shuffle the worker processes a little. It wouldn't be quite as effective as your event scheduler but, having written a custom scheduler myself, I can say it is *infinitely* easier to implement ;-)
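The simpler 50-msec-shuffle variant amounts to plain round-robin driven by a periodic tick, with no per-process priority bookkeeping at all. A toy model in Python (again illustrative, not Squeak's ProcessorScheduler; the tick stands in for the high-priority process waking up):

```python
from collections import deque

def run_with_ticks(jobs, tick=50):
    """jobs: dict of name -> total work (ms). Returns completion order."""
    queue = deque(jobs)
    remaining = dict(jobs)
    done = []
    while queue:
        name = queue.popleft()
        used = min(tick, remaining[name])   # runs until the next tick
        remaining[name] -= used
        if remaining[name] == 0:
            done.append(name)
        else:
            queue.append(name)              # tick fired: back of the queue
    return done
```

The long request still can't monopolize the CPU for more than one tick, which buys most of the responsiveness with almost none of the scheduler machinery.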
Cheers, - Andreas