Lots of concurrent IO.

Scott A Crosby crosby at qwes.math.cmu.edu
Thu Aug 9 20:51:32 UTC 2001


Hello. One of my personal projects has been to create a multiuser
virtual world under Squeak. I'm starting small; right now I'm working on a
trivial text-based one.

I'm doing this under 3.1 on Linux.

One requirement is to handle a large number of concurrent socket
connections and to handle incoming data securely, so the server is immune
to DoS. By a large number, I mean 50-500 connections. Thus, busy-waiting
would be deadly.

Because I feel that concurrent programming is an incredible source of
complexity and locking overhead, I will be taking an event-driven
approach. I want one master queue: all incoming packets, once complete,
are added to that queue, and the core dispatcher deals with each one in
turn. I want it to be fairly similar to the Tcl event loop.
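
To make this concrete, here is roughly the shape of the dispatcher I have
in mind (just a rough sketch: SharedQueue is the stock Squeak class, and
handleMessage: is a placeholder for whatever the dispatcher ends up doing
with each message):

    | queue |
    queue := SharedQueue new.
    "Core dispatcher process: sleeps until something is queued, then
     handles each completed message in turn."
    [[true] whileTrue: [
        self handleMessage: queue next  "SharedQueue>>next blocks until an item arrives"
    ]] fork.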

I searched for some sort of poll() operation, whereby one process could
monitor many sockets for changes. No luck.

So I've switched to an approach similar to Java's: two processes per
socket, one to read and one to write. The reader process blocks waiting
for data to arrive on the socket, then parses it and adds a message to the
central queue.
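
In code, each connection's reader process would look roughly like this
(again a rough sketch: parseRequestFrom: is a placeholder for my protocol
parser, and queue is the shared master queue from above):

    "Reader process for one connection: block until data arrives, parse
     it, and hand the resulting message to the master queue."
    [[socket isConnected] whileTrue: [
        socket waitForDataUntil: (Socket deadlineSecs: 60).
        socket dataAvailable
            ifTrue: [queue nextPut: (self parseRequestFrom: socket)]
    ]] fork.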

My problem:
   I can't get the reader process to block. If I use
  Socket>>waitForDataUntil: (Socket deadlineSecs: 60)

it appears that Squeak only checks the socket every 60 seconds. As a
workaround, I'd have to reduce the deadline to 100ms or lower, turning
this into a busy-waiting loop. I don't want that.
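
For completeness, the workaround would look something like this, which is
exactly the distributed busy-waiting I'm trying to avoid (parseRequestFrom:
is the same placeholder as above):

    "Polling workaround: wake up every 100ms and check for data.
     With hundreds of connections this burns CPU for nothing."
    [[socket isConnected] whileTrue: [
        socket dataAvailable
            ifTrue: [queue nextPut: (self parseRequestFrom: socket)]
            ifFalse: [(Delay forMilliseconds: 100) wait]
    ]] fork.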

In the definition of this method, it blocks on:
     self readSemaphore waitTimeoutMSecs: .....

Presumably, the semaphore is never being signaled, so the wait only
returns at the timeout.

Is there a fix so that the runtime will signal this semaphore when new
data arrives? Or is there an alternative that will work better?

Scott.




