Squeak Socket Primitives

Raab, Andreas Andreas.Raab at disney.com
Wed Nov 10 03:23:21 UTC 1999


Craig,

Thanks for the nice comparison. Here is what I read into it:

> Basically, the VM needs to wait for network events 
> at the host level in order to be efficient, and not 
> doing so causes pointless complication in addition 
> to poor performance. 

True. But as I said before, this is an implementation issue, not a design
issue.

> On every supported platform, except Macintosh, the VM polls 
> outside the host kernel for network events, with non-blocking sockets. 
> This is inefficient. It also leads to needless complications. 
> For example, on Unix, Ian makes special effort to ensure keyboard 
> interrupts are handled during waits for network events 
> [see aioPollForIO()]. On Win32, Andreas peppers the sources with socket 
> access protection [LOCKSOCKET() and UNLOCKSOCKET()] and 
> "polling cycles" [SetEvent(sqPollEvent), which can cause delays of up to
100ms]. 

True. But again, an implementation issue. (Side note: if you are running in
separate threads, you had better make sure that you lock the structure you
work on; otherwise your system may go gaga. The cost is practically zero,
but the potential problems are huge.)
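
To illustrate what I mean (a minimal sketch only -- LOCKSOCKET/UNLOCKSOCKET
mirror the macros in the Win32 support code, everything else here is made
up):

  /* Sketch: protecting shared socket state across threads, Win32 flavor.
     LOCKSOCKET/UNLOCKSOCKET mirror the macros in the Win32 support code;
     the struct layout and the functions are illustrative only. */
  #include <windows.h>
  #include <stdlib.h>

  typedef struct {
    CRITICAL_SECTION lock;     /* guards all fields below */
    SOCKET s;
    int readWatcherPending;
    int writeWatcherPending;
  } privateSocketStruct;

  #define LOCKSOCKET(pss)   EnterCriticalSection(&(pss)->lock)
  #define UNLOCKSOCKET(pss) LeaveCriticalSection(&(pss)->lock)

  static privateSocketStruct *newSocketStruct(SOCKET s)
  {
    privateSocketStruct *pss = calloc(1, sizeof(*pss));
    InitializeCriticalSection(&pss->lock);
    pss->s = s;
    return pss;
  }

  /* Called from the I/O thread as well as from the VM thread. */
  static void noteReadable(privateSocketStruct *pss)
  {
    LOCKSOCKET(pss);              /* uncontended locks are nearly free */
    pss->readWatcherPending = 0;  /* touch shared state only while locked */
    UNLOCKSOCKET(pss);
  }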

> The Macintosh implementation uses a notification service 
> provided by MacTCP instead of polling for network events. 
> But Apple is phasing out MacTCP. There is some confusion 
> as to the precise nature of the sockets interface that 
> future versions of MacOS will provide. However, it seems 
> that it will be a Berkeley-style interface, so Macintosh will 
> be subject to complications such as those mentioned above. 

Again, implementation.

> There is only one Smalltalk semaphore provided for synchronization 
> with all a socket's network events. It's possible to get a 
> writeability signal while waiting for readability, and vice-versa. 
> Also, one cannot wait for both at the same time. It is therefore 
> not possible to correctly implement protocols in which separate 
> Smalltalk Processes read from and write to a socket. The implementation 
> requires that correctly-functioning protocols which use it will 
> read and write sequentially in a single Smalltalk Process, 
> consume all available incoming data before writing, and finish 
> writing before new incoming data arrives. This is okay for simple 
> call-response protocols like POP, but not for more complex ones like IRCP.


Here's a real difference. Two semaphores instead of one.
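
At the primitive level that change would be small. Something like this (a
sketch only; signalSemaphoreWithIndex() is the existing VM callback for
signalling a Smalltalk semaphore from C, the struct and the event hook are
made up):

  /* Sketch: one Smalltalk semaphore per event kind instead of one for all.
     signalSemaphoreWithIndex() is how the VM signals a Smalltalk semaphore
     from C; everything else is illustrative. */
  extern int signalSemaphoreWithIndex(int semaphoreIndex);

  typedef struct {
    int readSemaIndex;   /* registered via a (hypothetical) new primitive */
    int writeSemaIndex;
  } socketSemaphores;

  static void noteSocketEvent(socketSemaphores *sema,
                              int readable, int writable)
  {
    /* A process waiting for readability is never woken by a write
       event, and vice versa: the two directions can't interfere. */
    if (readable) signalSemaphoreWithIndex(sema->readSemaIndex);
    if (writable) signalSemaphoreWithIndex(sema->writeSemaIndex);
  }

The Smalltalk side then simply waits on whichever semaphore matches the
event it cares about.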

> Finally, the way a socket's network event synchronization semaphore 
> is used in Smalltalk is inefficient and awkward. Each method in the 
> "waiting" protocol of Socket uses a >>whileTrue: loop to wait for a 
> particular event (inefficient), broken by the signalling of the 
> synchronization semaphore by a socket primitive or by a timeout 
> Delay (awkward). 

Nothing prevents you from writing different wait functions. It is, however,
a nice feature that we've got Smalltalk control over this stuff. Moving it
down to the VM level is possible (even trivial) but may have some unwanted
effects (how do you interrupt/abort a connection that's stuck in a blocking
send/receive?!). All in all, I prefer the ST control for most applications,
except perhaps in the case of running servers (and one could easily think
about integrating the VM-level wait behavior on an optional basis).
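
For the record, a VM-level wait wouldn't have to be completely
uninterruptible. One could, for instance, block in select() with a bounded
slice and check an abort flag on each round. A sketch of the kind of
compromise I mean, not code from the VM (sqSocketAbortRequested() is made
up):

  /* Sketch: a VM-level wait for readability that can still be aborted
     from Smalltalk. sqSocketAbortRequested() is hypothetical; the
     select() usage is plain Berkeley sockets. */
  #include <sys/select.h>
  #include <sys/time.h>

  extern int sqSocketAbortRequested(int fd);  /* hypothetical */

  static int waitForReadability(int fd, int timeoutMSecs)
  {
    int remaining = timeoutMSecs;
    while (remaining > 0) {
      int slice = remaining < 100 ? remaining : 100;
      fd_set fds;
      struct timeval tv;
      int n;

      FD_ZERO(&fds);
      FD_SET(fd, &fds);
      tv.tv_sec = 0;
      tv.tv_usec = slice * 1000;  /* wake up regularly to check for abort */

      n = select(fd + 1, &fds, NULL, NULL, &tv);
      if (n > 0) return 1;                       /* readable */
      if (n < 0) return -1;                      /* error */
      if (sqSocketAbortRequested(fd)) return 0;  /* Smalltalk said stop */
      remaining -= slice;
    }
    return 0;  /* timed out */
  }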

> The loop test selects for the desired event, effectively 
> acting as a partial hedge against the occurrence of an 
> undesired socket event. 

Again, two semaphores vs. one.

> It would make a lot more sense to simply tell the VM to 
> associate a distinct Smalltalk semaphore with a particular 
> event, wait on the semaphore until the VM signals it, and 
> let the primitives worry about timeouts (timeouts are supported 
> by the Berkeley interface). The whole >>waitTimeoutMSecs: mechanism 
> is weird and unnecessary. 

See my comments above on moving waits and timeouts down to the VM level.
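
(For reference: the Berkeley-style timeout Craig alludes to would
presumably be either select() with a timeval, as sketched above, or
SO_RCVTIMEO where the stack supports it. A sketch, with the caveat that
support for this option is not uniform across platforms:)

  /* Sketch: pushing the receive timeout down to the host via SO_RCVTIMEO,
     so recv() returns with an error instead of blocking forever. */
  #include <sys/socket.h>
  #include <sys/time.h>

  static int setReceiveTimeout(int fd, int timeoutMSecs)
  {
    struct timeval tv;
    tv.tv_sec = timeoutMSecs / 1000;
    tv.tv_usec = (timeoutMSecs % 1000) * 1000;
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
  }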

> In my implementation, there are two separate host threads per
> socket. One is for connecting and reading, the other is for writing.
> Sockets are used in blocking mode. Instead of polling for socket events,
> each thread blocks as necessary in a loop (when waiting for 
> a request from Smalltalk to wait for an event, and when 
> waiting for the event to occur). Each thread is associated 
> with a unique Smalltalk semaphore. A Smalltalk process which 
> wants to wait for an event simply sends >>wait, just like 
> it would when waiting for protected access to any ordinary Smalltalk
> object.

Again, implementation.
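
If I understand the scheme correctly, each thread is essentially doing
something like the following (pthreads flavor; all names except
signalSemaphoreWithIndex() are made up, and a real version would need to
hand the buffer over to Smalltalk safely):

  /* Sketch: a dedicated reader thread per socket, blocking in recv() and
     signalling a Smalltalk semaphore when data arrives. */
  #include <pthread.h>
  #include <stddef.h>
  #include <sys/socket.h>

  extern int signalSemaphoreWithIndex(int semaphoreIndex);

  typedef struct {
    int fd;
    int readSemaIndex;  /* the semaphore the Smalltalk side waits on */
    char buffer[4096];
    int bytesAvailable;
  } readerState;

  static void *readerLoop(void *arg)
  {
    readerState *state = (readerState *)arg;
    for (;;) {
      /* Block in the host kernel -- no polling, no busy waiting. */
      int n = recv(state->fd, state->buffer, sizeof(state->buffer), 0);
      if (n <= 0) break;  /* closed or error; report it in a real version */
      state->bytesAvailable = n;
      /* Wake the Smalltalk process blocked in the semaphore's #wait. */
      signalSemaphoreWithIndex(state->readSemaIndex);
    }
    return NULL;
  }

  static void startReader(readerState *state)
  {
    pthread_t tid;
    pthread_create(&tid, NULL, readerLoop, state);
    pthread_detach(tid);
  }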

> This is simpler, more efficient, easier to understand, 
> easier to port, and easier to debug than the current 
> implementation. I suspect the current implementation was 
> heavily guided by the bizarreness of MacTCP (and possibly 
> a hesitancy to use MacOS threads-- they may not even have 
> existed when the initial design was conceived). 

And, finally, implementation. I'm leaving out the remaining part because
everything that follows is ST behavior and doesn't relate to a particular
set of primitives. As I said, looking through the above, it all seems that
the primary question is how many semaphores we have and whether we
eventually want to move timeouts down to the VM level. That doesn't give me
the impression that our current primitives are inherently flawed. I will
have to check your correspondent's primitives, but I would be *very* amazed
if they turned out to be radically different from the current set.

  Andreas




