[squeak-dev] Re: Rewriting an Input/EventSensor code

Jecel Assumpcao Jr jecel at merlintec.com
Sun Mar 22 21:33:23 UTC 2009


Igor Stasenko wrote on Sun, 22 Mar 2009 20:42:00 +0200

> Same for gestures - you can emit very simple/basic events, and then
> combine them to make something more complex/different.

It would be great if we could have a whole eco-system of "filters" that
receive certain kinds of announcements and generate new ones. This could
even be made efficient if there were some way to check whether anybody is
subscribed to a given event: a filter whose announcements nobody is
listening to could suspend its activity, unsubscribing from its own
inputs in turn, until someone interested showed up.
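The lazy-filter idea above can be sketched roughly as follows. This is not
the actual Squeak Announcements API - the class and event names (Announcer,
KeyDown, TextInput, KeyToTextFilter) are hypothetical, and the sketch is in
Python purely for illustration:

```python
class Announcer:
    """Minimal announcement hub: callbacks registered per event class."""
    def __init__(self):
        self.subscriptions = {}  # event class -> list of callbacks

    def subscribe(self, event_cls, action):
        self.subscriptions.setdefault(event_cls, []).append(action)

    def unsubscribe(self, event_cls, action):
        self.subscriptions[event_cls].remove(action)
        if not self.subscriptions[event_cls]:
            del self.subscriptions[event_cls]

    def has_subscribers(self, event_cls):
        # The key capability the post asks for: test for interest.
        return bool(self.subscriptions.get(event_cls))

    def announce(self, event):
        for action in list(self.subscriptions.get(type(event), [])):
            action(event)


class KeyDown:          # hypothetical raw input event
    def __init__(self, char):
        self.char = char


class TextInput:        # hypothetical derived (filtered) event
    def __init__(self, text):
        self.text = text


class KeyToTextFilter:
    """Turns raw KeyDown announcements into TextInput announcements,
    but only stays wired to its input while someone downstream listens."""
    def __init__(self, announcer):
        self.announcer = announcer
        self.active = False

    def check_interest(self):
        wanted = self.announcer.has_subscribers(TextInput)
        if wanted and not self.active:
            self.active = True
            self.announcer.subscribe(KeyDown, self.on_key)
        elif not wanted and self.active:
            self.active = False
            self.announcer.unsubscribe(KeyDown, self.on_key)

    def on_key(self, event):
        self.announcer.announce(TextInput(event.char))


# Usage: the filter subscribes to KeyDown only after a TextInput
# subscriber appears, and drops its subscription when the last one goes.
hub = Announcer()
filt = KeyToTextFilter(hub)
got = []
listener = lambda e: got.append(e.text)

hub.subscribe(TextInput, listener)
filt.check_interest()            # filter wakes up
hub.announce(KeyDown('a'))       # got is now ['a']

hub.unsubscribe(TextInput, listener)
filt.check_interest()            # filter suspends itself
hub.announce(KeyDown('b'))       # ignored; nobody is wired to KeyDown
```

A real version would have the hub notify filters automatically when
subscriptions change, rather than requiring an explicit check_interest call.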

Some application might just want to get text, independently of whether it
was typed on the keyboard, recognized from speech or drawn with a pen. A
game might want to see raw keyboard events to deal with keyDown and
keyUp separately, though it would be better for a "game filter" to
handle this so the application could use the keyboard, multi-touch
gestures or a fancy joystick equally well.
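The source-independence point can be shown with an even smaller sketch:
two producers (imagine a keyboard filter and a speech recognizer - both
hypothetical here) announce the same "text" event, so the application's
handler never knows or cares which device produced it:

```python
def make_hub():
    """Tiny event hub keyed by event kind (a plain string here)."""
    handlers = {}
    def on(kind, fn):
        handlers.setdefault(kind, []).append(fn)
    def emit(kind, payload):
        for fn in handlers.get(kind, []):
            fn(payload)
    return on, emit

on, emit = make_hub()
received = []
on("text", received.append)   # the application: source-agnostic

emit("text", "hi")            # e.g. output of a keyboard filter
emit("text", "hello")         # e.g. output of a speech recognizer
```

Either way, received ends up holding both strings, with no trace of
which input path delivered them.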

About the general idea, I agree that the interface between the VM and
the image should be changed as little as possible (not at all would be
ideal, but that is unlikely with the iPhone and such) and that the
current in-image APIs should remain available for full compatibility -
only new applications would take full advantage of the announcements.

-- Jecel

