[Vm-dev] The cooked and the raw [RE: Touch/MultiTouch Events]
ken.dickey at whidbey.com
Sun Sep 20 15:15:30 UTC 2020
Thanks much for the additional input!
As I noted, I am coming up from zero on this and have many references to
digest.
To clarify, my thought is to have the raw inputs packaged by the VM, with
St code doing the gesture recognition: initial states are selected by
screen areas (morphs), the events are parsed while giving dynamic user
feedback (morph and finger/stylus-tip highlighting, finger/stylus
trails), and state machines are followed to complete the recognition.
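To make the state-machine idea concrete, here is a toy sketch (in Python purely for illustration; every name and threshold is invented) of a recognizer that consumes raw down/move/up events and reports a tap or a swipe:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    kind: str      # 'down', 'move', or 'up' -- as packaged by the VM
    x: float
    y: float
    t: float       # timestamp in seconds

class TapOrSwipeRecognizer:
    """Tiny state machine: IDLE -> TRACKING -> (tap | swipe)."""
    def __init__(self, max_tap_dist=10.0, max_tap_time=0.3):
        self.state = 'IDLE'
        self.start = None
        self.max_tap_dist = max_tap_dist
        self.max_tap_time = max_tap_time

    def feed(self, e):
        """Advance the machine; return a gesture name when one completes."""
        if self.state == 'IDLE' and e.kind == 'down':
            self.state, self.start = 'TRACKING', e
        elif self.state == 'TRACKING' and e.kind == 'up':
            self.state = 'IDLE'
            dx, dy = e.x - self.start.x, e.y - self.start.y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= self.max_tap_dist and e.t - self.start.t <= self.max_tap_time:
                return 'tap'
            return 'swipe'
        return None
```

The intermediate 'move' events are where the dynamic feedback (trails, highlighting) would hook in; this sketch ignores them for brevity.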
I would like declarative gesture-pattern descriptions and a way to
discover roles and bind objects to them, so that when a recognition
completes, the corresponding method is invoked with the role objects as
receiver and arguments.
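A toy sketch of the role-binding idea (again in Python for illustration; the `Gesture` class, the role names, and the `Canvas` example are all hypothetical): a declarative description names its roles and a selector, and on completion the first role becomes the receiver and the rest become arguments.

```python
class Gesture:
    """Hypothetical declarative description: a name, role names,
    and a selector (method name) to invoke on completion."""
    def __init__(self, name, roles, selector):
        self.name, self.roles, self.selector = name, roles, selector

    def fire(self, bindings):
        # First role is the receiver; remaining roles become arguments.
        receiver = bindings[self.roles[0]]
        args = [bindings[r] for r in self.roles[1:]]
        return getattr(receiver, self.selector)(*args)

class Canvas:
    """Stand-in for a morph that plays the 'target' role."""
    def drop(self, item):
        return 'dropped ' + item

drag_drop = Gesture('dragDrop', ['target', 'payload'], 'drop')
result = drag_drop.fire({'target': Canvas(), 'payload': 'cardMorph'})
```

In Smalltalk terms the dispatch would be a `perform:withArguments:` on the receiver role.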
At this point I am still "drawing on clouds", which is prior to the
"cast in jello" stage.
Given the gestures in common use, I would like a naming scheme and
compact recognition patterns akin to enumerators for collections.
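By analogy with collection enumerators (`select:`, `do:`, and friends), a recognized-gesture stream might be filtered and acted on with the same compact protocol. A purely illustrative Python sketch (all names invented):

```python
class GestureStream:
    """Hypothetical enumerator-style wrapper over a sequence of
    recognized gestures, mirroring Smalltalk's select:/do: protocol."""
    def __init__(self, gestures):
        self.gestures = list(gestures)

    def select(self, pred):
        # Analogue of Smalltalk's select: -- keep matching gestures.
        return GestureStream(g for g in self.gestures if pred(g))

    def do(self, action):
        # Analogue of Smalltalk's do: -- run an action on each gesture.
        for g in self.gestures:
            action(g)

seen = []
GestureStream(['tap', 'swipe', 'tap']) \
    .select(lambda g: g == 'tap') \
    .do(seen.append)
```

The attraction is that the recognition patterns then read like the familiar collection idioms rather than ad-hoc event-handler plumbing.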
What view of this gives the simplest, most comprehensible explanations?
Still reading and cogitating.
Thanks much for references and clarifying thoughts!
-KenD