[Vm-dev] The cooked and the raw [RE: Touch/MultiTouch Events]

ken.dickey at whidbey.com
Sat Sep 19 19:26:20 UTC 2020


My intuition is that some window systems will give us cooked/composite 
gesture events, while with others we will need optional Smalltalk code 
or a plugin to recognize and compose gesture events.
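
To make the image-side case concrete, here is a minimal sketch of 
composing raw touch points into a cooked gesture.  Every name in it 
(GestureRecognizer, PinchGesture, #touchId, #isMove, #emitGesture:) is 
hypothetical, not an existing Morphic protocol:

  Object subclass: #GestureRecognizer
      instanceVariableNames: 'activeTouches'
      classVariableNames: ''
      category: 'Gestures-Sketch'

  GestureRecognizer >> initialize
      super initialize.
      activeTouches := Dictionary new

  GestureRecognizer >> handleRawTouch: anEvent
      "Track raw touch points by id; once two are down and one of
       them moves, compose them into a single cooked pinch gesture."
      activeTouches at: anEvent touchId put: anEvent position.
      (activeTouches size = 2 and: [anEvent isMove]) ifTrue:
          [ | points |
            points := activeTouches values.
            self emitGesture: (PinchGesture
                center: (points first + points second) / 2
                span: (points first dist: points second)) ]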

One thing that has bothered me for some time is the difficulty of 
explaining how users interact with input events, and the amount of 
cooperation that components must agree to between themselves [e.g. 
drag 'n drop].

I think some of this is elegant ("I want her/him & she/he wants me"), 
but what I am looking for is a way to express interest in pattern roles.

I want to specify and recognize gesture patterns and object roles within 
each pattern.

So we match a (composed) gesture to a pattern within a sensitive area 
(a sketch follows the list below) to get:
   open/close
   drag 'n drop (draggable=source, droppable=target; object-for-drag, object-for-drop)
   expand/collapse (maximize/minimize)
   grow/shrink (pinch, press+drag)
   rescale (out/in)
   rotate
   stretch/adjust
   reposition
   scroll (swipe)
   select (tap, double-tap, select+tap)
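
As a sketch of what "expressing interest in pattern roles" might look 
like, with the selector #when:inRole:do: and the gesture accessors 
being made-up names rather than any existing protocol:

  anIconMorph
      when: #dragAndDrop inRole: #draggable
      do: [:gesture | gesture objectForDrag: anIconMorph model].

  aListMorph
      when: #dragAndDrop inRole: #droppable
      do: [:gesture | aListMorph addMorph: gesture objectForDrop].

  aWindowMorph
      when: #select inRole: #target
      do: [:gesture | gesture isDoubleTap ifTrue: [aWindowMorph expand]].

The matcher, not the morphs, would then pair a #draggable with a 
#droppable when the composed gesture fits the drag 'n drop pattern.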

The "same" gesture could map differently depending on the "sensitive 
area", e.g. open/close vs maximize/minimize; grow/shrink vs rescale vs 
stretch vs reposition.

Sensitive areas could compose, as mouse sensitivity does today, with 
sensitivity & role(s) given to any morph.
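
A sketch of that composition, with #sensitiveTo:as: as another made-up 
selector: dispatch would walk the morph hierarchy innermost-first, the 
way mouse events do, so the "same" pinch maps to whichever pattern the 
innermost sensitive area declares:

  titleBar sensitiveTo: #pinch as: #maximizeMinimize.
  contentPane sensitiveTo: #pinch as: #rescale.
  imageMorph sensitiveTo: #pinch as: #growShrink.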

Pluggable buttons/menus/etc. could then be redone in the new pattern.
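
A pluggable button, for instance, might reduce to one role declaration 
(again made-up selectors), keeping its usual model/actionSelector state:

  aButton
      when: #select inRole: #target
      do: [:gesture | aButton model perform: aButton actionSelector].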

I know this is both a code and a cognitive change, but easier to 
explain = more comprehensible.  I also think it could be more compactly 
expressive.

-KenD

