[Vm-dev] The cooked and the raw [RE: Touch/MultiTouch Events]

Tom Beckmann tomjonabc at gmail.com
Sat Sep 19 20:06:54 UTC 2020


Hi Ken,

just commenting on the availability of gestures: do you know of any
windowing systems that provide high-level touch gestures? The systems
I'm familiar with all defer this to a UI library.

libinput, which you linked to in an earlier thread, explains why it
cannot, in general, provide high-level gestures on absolute touch
devices [1]. It does provide gestures for touchpads, however. Those
could also be of interest, but I think they are a different event type
altogether that should not be mingled with touch events.
Similarly, OS X appears to provide high-level gestures for
touchpads/trackpads [2], but its touchscreen APIs again appear to defer
to UI libraries [3] (these are just my takeaways from five minutes of
googling).
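
To make that split concrete, here is a minimal sketch of a libinput
event loop in C (error handling omitted; it assumes a udev seat and
permission to open the input devices): touchpads deliver already-cooked
pinch/swipe gesture events, while touchscreens deliver only raw
per-finger touch events that the UI layer must compose itself.

/* Minimal libinput sketch: cooked touchpad gestures vs. raw
 * touchscreen events. Build roughly with: cc sketch.c -linput -ludev */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <libinput.h>
#include <libudev.h>

static int open_restricted(const char *path, int flags, void *user_data)
{
    int fd = open(path, flags);
    return fd < 0 ? -1 : fd;
}

static void close_restricted(int fd, void *user_data)
{
    close(fd);
}

static const struct libinput_interface iface = {
    .open_restricted = open_restricted,
    .close_restricted = close_restricted,
};

int main(void)
{
    struct udev *udev = udev_new();
    struct libinput *li = libinput_udev_create_context(&iface, NULL, udev);
    libinput_udev_assign_seat(li, "seat0");

    for (;;) {
        struct libinput_event *ev;
        libinput_dispatch(li);
        while ((ev = libinput_get_event(li)) != NULL) {
            switch (libinput_event_get_type(ev)) {
            /* Touchpads: high-level, already-recognized gestures. */
            case LIBINPUT_EVENT_GESTURE_PINCH_BEGIN:
            case LIBINPUT_EVENT_GESTURE_PINCH_UPDATE:
            case LIBINPUT_EVENT_GESTURE_PINCH_END:
            case LIBINPUT_EVENT_GESTURE_SWIPE_BEGIN:
            case LIBINPUT_EVENT_GESTURE_SWIPE_UPDATE:
            case LIBINPUT_EVENT_GESTURE_SWIPE_END:
                printf("cooked touchpad gesture\n");
                break;
            /* Touchscreens: raw per-finger events only; composing
             * them into gestures is left to the toolkit. */
            case LIBINPUT_EVENT_TOUCH_DOWN:
            case LIBINPUT_EVENT_TOUCH_MOTION:
            case LIBINPUT_EVENT_TOUCH_UP:
                printf("raw touch event\n");
                break;
            default:
                break;
            }
            libinput_event_destroy(ev);
        }
        usleep(10000); /* a real loop would poll() on libinput_get_fd(li) */
    }
    libinput_unref(li);
    udev_unref(udev);
    return 0;
}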

Best,
Tom

[1]
https://wayland.freedesktop.org/libinput/doc/latest/gestures.html#touchscreen-gestures
[2]
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/EventOverview/HandlingTouchEvents/HandlingTouchEvents.html
[3]
https://developer.apple.com/documentation/uikit/touches_presses_and_gestures/handling_uikit_gestures

On Sat, Sep 19, 2020 at 9:37 PM <ken.dickey at whidbey.com> wrote:

>
> My intuition is that some window systems will give us cooked/composite
> gesture events, while with others we will need optional Smalltalk code
> or a plugin to recognize and compose gesture events.
>
> One thing that has bothered me for some time is the difficulty of
> explaining how users interact with input events and how much
> cooperation must be agreed between components (e.g. drag 'n drop).
>
> I think some of this is elegant ("I want her/him & she/he wants me") but
> what I am looking for is a way to express interest in pattern roles.
>
> I want to specify and recognize gesture patterns and object roles within
> each pattern.
>
> So match (composed) gesture to pattern within a sensitive area to get:
>    open/close
>    drag 'n drop (draggable=source, droppable=target; object-for-drag,
> object-for-drop)
>    expand/collapse (maximize/minimize)
>    grow/shrink (pinch, press+drag)
>    rescale (out/in)
>    rotate
>    stretch/adjust
>    reposition
>    scroll (swipe)
>    select (tap, double-tap, select+tap)
>
> The "same" gesture could map differently depending on the "sensitive
> area", e.g. open/close vs maximize/minimize; grow/shrink vs rescale vs
> stretch vs reposition.
>
> Sensitive areas could compose, as with mouse sensitivity. Sensitivity
> & role(s) could be given to any morph.
>
> Redo pluggable buttons/menus/... in the new pattern.
>
> I know this is both a code change and a cognitive change, but I think
> easier to explain = more comprehensible. I think it could also be more
> compactly expressive.
>
> -KenD
>
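
As a thought experiment, here is a rough C sketch of the matching Ken
describes above (all names are hypothetical; this is not an existing
Squeak or libinput API): a composed gesture is looked up against the
role of the sensitive area it landed in, so the "same" gesture maps to
different actions in different areas.

/* Hypothetical (gesture pattern, area role) -> action table. */
#include <stdio.h>
#include <stddef.h>

typedef enum { G_PINCH, G_SWIPE, G_DOUBLE_TAP } Gesture;
typedef enum { R_TITLEBAR, R_CANVAS, R_LIST } Role;

typedef struct {
    Gesture gesture;     /* the composed gesture pattern */
    Role role;           /* role of the sensitive area */
    const char *action;
} Binding;

static const Binding bindings[] = {
    { G_PINCH,      R_CANVAS,   "rescale"           },
    { G_PINCH,      R_TITLEBAR, "maximize/minimize" },
    { G_SWIPE,      R_LIST,     "scroll"            },
    { G_SWIPE,      R_CANVAS,   "reposition"        },
    { G_DOUBLE_TAP, R_LIST,     "select"            },
};

static const char *dispatch(Gesture g, Role r)
{
    for (size_t i = 0; i < sizeof bindings / sizeof *bindings; i++)
        if (bindings[i].gesture == g && bindings[i].role == r)
            return bindings[i].action;
    return "(no binding)";
}

int main(void)
{
    /* The "same" pinch gesture in two different sensitive areas: */
    printf("pinch on canvas   -> %s\n", dispatch(G_PINCH, R_CANVAS));
    printf("pinch on titlebar -> %s\n", dispatch(G_PINCH, R_TITLEBAR));
    return 0;
}

In the image, such a table would presumably live on the morphs
themselves, with sensitive areas composing the way mouse sensitivity
does.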