[Vm-dev] The cooked and the raw [RE: Touch/MultiTouch Events]

Phil B pbpublist at gmail.com
Sat Sep 19 22:12:57 UTC 2020


It's been a while, but when I was doing iOS and Android development, you
typically got the raw events and sent them off to a handler depending on
what you needed them to do.  That's why I believe this belongs in the
image, not in the VM.  It's all very context-sensitive: whether a single
finger dragging across the screen represents a swipe, scroll, drag or
something else depends on context (not just the events that came before
and/or after the current event, but also the GUI object being manipulated,
any interactions between GUI objects, and the application using them).
The VM simply doesn't have enough context to make an informed, or even
reasonable, guess as to what a sequence of touch events means at a higher
level.  The image does, if only in the sense that it has a set of use
cases it anticipates for a limited set of widgets.
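
To make this concrete, here's a minimal sketch in which the identical raw
drag means two different things depending on which morph receives it.
(All of the names below, RawTouchEvent, #handleRawTouch: and friends, are
invented for illustration; none of this is an existing protocol.)

    ScrollableListMorph >> handleRawTouch: aRawTouchEvent
        "A scrolling list interprets a one-finger drag as scrolling."
        aRawTouchEvent isMove ifTrue:
            [self scrollBy: aRawTouchEvent delta]

    PaintCanvasMorph >> handleRawTouch: aRawTouchEvent
        "A paint canvas interprets the very same drag as a brush stroke."
        aRawTouchEvent isMove ifTrue:
            [self strokeFrom: aRawTouchEvent previousPosition
                to: aRawTouchEvent position]

Only the receiving object knows which interpretation is right, and that
object lives in the image.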

For example, if I'm implementing a drawing application I may be very
interested in the raw events to determine whether the user is using a
stylus or a finger (perhaps a stylus represents a fine-tipped brush while
a finger represents blobby finger painting, etc.  Does the user want
multiple touches to represent higher-level gestures or simply multiple
simultaneous painting brushes?  Etc., etc.)  The gestures in a drawing
application are likely to be different, and more nuanced, than those for a
general-purpose GUI.  Touch-based 3D apps are another example where you
can pretty much throw out the rule book.  Same with touch-based games.  It
may not even be specific to the application: let's say I wanted a
transparent overlay covering the entire display that intercepts all touch
events and, if a touch appears to come from a stylus (or some other means
of relaying higher-precision touches), routes the events to a handwriting
recognition system, which then has to pass the recognized string to the
widget underneath where the text was written.  Yes, you can pretend that
touches are mouse events with a handful of gestures... if you don't mind
restricting yourself to very limited, lowest-common-denominator mobile
UIs.
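
A sketch of that overlay idea, again with invented names (#pointerType,
#rejectTouch:, #morphBeneath: and the recognizer are assumptions about
what a raw-event protocol might expose, not anything that exists today):

    HandwritingOverlayMorph >> handleRawTouch: evt
        "Stylus touches feed the recognizer; anything else is rejected
         so the dispatcher re-routes it to the morph underneath."
        evt pointerType = #stylus ifFalse: [^ self rejectTouch: evt].
        recognizer addPoint: evt position.
        evt isUp ifTrue:
            [(self morphBeneath: evt position)
                insertRecognizedText: recognizer recognizedString]

No per-widget gesture vocabulary could anticipate this; the overlay needs
the raw stream.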

I would agree that the image could and should provide some (optional)
default behavior(s) to 'cook'/synthesize these events into higher-level
events that get passed to Morphs (or whatever other UI framework) that
want them.  This would probably work very well for many use cases.  Just
implement it in such a way that it's easy to bypass or override, so that a
widget, application, world or image can say 'no, just give me the raw
events and I'll figure it out' or 'here, use my handler instead'.  This
also leaves the door open to someone else coming along and saying 'here's
a better way to abstract this' without reinventing the (mouse)wheel with N
buttons.  Perhaps morphs could have a gestureHandler that, if present,
takes care of things and optionally dispatches #swipeGestureEvent...,
#dragGestureEvent... etc. for you.  If a given morph doesn't provide a
handler, use the default handler at the world level, which deals with the
majority of 'typical' use cases.  Just thinking out loud.
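
Something along these lines, to sketch the lookup and the bypass (every
name here, #gestureHandler, #wantsRawTouchEvents, DefaultGestureHandler,
is hypothetical):

    Morph >> gestureHandler
        "Answer this morph's own handler, falling back to the
         world-level default when none was installed."
        ^ gestureHandler ifNil: [self world defaultGestureHandler]

    Morph >> dispatchRawTouch: evt
        "The bypass: a morph that answers true to #wantsRawTouchEvents
         sees the raw stream and no cooked gestures at all."
        self wantsRawTouchEvents
            ifTrue: [self handleRawTouch: evt]
            ifFalse: [self gestureHandler cook: evt for: self]

    DefaultGestureHandler >> cook: evt for: aMorph
        "Accumulate raw events until a gesture is recognized, then
         dispatch the corresponding cooked event to the morph."
        (self recognizeFrom: evt) ifNotNil:
            [:gesture | aMorph perform: gesture cookedSelector with: gesture]

Replacing the handler per morph, per application or per world would then
be a one-line assignment.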

I can say from experience that trying to derive uncooked events from
cooked ones (as I did for mouse wheel events in Cuis) is problematic from
both an implementation and a performance standpoint... and the code is
ugly.  Separating overcooked pasta is more fun.  At the same time, dealing
with raw events (as I tried for a period of time on Android before I wised
up) isn't normally the way you want to go for a 'typical' UI either.  So
yes, you often want a handler to do this for you... but it needs to be
easy to override/replace/bypass for when you hit the unusual use cases,
which are more common than you might imagine for touch.  These aren't
one-size-fits-all solutions... your application's need for cooked touch
events may be very different from mine.  To me, this screams 'do it in the
image!' (optionally with a plugin for performance... down the road...
maybe. [1])

[1]  I think you're going to find that getting 64-bit ARM JIT support,
some sort of process-level multi-core support, and taking advantage of the
GPU are far more important if you want your image and application to feel
like they're from this century on mobile devices.  I've been trying to use
Squeak/Cuis on mobile devices since the CogDroid days, and it's not been a
good experience due to these issues.  (Utilizing the GPU helps, but that
alone often isn't sufficient.)

On Sat, Sep 19, 2020 at 3:37 PM <ken.dickey at whidbey.com> wrote:

>
> My intuition is that some window systems will give cooked/composite
> gesture events, whereas with others we will need optional Smalltalk code
> or a plugin to recognize and compose gesture events.
>
> One thing that has bothered me for some time is the difficulty in
> explaining how users interact with input events and the amount of
> cooperation that must be agreed between components.  [E.g. drag 'n drop.]
>
> I think some of this is elegant ("I want her/him & she/he wants me") but
> what I am looking for is a way to express interest in pattern roles.
>
> I want to specify and recognize gesture patterns and object roles within
> each pattern.
>
> So match (composed) gesture to pattern within a sensitive area to get:
>    open/close
>    drag 'n drop (draggable=source, droppable=target; object-for-drag,
> object-for-drop)
>    expand/collapse (maximize/minimize)
>    grow/shrink (pinch, press+drag)
>    rescale (out/in)
>    rotate
>    stretch/adjust
>    reposition
>    scroll (swipe)
>    select (tap, double-tap, select+tap)
>
> The "same" gesture could map differently depending on the "sensitive
> area", e.g. open/close vs maximize/minimize; grow/shrink vs rescale vs
> stretch vs reposition.
>
> Sensitive areas could compose as with mouse sensitivity.  Sensitivity &
> role(s) given to any morph.
>
> Redo pluggable buttons/menus/.. in new pattern.
>
> I know this is both a code and a cognitive change, but I think easier to
> explain = more comprehensible.  I think it could be more compactly
> expressive.
>
> -KenD
>