[squeak-dev] Dynabook and interactive content
hilaire at drgeo.eu
Thu Jan 17 11:08:47 UTC 2019
Hi Ron, thanks for your feedback.
In 2005, when I ported Dr. Geo from C++ to Squeak, I investigated shape
gesture recognition with the help of the Genie package. As you wrote,
you can recognize drawn shapes such as circles and lines, or move the
canvas in one direction or another. The gesture recognition was not
based on neural network layers; I don't think it could recognize shapes
the way AutoDraw does.
However, in the context of Dr. Geo, most of the time you will not
instantiate a circle out of nothing, so it would mean recognizing
additional gestures to designate the center (a free point, a point on a
line, or the intersection of two lines) and another point on the circle,
or a value for the radius, or a segment as the radius base, etc.
All perfectly doable, but it would require the user to learn the
gestures and to have an ad-hoc pen-based interface. I found these two
conditions unlikely to be met: the users are mainly casual users (kids
at school) operating a mouse-driven computer. Another issue I found: the
gestures are implicit, so the user has no way to discover them without
going through a tutorial. In an iconic UI, you convey explicit
information with tooltips and toolbar messages when the user indicates
he wants to draw a circle. OK, you could go through this process with a
gesture-based UI, with a kind of hybrid UI. Anyway, I did not go very
far, although the idea was appealing.
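To make the multi-gesture idea concrete, here is a minimal sketch (in Python, with hypothetical names; this is not Genie or Dr. Geo code) of how a circle tool could accumulate the gestures described above, designating a center and then a radius, before the circle can be instantiated:

```python
class CircleBuilder:
    """Accumulate gestures until enough information exists for a circle."""

    def __init__(self):
        self.center = None
        self.radius = None

    def feed(self, gesture, payload):
        # gesture is "center" (payload: an (x, y) point) or
        # "radius" (payload: a value, a point, or a segment length).
        if gesture == "center":
            self.center = payload
        elif gesture == "radius":
            self.radius = payload
        return self.complete()

    def complete(self):
        # True once both a center and a radius have been designated.
        return self.center is not None and self.radius is not None


builder = CircleBuilder()
builder.feed("center", (0, 0))   # not yet enough information
done = builder.feed("radius", 5) # True: the circle can now be created
```

The point of the sketch is only that each extra gesture adds state, which is why a gesture-only circle tool needs several gestures, not one.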
Now, you wrote about gestures to create code or to retrieve a template
from a corpus; for example, drawing a map retrieves a code template to
create an interactive geographic map, drawing a timeline retrieves a
code template to create an interactive temporal/chronological line,
drawing a triangle retrieves a code template to create an interactive
geometric sketch, drawing a speaker/wave retrieves a code template to
create a sound recorder/player, and so on.
Is that what you meant?
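That retrieval step could be as simple as a dispatch table from a recognized shape label to a template. A minimal sketch in Python (the labels and template names are illustrative, not any real Dr. Geo or AutoDraw API):

```python
# Map a recognized gesture label to an editable code template.
TEMPLATES = {
    "map":      "createInteractiveMap()   # geographic map template",
    "timeline": "createTimeline()         # chronological line template",
    "triangle": "createGeometricSketch()  # geometry sketch template",
    "speaker":  "createSoundPlayer()      # recorder/player template",
}


def template_for(shape_label):
    """Return the code template matching a recognized gesture,
    or an empty string when the gesture is unknown."""
    return TEMPLATES.get(shape_label, "")


print(template_for("timeline"))
```

The recognizer's only job here is to produce the label; everything after that is an ordinary lookup, which is what would make such a corpus of templates easy to extend.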
On 12/01/2019 at 21:27, Ron Teitelbaum wrote:
> Thanks for the post. Very interesting reading. There is a third
> option between dense GUI and code editing.
> You mentioned an assistant but I think there is the potential to have
> an advanced agent that builds interactive diagrams based on user
> input. As an example, drawing a large cross builds a
> Cartesian coordinate system. Pulling arrows changes the scale, drawing a
> circle creates a circle, pulling the edge scales the circle, drawing a
> line snaps to the grid. Adding a point creates a point. Typing a letter
> labels the point.
> Having this interaction create code that can be edited, and maybe also
> showing similar available templates, would also be very useful. As we get
> better at recognizing user intention, using those techniques to
> enhance user-computer interaction should enable us to design much
> easier interfaces for teachers.
> 1. https://www.autodraw.com/ something like this. Draw an image and
> select a match from the top to change your image.