On interfaces...

Joshua "Schwa" Gargus schwa at cc.gatech.edu
Fri May 25 19:18:38 UTC 2001


On Fri, May 25, 2001 at 10:41:56AM -0400, Kevin Fisher wrote:
> On Fri, May 25, 2001 at 09:59:41AM -0400, Joshua Schwa Gargus wrote:
> 
> [snip]
> 
> > > Is there a better interface we can
> > > create for Squeak where we can do everything we do with text in other
> > > ways, with different input devices and methods?  
> > 
> > It's one thing to examine interfaces with a critical eye, and another
> > to discard things that work well.  People are very good at text.  To
> > take an extreme example, look at a mathematical proof.  It very concisely
> > conveys an unambiguous relationship between mathematical entities.  It
> > does this very well (especially in terms of conciseness).  If you try to
> > say the same thing in plain English, it is much more verbose.  If you
> > try to say it in icons, good luck.
> > 
> 
> Well, perhaps I should have been a bit clearer...I don't think we should
> throw away the text interface.  The main gist was, are there other
> ways to do what we do with text?
> 
> For example, could we create a generalized system for creating/manipulating/
> using objects, of which the text interface we all know so well is just
> one way of doing it?  Then we could attach all kinds of different
> interfaces to Squeak...pen based, voice based, mouse based, etc.  This
> would be in contrast to taking the aforementioned interfaces and making
> them work on top of just a mouse/text based interface.

So when you say text, you're really talking about text _input_, and 
specifically text input with a keyboard.  I can agree that we could probably
find ways around using keyboards.  Two that come to mind are character
recognition and speech.  However, I don't think it's as easy to get away
from text _output_.  In general, I'd rather see text on the screen than
hear it spoken to me.

> 
> > Programming is not so different from a mathematical proof, although it
> > is at a lower (less abstract) level than most proofs.  The aim is to
> > specify unambiguous behavior in as concise and understandable a form as
> > possible.
> > 
> > Basically, my position is that while we should always have our eyes open
> > for new ways to augment our thinking processes, we know FAR too little to
> > begin to come up with a concrete plan to replace text in our user 
> > interfaces.
> > 
> > On the other hand, if you're willing to change the 'everything' in your
> > question to 'some of the things', then I would be much less pessimistic
> > about the near-term prospects of your endeavour.
> 
> Yes, I think this is fair.  "Some of the things" is more realistic.
> 
> > 
> > > It seems that the trend
> > today is to force-fit everything into a common metaphor...on Windows,
> > > everything must be a 'document view'.  We force all kinds of data to
> > > conform to a single metaphor...and this, in the end, forces us to 
> > > change our data.
> > > 
> > In general, computers still follow the old office clerk metaphor...
> > desktops, trash bins, keyboards, files.  Any 'new' input interfaces
> > > are always turned into mouse emulators and paper simulators.
> > 
> > This is, unfortunately, true.
> 
> Of course, at one time, the 'mouse' was a concept unheard of....and now
> it's an integral part of the whole office clerk metaphor.
> 
> > 
> > > WinCE is a great example of this...they shrunk the desktop metaphor down
> > > to a palmtop/stylus device without even asking if that metaphor made
> > > SENSE on such a device.  I don't know about anyone else, but
> > > doing stuff like press-and-hold to get the right-click menu is pretty
> > > counter-intuitive to me.
> > 
> > I can't think of too many computer interfaces that are 'intuitive'.  They
> > all have to be learned.  
> > 
> > Genie uses basically the same trick.  If you put the pen down in a text
> > window without moving it for some time (200 milliseconds?  I can't remember),
> > then it stops trying to recognize a gesture and allows you to select text.
> > 
> > Intuitive?  Of course not.  
> > 
> > Effective once you learn it?  Not too bad at all.
> 
> 
> True, you can learn it.  I still don't find it terribly effective,
> though.

You don't find what too effective?  Genie?
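
If it's the press-and-hold timeout you mean, the mechanism itself is
simple enough to experiment with in Squeak.  A rough sketch, using
Morphic-style alarms; downPoint and gestureMode would be instance
variables, and penStillNear:, beginTextSelection, and the rest are
invented selectors, not Genie's actual code:

    "Switch from gesture recognition to text selection when the
     pen stays put past a threshold.  All names are hypothetical."
    penDownAt: aPoint
        downPoint := aPoint.
        gestureMode := true.
        self addAlarm: #maybeStartSelection after: 200

    maybeStartSelection
        (gestureMode and: [self penStillNear: downPoint])
            ifTrue: [gestureMode := false.
                     self beginTextSelection]

The nice thing about having this in Squeak is that the 200 ms
threshold (or the movement tolerance) is a one-line change.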

> As an example, a couple years back my boss and I got our hands
> on our first batch of WinCE devices.  We were messing with the explorer
> when we found something we wanted to delete...it took us a full half
> hour of frustration just to discover the 'click-and-hold' right-click
> menu.
> 
> (Yes, the half-hour of pain could perhaps have been avoided by RTFM... :)
> 

Well, presumably you could have a pattern recognizer watch newbies and
detect when someone is spinning their wheels like this.  I'm not really
familiar with the required technology, though.
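
Even something naive might catch the obvious cases, though.  A sketch
(every name here is invented, and a real recognizer would be far
subtler):

    "Collect taps that didn't trigger any command; if several pile
     up in the same small region, the user is probably stuck, so
     offer a hint.  failedTaps is an OrderedCollection instance
     variable; offerHintNear: is an invented selector."
    noteTap: aPoint triggeredCommand: aBoolean
        aBoolean ifTrue: [^ failedTaps := OrderedCollection new].
        failedTaps add: aPoint.
        (failedTaps count: [:p | (p dist: aPoint) < 20]) >= 5
            ifTrue: [self offerHintNear: aPoint]

It wouldn't have saved your half hour, but it shows the flavor.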

> > 
> > I think that context-sensitive menus are a good idea (although the WinCE
> > implementation is clearly not the last word).  Assuming that we agree on
> > this point, how else should this functionality be accessed on a
> > WinCE device?  You certainly can't highlight an item, and then move your
> > stylus up to a 'contextual menu' button that would bring up the menu for
> > the highlighted item; this would be absurd from a Fitts's law point of
> > view.
> 
> True; I'm certainly only at the 'thought' stage of things. :)  I 
> don't have any idea what the alternatives might be, but I'm certainly
> giving it more thought these days.

Keep it up!  You've already spawned a thread that has others thinking about
it too!

> > 
> > > 
> > > As an example, on a palmtop device (no keyboard, just a stylus) I think
> > > it would be great to be able to "program" it in a graphical manner..for
> > > example I connect my "address book" object to my "IR port" object and
> > > enable the sending of my address book over the infrared emitter.
> > 
> > It is good that you put "program" in quotes.  It is programming, but not
> > base-level programming.  This would be reasonable to do on a PDA.
> 
> Yes, it wouldn't be programming per se, but rather creating a flexible
> object environment.  PDAs tend to be rather rigid application launchers,
> rather than providing full environments (especially the Palm).

This is definitely the way to go, especially now that PDA processors are
powerful enough to handle Squeak.  It's nice to be able to have Squeak
on both your PDA and desktop, because you can write tools to modify your
PDA environment on your desktop, and then download the functionality into
your PDA.
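
And the wiring itself needn't be fancy underneath.  Dropping one
object onto another could bottom out in a simple double dispatch;
here's a sketch, where AddressBook, IRPort, and every selector are
made up for illustration:

    "Drop-driven wiring: the transport asks the dropped object for
     a sendable form.  All names invented."
    IRPort >> acceptDroppedObject: anObject
        self transmit: anObject asSendableForm

    AddressBook >> asSendableForm
        ^ self entries collect: [:each | each asVCard]

Any object that can answer #asSendableForm becomes beamable for free,
which is exactly the flexibility a rigid application launcher can't
give you.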

> 
> > 
> > However, if your system doesn't have this functionality, and you want to
> > provide it (i.e., write code so that data objects dropped on transport
> > objects know how to do something sensible), then you will almost certainly
> > run into problems programming it on a PDA.  There is just too much 
> > detailed information, and too little screen space.
> > 
> > I'm not saying that this will never be possible.  Our PDAs will get higher-
> > resolution screens and have enough computing horsepower to support
> > zoomable user interfaces.  Visual programming languages will learn to
> > make more efficient use of space.  Etc., etc.  However, it is out of reach
> > for the foreseeable future.
> > 
> > Joshua
> > 
> 
> I find the whole subject to be quite interesting...there's a lot of attention
> being focused on PDAs and wearables these days, and this has naturally
> brought attention to pen-based input.  I think (I hope) that this will
> encourage more "thinking different" about interfaces in general.
>
> Anyway, I suppose I was just musing aloud...I hear a lot of complaining
> about today's desktops and interfaces, and it seems to me that Squeak is
> the ideal place to experiment and explore new ideas.

I couldn't agree more.

> 
> (On the subject of wearables, Steve Mann at UofT has some really interesting
> and bizarre stuff in the works...no mice, pens or office clerks!)

Steve Mann is cool.  I can't wait until I can get a micro video camera 
that fits in the frame of my regular glasses.

Joshua




