On interfaces..

Joshua "Schwa" Gargus schwa at cc.gatech.edu
Fri May 25 13:59:41 UTC 2001


Hi Kevin,

On Fri, May 25, 2001 at 09:08:15AM -0400, Kevin Fisher wrote:
> 
> On the subject of pen-based interfaces, I've had some general thoughts
> and questions about interfaces overall.
> 
> We've been discussing making the Squeak environment more friendly towards
> stylus-based input.  I think it is safe to say that the standard interface
> needs some massaging to make it work better with a pen.

That is safe to say.

> The current Squeak interface is very text-based, even under Morphic.
> When we create new objects, we do it textually, ie:
> 
> foo := SomeObject new.
> 
> We then access and manipulate the object textually as well:
> 
> foo someMessage: 'testing'.
> 
> In Morphic, I have a few other options available...I can open an inspector
> window on the object which gives me...a container for more text.
> 
> Now, Morphs are a bit different...these can be inspected and manipulated
> non-textually with menus and halos to a certain extent.
> 
> However, in general, working _in_ Squeak -- creating objects, instantiating
> objects, combining objects -- is still done textually.  You might say
> that the primary interface to the continuum of objects is still through
> the keyboard.  The Workspace, the Browser, the inspectors....all are 
> containers for text.
> 
> Now MY question...can we do better?  

Well, I don't think we can get away from text altogether.  It depends on
the level of control you want over the system.  If you want control at
all levels (think garbage collection), then it will be tough to get away
from text (see below).

> Is there a better interface we can
> create for Squeak where we can do everything we do with text in other
> ways, with different input devices and methods?  

It's one thing to examine interfaces with a critical eye, and another
to discard things that work well.  People are very good at text.  To
take an extreme example, look at a mathematical proof.  It conveys an
unambiguous relationship between mathematical entities, and it does so
very concisely.  If you try to say the same thing in plain English, it
becomes much more verbose.  If you try to say it in icons, good luck.

Programming is not so different from a mathematical proof, although it
operates at a lower (less abstract) level than most proofs.  The aim is
to specify unambiguous behavior in as concise and understandable a form
as possible.

Basically, my position is that while we should always have our eyes open
for new ways to augment our thinking processes, we know FAR too little to
begin to come up with a concrete plan to replace text in our user 
interfaces.

On the other hand, if you're willing to change the 'everything' in your
question to 'some of the things', then I would be much less pessimistic
about the near-term prospects of your endeavour.

> It seems that the trend
> today is to force-fit everything into a common metaphor..on Windows,
> everything must be a 'document view'.  We force all kinds of data to
> conform to a single metaphor...and this, in the end, forces us to 
> change our data.
> 
> In general computers still follow the old office clerk metaphor...
> ..desktops, trash bins, keyboards, files.  Any 'new' input interfaces
> are always turned into mouse emulators and paper simulators.

This is, unfortunately, true.

> WinCE is a great example of this...they shrunk the desktop metaphor down
> to a palmtop/stylus device without even asking if that metaphor even
> made SENSE on such a device.   I don't know about anyone else, but
> doing stuff like press-and-hold to get the right-click menu is pretty
> counter-intuitive to me.

I can't think of too many computer interfaces that are 'intuitive'.  They
all have to be learned.  

Genie uses basically the same trick.  If you put the pen down in a text
window without moving it for some time (200 milliseconds?  I can't
remember), then it stops trying to recognize a gesture and allows you to
select text.

Intuitive?  Of course not.  

Effective once you learn it?  Not too bad at all.
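
In Morphic terms the trick is just a pen-down timeout.  Here is a
minimal sketch, assuming instance variables mode and downPoint,
Morphic's addAlarm:after:/removeAlarm: machinery, and hypothetical
selectors for the recognizer and the text selection (this is not
Genie's actual code):

mouseDown: evt
	mode := #undecided.
	downPoint := evt cursorPoint.
	"If the pen doesn't move within 200ms, give up on gestures."
	self addAlarm: #beginTextSelection after: 200

mouseMove: evt
	mode == #undecided ifTrue: [
		(evt cursorPoint dist: downPoint) > 4 ifTrue: [
			"The pen moved first: it's a gesture after all."
			self removeAlarm: #beginTextSelection.
			mode := #gesturing]].
	mode == #gesturing ifTrue: [self recordGesturePoint: evt cursorPoint]

beginTextSelection
	"The alarm fired before the pen moved: select text instead."
	mode == #undecided ifTrue: [mode := #selecting]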

I think that context-sensitive menus are a good idea (although the WinCE
implementation is clearly not the last word).  Assuming that we agree on
this point, how else should this functionality be accessed on a WinCE
device?  You certainly can't highlight an item, and then move your
stylus up to a 'contextual menu' button that would bring up the menu for
the highlighted item; this would be absurd from a Fitts's law point of
view.
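
(For reference, Fitts's law models the time to acquire a target as
roughly

	T = a + b log2(D/W + 1)

where D is the distance to the target and W is its width.  A dedicated
menu button sits far from wherever the pen happens to be, so D is large
on every invocation; popping the menu up at the pen's current position
makes D nearly zero.)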

> 
> As an example, on a palmtop device (no keyboard, just a stylus) I think
> it would be great to be able to "program" it in a graphical manner..for
> example I connect my "address book" object to my "IR port" object and
> enable the sending of my address book over the infrared emitter.

It is good that you put "program" in quotes.  It is programming, but not
base-level programming.  This would be reasonable to do on a PDA.
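
The user-visible wiring can literally be a single message send.  A
minimal sketch, where every class and selector name is hypothetical:

AddressBookMorph >> droppedOnto: aTransportMorph
	"Double dispatch: let the transport decide how to move our data."
	aTransportMorph transmit: self asVCardText

IRPortMorph >> transmit: aString
	"Beam the string out over the infrared emitter."
	self irDevice send: aString

Dragging one morph onto the other reduces to that single send; all of
the real work hides inside transmit: and its relatives.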

However, if your system doesn't have this functionality, and you want to
provide it (i.e., write code so that data objects dropped on transport
objects know how to do something sensible), then you will almost certainly
run into problems programming it on a PDA.  There is just too much 
detailed information, and too little screen space.

I'm not saying that this will never be possible.  Our PDAs will get
higher-resolution screens and have enough computing horsepower to support
zoomable user interfaces.  Visual programming languages will learn to
make more efficient use of space.  Etc., etc.  However, it is out of
reach for the foreseeable future.

Joshua


> (Now I'd like to say that all of this emerged from my fiddling with
> palmtop environments...but much of the credit goes to Ted Nelson.  
> If I've learned anything from Ted's writings it's that we should constantly
> challenge the interface metaphors we take for granted.  You may not
> agree with him, but he _does_ make you think twice about things.)








