Back to AI work.

Joshua 'Schwa' Gargus schwa at cc.gatech.edu
Mon Aug 19 04:51:07 UTC 2002


On Mon, Aug 19, 2002 at 12:21:00AM -0700, Alan Grimes wrote:
> > On Sunday 18 August 2002 01:05 pm, Alan Grimes wrote:
> 
> > Depending on what you want your users to be able to manipulate, you
> > may well find that separate Morphs make more sense. If you just want
> > to draw something on a big background, you can subclass PasteUpMorph
> > and override its drawOn: method (and make a new Project subclass that
> > uses one of these). Look at the implementors of drawOn:.
> 
> k; /me will look into these.
> 
> > Form isn't, itself, a UI object. It's just a representation of some
> > bitmap that can be drawn (it does no user interaction). There is
> > ImageMorph, which is a Morph with its own Form. These may make more
> > sense to use in your UI. You can receive and respond to user input
> > events (mouse, keyboard), you can implement animation or other
> > periodic behavior, and you can re-position the image within the
> > world.
> 
> I fear that I have not succeeded in conveying what I am trying to do with
> this. I am not yet worrying about constructing the "user" interface, but
> rather I need to achieve a very odd capability. That is, when I begin to
> write my AI classes, I will need them to be _IN FRONT OF_ the user
> interface. I need to be able to perform a pixel-by-pixel analysis of
> whatever is presented on the screen. The stuff _BEHIND_ the user
> interface will be extremely mundane aside from the security
> restrictions. 
> 
> The software that I am writing today will not be, in any direct way,
> driven by the AI but rather USED. I know that I am trying to catch a
> whale on my first day fishing but it takes something that big to hold my
> interest. ;)

So right now, you're interested in creating a framework that will act
as your system's senses, gathering the raw input that will be synthesized
into a user model.  Some other part of the system could then decide how
to react to the user's actions.

Cool!

Given this aim, I still agree with using Morphs as the input
primitives of interest.  Using Forms would basically require computer
vision algorithms to make sense of the pixel patterns produced in
response to user input.  If computer vision is your passion, then go
for it, but otherwise Morphs provide a shortcut around this daunting
approach.
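To make this concrete, here's a minimal sketch of the Morph route: instead of analyzing pixels, a morph can simply tell you when the user acts on it.  This assumes Squeak's standard Morphic hooks (handlesMouseDown:, mouseDown:, and the event's cursorPoint are real, as far as I know); SensorMorph is just a made-up name for illustration.

```smalltalk
"Hypothetical example: a morph that reports user input symbolically,
 rather than leaving you to reverse-engineer it from screen pixels."
Morph subclass: #SensorMorph
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'AI-Senses'.

SensorMorph >> handlesMouseDown: evt
	"Declare interest in mouse-down events."
	^ true

SensorMorph >> mouseDown: evt
	"Log the interaction in terms a user model could consume."
	Transcript
		show: 'mouse down on ', self printString,
			' at ', evt cursorPoint printString;
		cr
```

The same pattern extends to mouseMove:, mouseUp:, keyboard events, and drop events, so the "senses" framework can be built entirely from method overrides like this one.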

Morphs carry many bits of information that make sense to a person,
such as their position and colors.  Text-containing morphs may contain
sensible Spanish sentences.  These properties are easy to access
simply by asking the morphs sensible questions.
Without any processing at all, you can note such things as "the user
picked up the blue star and dropped it on the sketch".  Since morphs
can also be named by the users, the previous quote might end with
"... dropped it on the sketch named Evening Sky".  Sounds like paydirt
to me!
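For example, something like the following should work in a Squeak workspace (StarMorph, setNameTo:, and externalName exist in Squeak's Morphic as I recall, but treat the exact selectors as assumptions):

```smalltalk
"Hypothetical example: interrogate a morph directly for
 human-meaningful properties, no pixel analysis needed."
| star |
star := StarMorph new.
star color: Color blue.
star setNameTo: 'Evening Star'.
Transcript
	show: star externalName; cr;        "its user-given name"
	show: star color printString; cr;   "its color"
	show: star position printString; cr "its position in the world"
```

These are exactly the ingredients of a report like "the user picked up the blue star and dropped it on the sketch named Evening Sky".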

This is a fun idea.  I'm looking forward to hearing more of your thoughts.

Joshua


> To try to capture where I am in my design better, I threw together this
> diagram. I hope that viewing this will not be inconvenient as I had to
> copy it over via floppy from my leenooks machine. (damnit, why the hell
> do I have to umount in the early 2000's!!!)...
> 
> The best way of conceptualizing what I am trying to do is probably by
> saying:
> 
> 	"I am trying to write a user emulator and to test that emulator I need
> to produce a user interface that can act as a medium between the user
> and the user emulator in development." 
> 
> 
> -- 
> Linux has more source code than my brain.
> http://users.rcn.com/alangrimes/




More information about the Squeak-dev mailing list