Back to AI work.

Alan Grimes alangrimes at starpower.net
Tue Aug 20 07:21:19 UTC 2002


Ned Konz wrote:
> > with this. I am not yet worrying about constructing the "user"
> > interface, but rather I need to achieve a very odd capability. That
> > is when I begin to write my AI classes, I will need them to be _IN
> > FRONT OF_ the user interface. I need to be able to perform a pixel
> > by pixel analysis of whatever is presented on the screen. The stuff
> > _BEHIND_ the user interface will be extremely mundane aside from
> > the security restrictions.

> I don't understand why, in general, you need or want to do a
> pixel-by-pixel analysis when every change to Squeak's pixels is
> caused by the execution of some higher-level action.

> Wouldn't it be easier and more useful to capture higher-level
> actions?

-- and from another post: 
> Given this aim, I still agree with using Morphs as the input
> primitives of interest.  Using Forms would basically require computer
> vision algorithms to try to make sense of the patterns made by
> pixels reacting to user input.  If computer vision is your passion,
> then go for it, but otherwise Morphs provide a shortcut around this
> daunting approach.

om

I call the type of system you propose a "Conceptual user interface"
(CUI). Ten years from now I will be working on neural interface devices
based on that idea. The reason I am not attempting it now is very
simple: I don't know how to design it. I will need to gain an
understanding of the functioning of a concept-processing system (an AI)
before I can begin to link it to computer programs. Such a system should
enable "concept programming" that will allow the direct transfer of
conceptual structures into code elements... -- the ultimate in
programming.

However, this is the year 2002 and I have much more modest ambitions. I
seek to develop a number of ideas I have in cybernetics and AI. Because
a key goal is to establish communication with an AI entity such that it
can be educated and used, I can't make its intelligence too alien right
off the bat. I need to eliminate as many differences as possible between
the AI and the human user. 

Ideally I would create an android and basically implement what has been
reverse-engineered from the brain and then try to fill in the gaps. I
don't have that kind of funding... I hardly have any funds at all.
(urk!)
If I am to make this an internet open source project, the cost of
participation must be very low or nonexistent. I must therefore create
a common medium through which communication can develop. It is important
that the AI's experience of the situation be as close to that of the
user as possible -- the AI will be highly simplified but have similar
functionality.

One of the major problems of saying "40x70 rectangle at 90@50" is that
the AI will have absolutely no idea what a rectangle is. It must gain
this understanding through experimentation and observation. The biggest
failing of the Cyc project, which aims to achieve common sense, is that
the AI's system starts off with an 'atom' size equal to the English
language. That is, it has little or no hope of ever moving beyond/under
the English language to a deeper understanding of things. It is hard to
argue that the pixel, or a representation thereof, is not the minimal
atom of video output.
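To make that point concrete, here is a minimal sketch (in Python, with
invented screen dimensions) contrasting a symbolic rectangle description
with the raw pixel "atoms" a tabula-rasa learner would actually receive:

```python
# Hypothetical sketch: the same rectangle, described symbolically
# versus as the flat pixel buffer an AI would actually observe.

WIDTH, HEIGHT = 200, 200

def render_rect(x, y, w, h):
    """Rasterize a filled rectangle into a flat framebuffer of 0/1 pixels."""
    screen = [0] * (WIDTH * HEIGHT)
    for row in range(y, y + h):
        for col in range(x, x + w):
            screen[row * WIDTH + col] = 1
    return screen

# Symbolic description: a word and four numbers, meaningful only
# to someone who already knows what a rectangle is.
symbolic = ("rectangle", 90, 50, 40, 70)

# Pixel description: what the learner gets instead.
pixels = render_rect(90, 50, 40, 70)
print(sum(pixels))   # 2800 lit pixels = 40 * 70
```

The symbolic form presupposes the very concept the AI lacks; the pixel
form presupposes nothing beyond the stream itself.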

The same idea is at the heart of my AI design. The AI will run on
nothing beyond strings of numbers. The idea is to do away with all
symbols and let the system be programmed by the environment. The input
and output streams form the semantic basis of the system; everything
else in the system is an abstraction of one or the other. These
abstractions are what other AI researchers call patterns, though I
strongly discourage the use of that term in this context, especially
in the context of perception.
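As one possible reading of this stream-first design (the function name
and threshold below are my own invention, not part of the proposal), an
"abstraction" can be treated as nothing more than a recurring
subsequence detected in a raw numeric stream:

```python
# Hypothetical sketch: the system sees only numeric streams, and an
# "abstraction" is a recurring subsequence -- no built-in symbols.

from collections import Counter

def find_abstractions(stream, length, min_count=2):
    """Return subsequences of the given length that recur in the stream."""
    windows = Counter(tuple(stream[i:i + length])
                      for i in range(len(stream) - length + 1))
    return {w: n for w, n in windows.items() if n >= min_count}

# The repeated motif (1, 2, 3) is the kind of regularity the system
# would abstract, without being told anything about what it "means".
stream = [1, 2, 3, 9, 1, 2, 3, 7, 1, 2, 3]
print(find_abstractions(stream, 3))   # {(1, 2, 3): 3}
```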

The AI starts out as a pure or nearly pure tabula rasa. As it explores
its world, it gains knowledge. I project that the space complexity of
this system will grow logarithmically with respect to the number of
things learned. However, I am uncertain about the time complexity. Given
the difficulties encountered during the development of logic programming
-- on which this is based to a great extent -- the time complexity may
turn out to be quite bad. =(

It is clear that repeating a previously trained behavior can occur in
roughly linear time with respect to the output, much as in FORTH;
however, generating useful behavior abstractions may prove difficult.
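To illustrate why replay is linear (the names and behaviors below are
invented for the sketch): if a learned behavior is a FORTH-style
dictionary entry whose steps are either primitive outputs or references
to other learned behaviors, replaying it visits each emitted token once,
so the cost is linear in the length of the final output.

```python
# Hypothetical sketch of FORTH-style behavior replay.

def replay(name, dictionary, out):
    """Expand a named behavior, appending primitive tokens to out."""
    for step in dictionary[name]:
        if step in dictionary:      # a nested, previously learned behavior
            replay(step, dictionary, out)
        else:                       # a primitive output token
            out.append(step)
    return out

# "square" reuses "edge" four times, the way a FORTH word reuses others.
dictionary = {
    "edge":   ["draw", "turn"],
    "square": ["edge", "edge", "edge", "edge"],
}
print(replay("square", dictionary, []))
# ['draw', 'turn', 'draw', 'turn', 'draw', 'turn', 'draw', 'turn']
```

The hard part, as noted above, is inventing entries like "square" in the
first place, not running them.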

It is unclear how well it will do at math... It will likely have
performance characteristics similar to humans in that regard,
occasionally coming up with things akin to 2+2=5. As the project
progresses, the development of a cybernetic interface (an interface
between a cybernetic system and conventional computer code) becomes a
bigger priority, so that the mind can better leverage the underlying
computer.

-- 
Linux has more source code than my brain.
http://users.rcn.com/alangrimes/



More information about the Squeak-dev mailing list