jpython anyone?

russell.allen at firebirdmedia.com
Fri Nov 24 04:27:46 UTC 2000


Hi Henrik,

The area that I'm working on is legal AI, which is probably somewhat
different from general AI, but does (I think) have some overlap.

Some of the earliest attempts to model the process of legal reasoning
did follow the path of talking to the computer in 'Math-like languages'
- the Imperial College of Science and Technology in London, for
example, translated the British Nationality Act into PROLOG.
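
To give a feel for what that kind of encoding involves, here is a
hypothetical sketch - the Imperial College work was in Prolog and far
larger, and the provision below is paraphrased, so treat this purely as
an illustration of the idea.  A section of the Act becomes an explicit
rule over recorded facts:

    # Hypothetical illustration (Python), not the actual formalisation.
    def acquires_citizenship_by_birth(born_in_uk, parent_citizen_at_birth):
        # Roughly: "a person born in the UK is a British citizen if, at
        # the time of the birth, a parent is a British citizen or is
        # settled in the UK" (paraphrased).
        return born_in_uk and parent_citizen_at_birth

    # Facts about a made-up person:
    print(acquires_citizenship_by_birth(True, False))   # prints False

The catch, of course, is that the rule is now in the programmer's words
rather than the Act's - which is exactly the problem I come back to
below.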

In the field of law it very quickly became apparent that this approach
was getting nowhere.  Leaving aside the more complex issues of the
underlying legal reasoning process (which isn't pure logic), the main
problem was that of syntax.

The substantial difference between the syntax of law and the syntax of
maths meant that the costs of creating and maintaining the rulebases
were prohibitive, and that even if you did bear them, you quickly ran
into the problem that the exact phrasing of the legal materials was
vital.  You can't paraphrase law without causing potential problems.

As a result, most of what I consider to be the successful applications
of AI to law were based upon systems designed to allow the use of the
original structure and phrasing of the law.  This includes large-scale
commercial applications, and (incidentally) some test applications
built using ROSIE.

I would argue that this need for a system that allows the use of the
original structure and phrasing of the law outweighs the downside of
confusing users and developers with a syntax which looks more intuitive
than it is.

On the flipside, using Smalltalk has greatly influenced my attitude
towards moving beyond simple text editors and compilers to supportive
development environments.  Hopefully these environments can help users
through the immensely difficult task of attempting to think like a
computer.

Cheers,

Russell

Henrik Gedenryd <Henrik.Gedenryd at lucs.lu.se> wrote:
> 
> > I had expected to see more in this direction over the years. A famous
> > (in its time and place) precursor with this approach was ROSIE, a
> > very nice expert systems language done at RAND in the late 70s and
> > early 80s. Have you seen it?
> 
> There was some by now classical research done on natural language interfaces
> in the early 80s. Put simply, the general conclusion was that it is a bad
> idea to make a computer mimic humans, because users will then attribute
> human capabilities to it that it doesn't have. (Think ELIZA.) This is a
> basic social principle that we need in order to interact with others (we
> can't read their minds). When we then attribute too much to the computer,
> the breakdown that follows is very harsh, much more costly than what is
> gained by familiarity when it works.
> 
> It is much better to present the computer so that we will make reasonable
> attributions. Math-like languages make us regard it as a math machine, which
> is not that unreasonable.
>
> But of course, this didn't affect AI researchers much. The real reason we
> haven't heard about progress in AI is that there hasn't been any to speak
> of. The small successes in restricted, well-chosen domains have never
> generalized, be it natural language processing, computer vision, symbolic
> induction, or whatever. The term "AI winter" was coined in the Lisp
> community in the late eighties. Today it seems that Bill Gates is about the
> only one who still thinks that AI will be "the next big breakthrough". For
> the last 40 years, AI news stories have begun like this: "Soon they will be
> here--computers that xxxx". "Don't hold your breath" is one of my favorite
> American expressions.
> 
> > The trick in most of these systems is not how hard it is for the
> > system to recognize a restricted English syntax, but how hard is it
> > for a random human to learn to write in the restricted syntax.
> 
> Would another way to put this be that the supposed advantage, being easier
> to learn than more standard programming syntaxes, is not there? "Restricted"
> here means that it looks like it's ordinary language but it isn't. It
> doesn't let people use ordinary language.
> 
> In fact, a main feature of natural language is not the syntax--but the fact
> that most of the time we needn't be very precise at all with the syntax to
> be understandable. _This_ is what a "natural" syntax suggests to a user. And
> this is not what you want a user/programmer to believe, right?
> 
> Syntax is a big deal in programming languages but not real ones; this is a
> point that CS researchers miss all the time. Chomsky's theories, for example,
> apply much better to programming languages than real ones. With all the
> research on syntax, Markov chains (statistics about what words occur close
> to each other) are still better predictors of word order in "real" natural
> language than any theory of syntax.
> 
> The hard problem in programming is figuring out what to do--you have to
> understand the domain of available means (the language capabilities) as well
> as the problem you are solving. Addressing syntax leaves this alone;
> it only concerns how you express the solution once you have figured it out.
> Programming languages are just meant to be means for expressing solutions.
> What you need to address is the process of reaching solutions. The
> interactive environment of Smalltalk is the most important advance that has
> been made so far: a compiler merely processes the solution, whereas an
> interactive environment supports you while working out a solution.
> 
> So to condense my point: we need to shift the focus from the form of the
> expressed solution, to the process which produces it (and tools to support
> this). But the nature of such cognitive tools has only begun to be addressed
> in the last 5-10 years.
> 
> 
> Oops, there I did that rant again.
> 
> Henrik

----------------------------------------
Russell Allen

russell.allen at firebirdmedia.com

----------------------------------------




