[squeak-dev] reviewing ChatGPT's understanding of Smalltalk

Florin Mateoc florin.mateoc at gmail.com
Sun Jan 22 05:47:42 UTC 2023


I think one obvious element lacking in ChatGPT as an intelligent being is a
model of the world. And sure, without such a model, it can never grow
beyond mimicry.
And, sure again, such a model will not emerge from parameter adjustments
that happen as the neural net gets trained by ingesting more and more data,
whether curated/labelled or not. But I don't think that matters.

I think it would be enough if the neural net were complemented with an
imperative program that explicitly sets out to build a persistent model
of the world (potentially with a reflection of the AI entity itself
included as an element of this model), connected to - ideally built from -
the net's parameters (that would be the hard part), so that the system can
link the two and use them both. The neural net would provide the fast,
reactive/unthinking responses (call it instinct, stereotypes, prejudice
:)); the model would be the deep/reflective part.
The model, unlike the net, should be adjustable directly through
interactions, not only through changes to the neural net's parameters. The
AI could maintain both a long-term model and a per-session (short-term
memory) model, to allow for better interactions.
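
To make the shape concrete, here is a minimal Squeak-style sketch. Every
class and selector below is invented for illustration, and the hard part -
building the model from the net's parameters - is waved away:

    Object subclass: #HybridAgent
        instanceVariableNames: 'net worldModel sessionModel'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'AI-Sketch'

    HybridAgent >> respondTo: anUtterance
        "Fast path: the net's reactive answer. Slow path: check it
        against the persistent world model, revising if inconsistent."
        | draft |
        draft := net complete: anUtterance.
        (worldModel isConsistentWith: draft)
            ifFalse: [draft := worldModel revise: draft].
        sessionModel remember: anUtterance -> draft.
        ^ draft

    HybridAgent >> endSession
        "Fold what the session taught into the long-term model."
        worldModel absorb: sessionModel.
        sessionModel := SessionModel new
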
At that point, when it can refer to itself as situated in the world, and
it manages to avoid the most egregious inconsistencies, does it really
matter whether or not it has an epiphany and says "wait, that's me in the
mirror!"? It still wouldn't be truly creative (although it would write
some terrific essays :)), but it would pass the Turing test hands down.

Florin

On Sat, Jan 21, 2023 at 11:19 AM Eliot Miranda <eliot.miranda at gmail.com>
wrote:

> Hi all,
>
>     thinking about what is different and unreliable about LLMs such as
> ChatGPT yesterday I thought of something structural in the machine learning
> approach that renders the enterprise fundamentally unsound.  I am
> interested in your reactions to my argument.
>
> The thought was prompted by a tweet from a David Monlander:
>
> “ I just refused a job at #OpenAI
> <https://mobile.twitter.com/hashtag/OpenAI?src=hashtag_click>. The job
> would consist in working 40 hours a week solving python puzzles, explaining
> my reasoning through extensive commentary, in such a way that the machine
> can, by imitation, learn how to reason. ChatGPT is way less independent
> than people think”
>
> https://mobile.twitter.com/davemonlander/status/1612802240582135809
>
>
> In successful human learning at a secondary and tertiary, and possibly
> even primary, level, the student matures to a meta-level understanding of
> learning.  At first the desire to please the teacher, be they parent or
> professional, motivates the student. But soon enough they realise that the
> information learned is useful and/or interesting independent of the
> teacher, and set about learning as an end in itself, using the teacher as a
> guide rather than a goal. As the student matures, so their meta-level
> strategies grow in efficacy. The student learns how to learn, and works at
> improving their ability to learn, adding to their arsenal systems of
> thought all the way from mnemonics to systems thinking, materialism,
> causality, physics and philosophy, as well as forms of communication (such
> as Socratic dialogue), ontologies, and epistemologies.
>
> In machine learning, however, no matter how sophisticated the architecture
> of the training scheme, the goal of the neural network is always mimicry.
> Responses that correctly mimic the training data (be it, as implied by the
> tweet above, provided by a trainer who explains their reasoning, or the
> mere traversal of some literary corpus) are rewarded and reinforced. Those
> that do not are penalised. The fundamental approach is to teach the network
> to mimic. It remains stuck at an infantile level.
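>
> A caricature of the training loop, in pseudo-Smalltalk (all the selectors
> are invented; real training adjusts parameters by gradient descent over
> token probabilities, but the shape of the reward is the same):
>
>     corpus do: [:pair |
>         | answer loss |
>         answer := net respondTo: pair prompt.
>         "The only measure of success is how closely the answer
>         mimics the expected one."
>         loss := answer distanceTo: pair expectedAnswer.
>         net nudgeParametersToReduce: loss].
>
> Nothing in that loop rewards understanding; only resemblance to the data.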
>
> The things the LLM lacks, and which are extremely unlikely, if not
> impossible, to arise from such training, are:
> - a theory of self and other, as actors that engage in learning, adopting
> rôles such as student, teacher, questioner, answerer, etc. Such
> distinctions are fundamental to being able to consider the sociology of
> learning: who is a fellow student, how to relate one's performance to
> others', etc.
> - a theory of the material, ecological and social worlds, governed by
> physical and causal mechanisms, inhabited by many other life forms and
> human peers
> - theories of society and societal rôles, such as those implied by the
> progression from student to apprentice to master, etc.
>
> Without these underlying epistemological ideas, any learning remains
> first-level, devoid of any deeper understanding, fundamentally syntactical
> in nature, interested only in the degree to which the training data is
> mimicked. Fundamentally, machine learning, LLMs and ChatGPT remain
> “teacher’s pets”, and, if they are able to develop meta-theories of
> learning with which to better succeed at their assigned task, their
> sophistication will be in how better to “satisfy their teacher”, rather
> than towards theories of learning and knowledge themselves.
>
> Eliot,
> ___,,,^..^,,,___
>
>
>