[squeak-dev] reviewing ChatGPT's understanding of Smalltalk

Phil B pbpublist at gmail.com
Sat Jan 21 17:18:26 UTC 2023


LLMs, like all neural nets, learn by rote (and that 'learning' is
typically lossy, by design): massive repetition until they can produce
the desired output for a given input.  The results are not so much
unsound as oversold (and who can blame the vendors, given the massive
amounts of funding they're raking in as a result of the hype?).
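
To make the 'rote' point concrete, here's roughly the shape of the
training loop behind every LLM, as a minimal sketch in Python/PyTorch.
The toy bigram model and the tiny token 'corpus' below are made up for
illustration only; a real LLM just scales this same objective up by
many orders of magnitude:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy "corpus": sequences of token ids the model must reproduce.
    corpus = torch.tensor([[1, 2, 3, 4], [2, 3, 4, 5]])

    class BigramLM(nn.Module):
        # Smallest possible language model: each token directly
        # yields a distribution over the next token.
        def __init__(self, vocab_size=16):
            super().__init__()
            self.next_token_logits = nn.Embedding(vocab_size, vocab_size)

        def forward(self, tokens):
            return self.next_token_logits(tokens)

    model = BigramLM()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for step in range(100):
        inputs, targets = corpus[:, :-1], corpus[:, 1:]
        logits = model(inputs)
        # The entire training signal: how faithfully did the model
        # mimic the next token of the training data?
        loss = F.cross_entropy(logits.reshape(-1, 16),
                               targets.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

Note that the loss measures exactly one thing: agreement with the
training data.  Whether that data is raw text or a contractor's worked
explanations (as in the tweet Eliot quotes below), nothing in the
objective rewards the model for understanding *why* the next token is
the right one.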

The symbolic school of thought has largely been pushed aside by the
connectionist school, given the results the latter has produced over
the past 15 years or so.  The priority of today's successful AI
researcher is to deliver results, via engineering and brute force,
rather than to achieve any real insight into how to build systems that
are actually intelligent.  That is likely to continue until the scaling
up of current approaches runs out of gas, either in terms of results or
of economic feasibility.

As long as one doesn't fall for the marketing spin and sees the results
for what they are, the stuff being produced is very, very good and
quite useful... but it isn't intelligent, and is unlikely to be anytime
soon.  Think supercharged ELIZA rather than HAL 9000.

On Sat, Jan 21, 2023 at 11:19 AM Eliot Miranda <eliot.miranda at gmail.com>
wrote:

> Hi all,
>
>     thinking yesterday about what is different and unreliable about LLMs
> such as ChatGPT, I thought of something structural in the machine learning
> approach that renders the enterprise fundamentally unsound.  I am
> interested in your reactions to my argument.
>
> The thought was prompted by a tweet from a David Monlander:
>
> “ I just refused a job at #OpenAI
> <https://mobile.twitter.com/hashtag/OpenAI?src=hashtag_click>. The job
> would consist in working 40 hours a week solving python puzzles, explaining
> my reasoning through extensive commentary, in such a way that the machine
> can, by imitation, learn how to reason. ChatGPT is way less independent
> than people think”
>
> https://mobile.twitter.com/davemonlander/status/1612802240582135809
>
>
> In successful human learning at secondary and tertiary, and possibly
> even primary, levels, the student matures to a meta-level understanding of
> learning.  At first the desire to please the teacher, be they parent or
> professional, motivates the student. But soon enough they realise that the
> information learned is useful and/or interesting independent of the
> teacher, and set about learning as an end in itself, using the teacher as a
> guide rather than a goal. As the student matures, so their meta-level
> strategies grow in efficacy. The student learns how to learn, and works at
> improving their ability to learn, adding to their arsenal systems of
> thought all the way from mnemonics to systems thinking, materialism,
> causality, physics and philosophy, as well as communications forms (such as
> Socratic dialog), ontologies, and epistemologies.
>
> In machine learning, however, no matter how sophisticated the architecture
> of the training scheme, the goal of the neural network is always mimicry.
> Responses that correctly mimic the training data (be it, as implied by the
> tweet above, provided by a trainer who explains their reasoning, or mere
> traversal of some literary corpus) are rewarded and reinforced. Those that
> do not are deprecated. The fundamental approach is to teach the network to
> mimic. It remains stuck at an infantile level.
>
> The things the LLM lacks, which are extremely difficult, and unlikely,
> if not impossible, to arise from such training, are
> - a theory of self and other, as actors that engage in learning by
> adopting rôles such as student, teacher, questioner, answerer, etc.
> Such distinctions are fundamental to being able to consider the
> sociology of learning: who is a fellow student, how to relate one’s
> performance to others, etc.
> - a theory of the material, ecological and social worlds, governed by
> physical and causal mechanisms, inhabited by many other life forms, and
> human peers
> - theories of society and societal rôles, such as implied by the
> progression from student to apprentice to master, etc
>
> Without these underlying epistemological ideas, any learning remains first
> level, devoid of any deeper understanding, fundamentally syntactical in
> nature, interested only in the degree to which the training data is
> mimicked. Fundamentally, machine learning, LLMs and ChatGPT remain
> “teacher’s pets”, and, if they are able to develop meta-theories of
> learning with which to better succeed at their assigned task, their
> sophistication will be in how better to “satisfy their teacher” rather
> than in theories of learning and knowledge themselves.
>
> Eliot,
> ___,,,^..^,,,___
>
>
>

