[squeak-dev] reviewing ChatGPT's understanding of Smalltalk

Chris Muller asqueaker at gmail.com
Tue Jan 17 06:36:14 UTC 2023


> Back to ChatGPT - its Smalltalk knowledge is encoded in the contents of
> the matrices I mentioned during the training phase. Thanks to the novel
> architecture I mentioned, when you use it in the inference phase it can
> hold "session information" to allow its responses to be coherent with
> what you and it have said before. But it can't learn during the
> inference phase - the matrices are not changed by your chat.
>

Yes!  I interrogated it deeply about that very "session" in a separate
follow-up conversation (link below) the day after the first one.  For
example, I asked it:

    "Could the formation of the session state and its interaction with the
language model cause the appearance of 'learning'?"

This conversation was much more interesting than the first one about
Smalltalk.  Because of your fantastic explanation, I've decided to share it
after all; here it is:

   https://sharegpt.com/c/Zh9Onhc



> Humans learn from very small training sets - you don't have to show a
> child thousands of pictures of cats before they understand what a cat
> is. Humans also don't have separate training and inference phases.
> Having an AI with these features is a simple matter of programming. We
> might see real progress in the next few years. But we are not there yet.


Also confirmed by ChatGPT in its final two responses in the conversation
linked above!   :)

Regards,
  Chris

