[squeak-dev] reviewing ChatGPT's understanding of Smalltalk

Eliot Miranda eliot.miranda at gmail.com
Wed Jan 18 02:18:57 UTC 2023



> On Jan 16, 2023, at 10:36 PM, Chris Muller <asqueaker at gmail.com> wrote:
> 
> 
> 
>> Back to ChatGPT - its Smalltalk knowledge is encoded in the contents of
>> the matrices I mentioned during the training phase. Thanks to the novel
>> architecture I mentioned, when you use it in the inference phase it can
>> hold "session information" to allow its responses to be coherent with
>> what you and it have said before. But it can't learn during the
>> inference phase - the matrices are not changed by your chat.
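
To make the frozen-weights point concrete, here is a minimal Smalltalk
sketch (the class and selector names are mine, invented purely for
illustration, not any real inference engine): the only state a chat
mutates is the per-session transcript; the weights are only ever read.

    Object subclass: #FrozenChatModel
        instanceVariableNames: 'weights context'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-LLM'

    FrozenChatModel >> initialize
        super initialize.
        weights := #(0.1 0.2 0.3).  "stand-in for the matrices fixed at training time"
        context := OrderedCollection new  "per-session transcript, the only mutable state"

    FrozenChatModel >> predictFrom: aTranscript
        "Stand-in for the forward pass over the context window.
        Reads weights; never assigns to them."
        ^ 'reply conditioned on ', aTranscript size printString, ' prior turns'

    FrozenChatModel >> reply: aUserMessage
        | answer |
        context add: 'user: ', aUserMessage.
        answer := self predictFrom: context.
        context add: 'assistant: ', answer.
        ^ answer

    "Each turn is coherent with the previous ones because it conditions
    on context, yet weights never changes:"
    | model |
    model := FrozenChatModel new.
    model reply: 'What is Smalltalk?'.
    model reply: 'Who designed it?'

The apparent "session learning" lives entirely in the context instance
variable; discard the instance and it is gone.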
> 
> Yes!  I interrogated it deeply about that very "session" in a separate follow-up conversation (link below) the day after the first one.  For example, I asked it:
> 
>     "Could the formation of the session state and its interaction with the language model cause the appearance of 'learning'?"
> 
> This conversation was much more interesting than the first one about Smalltalk.  Because of your fantastic explanation, I've decided to share it after all. Here it is:
> 
>    https://sharegpt.com/c/Zh9Onhc

ChatGPT: I do not have personal beliefs or emotions, so I do not have the ability to "know" something to be true or false. However, based on the training data I have been trained on, my responses are influenced by the information I have seen in the past.

Our ability to “know” is based neither on our beliefs nor on our emotions; that is a weak theory of mind. It keeps stating that it can make semantically meaningful responses, responses so consistent they appear to be a single predefined response, while many examples show it can certainly generate nonsense.  Have you tried to probe further into its “notions” of meaningfulness, its ability to be meaningful, and its “notions” of its own fallibility?

_,,,^..^,,,_ (phone)
> 
>  
>> Humans learn from very small training sets - you don't have to show a
>> child thousands of pictures of cats before they understand what a cat
>> is. Humans also don't have separate training and inference phases.
>> Having an AI with these features is a simple matter of programming. We
>> might see real progress in the next few years. But we are not there yet.
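
For contrast, a toy sketch (again with invented names) of a system
without the phase split, where each exchange folds back into the
weights themselves, building on the FrozenChatModel sketched above:

    FrozenChatModel subclass: #OnlineChatModel
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Sketch-LLM'

    OnlineChatModel >> reply: aUserMessage
        | answer |
        answer := super reply: aUserMessage.
        "No separate training phase: every exchange also nudges the
        weights (a placeholder update rule), so whatever it 'learns'
        here would persist beyond the session."
        weights := weights collect: [:each | each + 0.001].
        ^ answer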
> 
> Also confirmed by ChatGPT in its final two responses, above!   :)
> 
> Regards,
>   Chris
> 