[squeak-dev] reviewing ChatGPT's understanding of Smalltalk

Stéphane Rollandin lecteur at zogotounga.net
Wed Jan 18 09:59:50 UTC 2023


> ChatGPT: I do not have personal beliefs or emotions, so I do not have 
> the ability to "know" something to be true or false. However, based on 
> the training data I have been trained on, my responses are influenced by 
> the information I have seen in the past.
> 
> Our ability to “know” is based neither on our beliefs nor on our 
> emotions. A weak theory of mind. It keeps stating that it can make 
> semantically meaningful responses, responses so consistent they 
> appear to be a single predefined response, while many examples show 
> it can certainly generate nonsense. Have you tried to probe further 
> its “notions” of meaningfulness, its ability to be meaningful, and 
> its “notions” of its fallibility?

Also, its use of "I" implies it has some kind of self-awareness, which 
of course it lacks. That's a pretty ugly trick, IMO.

Instead of having it issue apologies for being wrong, or poor 
statements like "I do not have personal beliefs", its programmers 
should make it clear what kind of system we are dealing with, and have 
it emit things like "remember you are interacting with an automated 
tool, not a conscious being".

This of course runs contrary to the AI hype, and is quite a downer, 
but doing as they do now and playing the game of confusing the user 
for the sake of playing HAL 9000, while said user already has to sort 
the nonsense from the correct parts of the answers, is ethically 
questionable.

People will believe these tools are sentient. Designers should take 
that very seriously, and help mitigate the confusion, instead of 
playing with it.

Stef
