[squeak-dev] reviewing ChatGPT's understanding of Smalltalk

Phil B pbpublist at gmail.com
Wed Jan 25 19:57:03 UTC 2023


On Tue, Jan 24, 2023 at 10:41 PM Chris Muller <asqueaker at gmail.com> wrote:

> On Sun, Jan 22, 2023 at 12:27 AM Phil B <pbpublist at gmail.com> wrote:
>
>> Jecel was explaining how the core of the system (i.e. the 'GPT' part,
>> which is the main neural net) worked.  There's also the 'Chat' part, which
>> is the front end where they'd handle things like incorporating session
>> state etc.  So it has a short-term memory of sorts but has no way to
>> persist it beyond an individual session currently, AFAIK.
>>
>
> From a new session, a user could at least "play back" their exact
> statements from the previous chat to arrive at a state identical to the
> previous.
>

That's most likely roughly what ChatGPT is doing, both within a session and
when you resume one.  Though it's probably summarizing earlier conversation
turns to minimize the number of tokens used, in order to maximize the amount
of history it can incorporate.
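That replay-plus-summarization idea can be sketched roughly as below.  This is
a minimal illustration, not OpenAI's actual implementation; `count_tokens` and
`summarize` are hypothetical stand-ins (a real front end would use a
model-specific tokenizer and likely ask the model itself to produce the
summary):

```python
# Sketch: rebuild session state by replaying recent turns verbatim and
# collapsing anything older into a short summary, so the whole context
# fits inside a fixed token budget.

def count_tokens(text: str) -> int:
    # Crude stand-in: real systems use a model-specific tokenizer.
    return len(text.split())

def summarize(turns: list[str]) -> str:
    # Stand-in: a real front end might have the model summarize these turns.
    return "Summary of %d earlier turns." % len(turns)

def build_context(history: list[str], budget: int) -> str:
    # Walk backwards from the newest turn, keeping turns verbatim
    # until the token budget is exhausted.
    kept, used = [], 0
    for turn in reversed(history):
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    # Everything older than the kept turns gets one summary line.
    older = history[:len(history) - len(kept)]
    prefix = [summarize(older)] if older else []
    return "\n".join(prefix + list(reversed(kept)))
```

With a generous budget every turn is replayed verbatim; with a tight one,
older turns are traded for a single summary line, which matches the
behavior described above.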


>
> The lack of convenient persistence probably only applies to the
> general public, out of an abundance of caution.
>

Unless they're doing something radically different from what the publicly
disclosed GPT approach is capable of, it's a technical limitation:
1) Training the neural net on prior conversations on a per-user/session
basis is prohibitively expensive (wall clock, compute, memory, and storage).
2) There is a maximum number of tokens (IIRC, 8k for ChatGPT) that it can
ingest at once during inference, and that limit has to cover any previous
state as well as your prompt.