[squeak-dev] source.squeak.org upgrade
marcel.taeumel at hpi.de
Wed May 4 12:06:27 UTC 2022
I just made the squeak-app CI fail by committing something to Trunk while the CI was trying to access source.squeak.org.
Consequently, we have a serious concurrency problem with connections to that SqueakSource instance at the moment. :-) Maybe it is related to #primitiveFileFlush in FilePlugin? Hmm...
On 04.05.2022 10:36:03, Marcel Taeumel <marcel.taeumel at hpi.de> wrote:
Committing to Trunk still takes 1-2 minutes and then fails with a network error saying it could not access source.squeak.org/trunk.
Interestingly, as soon as the commit mail arrives in my inbox, the server is responsive again. In a parallel image, even a download stopped, as if the entire SqueakSource image had stopped responding just because something was waiting for something else ... maybe related to a lock in the file system? Is it a flush operation that blocks the entire Squeak image?
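One quick way to probe the flush hypothesis from a workspace is to time a flush on a scratch file while watching whether other processes in the image stall. This is only a minimal sketch: the file name is made up, and whether #primitiveFileFlush actually blocks the whole VM is exactly the open question here.

```smalltalk
"Hypothetical probe: time how long flushing a scratch file takes.
 If the flush primitive blocks the whole VM, anything else running
 in the image (e.g. a download in progress) should stall for the
 same duration."
| stream ms |
stream := FileStream forceNewFileNamed: 'flush-probe.txt'.
[ stream nextPutAll: 'probe'.
  ms := [ stream flush ] timeToRun ] ensure: [ stream close ].
Transcript show: 'flush took ', ms printString, ' ms'; cr.
```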
On 04.05.2022 09:26:39, Marcel Taeumel <marcel.taeumel at hpi.de> wrote:
Hi Chris --
Thanks for looking into this. :-) I will report back.
On 04.05.2022 05:00:19, Chris Muller <asqueaker at gmail.com> wrote:
Sorry for the inconvenience!
I switched back to the aggressive caching strategy from before, so full performance *should* be restored. I was trying a light caching strategy and expected slightly degraded performance only for the first few minutes after the server started, when it not only has nothing cached but is also performing its recovery process (in a lower-priority background process, but still competing for CPU). But all of that had finished, so it shouldn't have been THAT slow.

The weird thing is that the CPU stayed pegged at 100% even after the recovery finished, but I couldn't find anything it was doing. There's nothing it should've been doing, and running MessageTally spyAllOn: [ (Delay forSeconds: 10) wait ] (in the running image via an RFB connection) showed basically just Seaside waiting for a request. This is why I'm normally conservative about using new VMs in production, but maybe this is a productive test: if it's the release candidate, it may uncover issues.
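For reference, the profiling expression mentioned above is the standard Squeak idiom for sampling all processes; a minimal sketch as one would run it in a workspace (the ten-second delay just gives the profiler a window in which to sample whatever the image is doing):

```smalltalk
"Sample every process in the image for ten seconds and open a
 tally report of where the CPU time went. On a truly idle server
 image the report should show little besides wait states."
MessageTally spyAllOn: [ (Delay forSeconds: 10) wait ].
```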
So, I won't relax just yet, but I'm not working in trunk a lot, so I'm not as aware of the ebb and flow of the performance level. Those performance reports were immensely helpful. If issues arise again, please let me know.
On Tue, May 3, 2022 at 7:14 AM Marcel Taeumel <marcel.taeumel at hpi.de> wrote:
Hi Chris --
It takes about 1-2 minutes to commit something to Trunk at the moment. Then I get a NetworkError in the image. Thus, it is currently not feasible to commit anything to Trunk.
Let's see if we can resolve this by tomorrow. If not, I will move the deadline for the 6.0 Feature Freeze by a week or two so that we can work on this issue.
On 03.05.2022 12:09:11, Marcel Taeumel <marcel.taeumel at hpi.de> wrote:
Hi Chris --
Okay. Let's hope that the connectivity issues resolve themselves in a few days. So far, they have not. Usually, updating a Trunk image from 19432 should take about 12 minutes. At the moment, it hits the 20-minute mark. On my local machine, it just took about 40 minutes.
On the bright side, there are no NetworkErrors or other failures. Code loading works; it is just rather slow.
On 02.05.2022 18:10:42, Chris Muller <ma.chris.m at gmail.com> wrote:
Okay, it is catching back up on recovering a bunch of versions since
last March and code loading would be in competition with that. I
might have to consider a different design yet. But it should clear up
pretty soon, although it looks like it's currently doing VMMaker,
which involves large Version objects. I *thought* I was only indexing
the Versions in 'trunk', but now I can't find where I restricted that.
I'll keep my eye on it. Thanks for your patience. Let me know if it
gets too broken, I can always turn off the indexing again.
On Mon, May 2, 2022 at 10:42 AM Chris Muller wrote:
> Hi Marcel,
> I didn't know about --headless; I had always used "-vm display=none",
> but now I've changed it to -vm-display-null, which seems to be working
> with that 20220419 VM. I suppose we have about three different ways to
> launch in headless mode.
> That's strange about the builds. I can't think of anything that would
> be slowing down code loading, except I am seeing the VM using a lot of
> CPU right now when I don't think it should be. I just bounced it and
> now I see a process called "driveclient" using about half the CPU, and
> the server is not back up. Checking...
> On Mon, May 2, 2022 at 3:55 AM Marcel Taeumel wrote:
> > Hi Chris --
> > Thanks for upgrading our backend to Squeak 5.3! Note that the --headless mode in the 20220419 VM is not working. We will do another RC for that. The bug itself has already been fixed. The Squeak-ffi tests also show the code-loading issue.
> > Hmmm... access to source.squeak.org is flaky again. The squeak-app builds are not going through because code loading times out.
> > Can you see from the logs what is happening?
> > Best,
> > Marcel
> >  https://github.com/squeak-smalltalk/squeak-app/actions
> >  https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/2d7105db755928fd90823d264fc64e17b1c57ad4
> >  https://github.com/marceltaeumel/squeak-ffi/actions
> > On 02.05.2022 03:42:46, Chris Muller wrote:
> > Hi all,
> > Revision history is working in the IDE again. The SqueakSource instance behind source.squeak.org has been upgraded to Squeak 5.3 and the 20220419 vm, and now requires less RAM thanks to some design optimizations.
> > - Chris