+1 :-)
Best, Marcel
On 19.01.2018 12:51:26, Bert Freudenberg <bert@freudenbergs.de> wrote:
On 19 January 2018 at 05:30, Chris Muller <ma.chris.m@gmail.com> wrote:
It wasn't clear to me that your bloated image issue was related to MCInfoProxy at all -- you said it showed ByteArrays and Bitmaps at the top, but the proxy wasn't agitated until you ran the pointer-finder.
As I mentioned, the intention of the design is to allow purging the MC ancestry to recover that memory for building a smaller, tighter production image, without losing the ability to access it, transparently, if needed. Because no one would want to accidentally save a version with the ancestry gone (corrupt), right? This design requires no extra thinking, just a little extra waiting. Conceptually, that feels like a classic cache to me. If there's a glitch in how it interacts with MC that could be improved, we should fix it. I'm curious what else was on that stack...
I've voiced my criticism of that design before, so I'll just point out that it's exactly this unpredictable nature of proxy interactions that makes it fragile.
I agree, though, that it isn't needed in the UI menu, so we should remove it. It's something one would normally only invoke from a deployment/build script.
+1
If even Master Eliot is confused, we really should not expose it with one tempting click ;)
- Bert -
Best, Chris
On Thu, Jan 18, 2018 at 9:43 PM, Eliot Miranda <eliot.miranda@gmail.com> wrote:
Hi Chris,
On Thu, Jan 18, 2018 at 7:20 PM, Chris Muller <asqueaker@gmail.com> wrote:
Hi Eliot,
It sounds like you selected 'flush cached versions and ancestry' from the Repository menu of the Monticello Working Copy browser.
That's something you would only want to do for a production deployment image in which you aren't planning to do any more development. It saves the memory occupied by the ancestry while providing a dynamic (albeit slow) means of loading it back in the rare case that it's ever needed in the production image again. The PointerFinder isn't sensitive to proxies, so using it ends up agitating them into reification, forcing the dynamic reload of ancestry that you experienced.
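To make the mechanism concrete, here is a minimal sketch of the general doesNotUnderstand:-based proxy pattern. The names MCAncestryStub and fetchRealInfo are made up for illustration; this is not Monticello's actual code:

	ProtoObject subclass: #MCAncestryStub
		instanceVariableNames: 'versionName repository'
		classVariableNames: ''
		category: 'Example-Proxies'

	MCAncestryStub >> doesNotUnderstand: aMessage
		"Any message the stub doesn't implement (e.g. #isLiteral, sent
		during a pointer traversal) lands here. Fetch the real version
		info -- possibly by downloading the .mcz again -- redirect every
		reference from the stub to it, then re-send the message."
		| info |
		info := self fetchRealInfo.	"hypothetical: would load the real info from the repository"
		self becomeForward: info.
		^ aMessage sendTo: info

The upshot is that any message the stub doesn't implement, even an innocuous reflective one, is enough to trigger the download -- which is exactly what the PointerFinder's traversal did.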
What you want is simply "flush cached versions"; then you'll never have that issue.
Can you explain? How does this create the issue? Why have an item like "flush cached versions and ancestry" if it's dangerous? The implication of "cached" is that the information can be rebuilt when required. The ancestry is out there in the .mcz files. I don't understand what's going on here; it feels like a bad design decision.
- Chris
On Thu, Jan 18, 2018 at 5:24 PM, Eliot Miranda <eliot.miranda@gmail.com> wrote:
Hi All,
I've been experiencing image-save slowdowns recently, and finally my work image reached 1.6GB, so I thought I'd better take a look:
Sisyphus.Cog$ ls -lh SpurWork64.* save/SpurWork64-*
-rw-r--r--@ 1 eliot staff  28M Jan 18 12:47 SpurWork64.changes
-rw-r--r--@ 1 eliot staff 1.6G Jan 18 12:48 SpurWork64.image
-rw-r--r--@ 1 eliot staff  28M Jan 18 12:03 save/SpurWork64-2018-01-18.changes
-rw-r--r--@ 1 eliot staff 1.5G Jan 18 12:03 save/SpurWork64-2018-01-18.image
I ran a space analysis and found that Bitmap and ByteArray were the top two, so I looked for large Bitmaps. I found three that fit this criterion:
Bitmap allInstances select: [:bm| bm size >= 1000000 and: [bm ~~ Display bits]]
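(For scale: a Bitmap's slots are 32-bit words, so the space those instances hold can be estimated with something like

	(Bitmap allInstances select: [:bm | bm size >= 1000000 and: [bm ~~ Display bits]])
		inject: 0 into: [:sum :bm | sum + (bm size * 4)]

which answers the total in bytes.)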
I inspected the three and did a "chase pointers" on one of them. As I did that, suddenly a) the inspector on the Array became empty (still an Array, but zero elements) and b) the progress bar for "Downloading FlexibleVocabularies-who.NN" appeared.
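(For the record, "chase pointers" corresponds, if I remember the tool's entry point correctly, to evaluating something like

	PointerFinder on: aBitmap

where aBitmap is the inspected object.)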
I interrupted the download and did a very cursory stack examination. Some object had not understood #isLiteral, and from there what looked like an attempt to turn this stub into a real object had caused FlexibleVocabularies-who.NN to start to download.
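(A quick way to check whether any such stubs remain in an image, e.g. before saving it, is

	MCInfoProxy allInstances size

which should answer 0 once everything has been reified.)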
I threw away the debugger, ran the GC, and suddenly all my free space was back. So now on disc I have:
Sisyphus.Cog$ ls -lh SpurWork64.* save/SpurWork64-*
-rw-r--r--@ 1 eliot staff  28M Jan 18 15:17 SpurWork64.changes
-rw-r--r--@ 1 eliot staff  57M Jan 18 15:17 SpurWork64.image
-rw-r--r--@ 1 eliot staff  28M Jan 18 12:03 save/SpurWork64-2018-01-18.changes
-rw-r--r--@ 1 eliot staff 1.5G Jan 18 12:03 save/SpurWork64-2018-01-18.image
What is going on here? There seems to be a very bad storage leak. Can we please discuss this? This doesn't seem like healthy behaviour at all :-)
_,,,^..^,,,_ best, Eliot
-- _,,,^..^,,,_ best, Eliot