[squeak-dev] The Inbox: Collections-ul.871.mcz
Chris Muller
asqueaker at gmail.com
Sat Jan 25 05:49:43 UTC 2020
>
> > > > That's better, but it still has that same fundamental
> problem. Every time a developer makes a HashedCollection of a
> known-at-runtime size (e.g., in a variable), they're forced to choose
> between execution performance pain or code pain.
> > > >
> > > > {
> > > > '[ Dictionary new ]'->'100% of baseline rate,
> 27,600,000 per second. 36.2 nanoseconds per run. 11.33547 % GC time.'
> > > >
> > > > "performance pain?"
> > > > '[ Dictionary new: 1 ]'->'60% of baseline rate,
> 16,600,000 per second. 60.1 nanoseconds per run. 5.61888 % GC time.'
> > > > '[ Dictionary new: 2 ]'->'61% of baseline rate,
> 16,900,000 per second. 59.2 nanoseconds per run. 5.67886 % GC time.'
> > > > '[ Dictionary new: 3 ]'->'59% of baseline rate,
> 16,300,000 per second. 61.5 nanoseconds per run. 6.77864 % GC time.'
> > >
> > > Even if there's a performance overhead, you use less
> memory.
> > >
> > >
> > > But #new: is about optimization along *both* of those
> > > dimensions. Imagine how complicated a "manpage" for #new: would have to be
> > > if it weren't. #new: must _never_ perform significantly worse than #new
> > > (for sizes <= the default), because it would either trick or force
> > > developers into writing less-performant code, or into acknowledging
> > > Squeak's internal Dictionary implementation in their own code. It feels
> > > like an API-design bug.
> >
> > It was okay for quite a long time. :)
> >
> >
> > ... until this proposal, which changed the smallest internal array size
> > from 5 to 3 and introduced the above anomaly. I assume you made that
> > change because you felt something else wasn't okay (to which I agree!).
>
> Capacity of 3 has been available through #compact, but #sizeFor: still
> doesn't return it due to a bug.
That bug has been saving us from the anomaly now exposed by your
improvements.
> > > > "into #sizeFor:"
>
> >
> > > > '[ Dictionary new: 4 ]'->'57% of baseline rate,
> 15,800,000 per second. 63.5 nanoseconds per run. 7.87685 % GC time.'
> > >
> > > Starting from 4, you also save time by avoiding growing,
> which is
> > > more significant than what you "lose" during instance
> creation.
> > >
> > >
> > > Except my Dictionary is never going to grow.
> >
> > So, we actually say the same thing.
> >
> >
> > Did we? You were arguing about saving time with growing, which I can
> never gain. I only lose during instance creation...
>
> We did. You just quoted me: "save time by _avoiding_ growing".
>
Oh, okay, but that's not relevant. Our whole discussion is about fixing
the case where the Request needs to make Dictionaries with 1 or 2 elements
(in Collections-ul.871) and 3 (in trunk), not 4.
> > > In case it helps bring clarity, my scenario is the GraphQL
> > > server. As a Request comes in, the server will know, depending on the
> > > type, how many named arguments to expect (for most "normal" schemas, 0 to
> > > 4, but it can define any number it wants). So it creates a right-sized
> > > Dictionary to hold them all, and will never grow beyond that. I simply
> > > don't want the server to have to do extra work when the **vast majority**
> > > of requests will have fewer than 4 arguments.
> > >
> > > We could get rid of the anomaly by changing #new to ^self
> new: 3.
> > >
> > >
> > > Yes, I'd be fine with that solution, too! For me, it's only
> about violation of #new:'s contract.
> >
> > I don't see any broken contract here. It may be surprising to see
> > somewhat worse performance with #new: than with #new (25
> nanoseconds per
> > instance according to your measurements),
> >
> >
> > a 40% hit...
>
> Only if you keep that Dictionary empty.
No, I'm storing one element in (Dictionary new: 1) and two in (Dictionary
new: 2). But those are runtime sizes; those Dictionaries are created from
a variable in the code. I write #new: in the code because I know the size
up front, so it's (*supposed to be!*) faster to pre-allocate to the
correct size via #new:. But for the millions of requests with 1 or 2
arguments, it would have been better to skip the #new: "optimization" and
simply use #new. That's crazy.
I'd like to see a manpage-quality comment explaining this in
HashedCollection class>>#new:. Either that, or we need your fixed version
of my fix, please. :)
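For concreteness, the usage pattern I mean looks roughly like this (all
the method and variable names here are hypothetical, not the actual
GraphQL server's code):

```smalltalk
"Hypothetical sketch of a request handler. argumentCount is known at
runtime from the schema, typically 0 to 4, and the Dictionary never
grows past it."
handleRequest: aRequest
	| args |
	aRequest argumentCount = 0
		ifTrue: [^self dispatch: aRequest arguments: nil].
	"Pre-size to the known count; this is supposed to be the fast path."
	args := Dictionary new: aRequest argumentCount.
	aRequest argumentNamesAndValuesDo: [:name :value |
		args at: name put: value].
	^self dispatch: aRequest arguments: args
```

With the benchmarks above, that `Dictionary new: 1` line is the one
paying the ~40% instantiation penalty versus plain `Dictionary new`.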
> And if you decide to do so, then
> why create it at all? :)
>
Right, I don't. 0 arguments creates no Dictionary.
>
> >
> > but both of those methods do
> > what they should. Just because there was an easy optimization
> applied to
> > #new, nothing was broken IMHO.
> >
> >
> > ... for using #new: that's supposed to be for optimization!
>
^^^ This is kind of the key point. What about this?
> >
> > So if you were in my shoes (which, if this goes to trunk as-is
> eventually you will be!), would you take a 40% hit in performance-critical
> code needing to instantiate a Dictionary with:
> >
> > Dictionary new: runtimeSize
> >
> > Or, actually violate encapsulation to preserve performance?
> >
> > sz>3 ifTrue: [Dictionary new: sz] ifFalse: [Dictionary new]
>
> I don't see the broken encapsulation there.
Here:
-------> sz>3 <---- ifTrue: [Dictionary new: sz] ifFalse:
[Dictionary new]
you would introduce a dependency on the private, internal initial size
employed by Dictionary (i.e., breaking its encapsulation). I want to keep
the GraphQL framework as dialect-independent as possible; I can't bring
myself to write that, and I hope you wouldn't either.
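The fix floated earlier in the thread, making #new delegate to #new:,
would remove any need for such a guard in user code. A sketch of that
idea (the literal 3 is the smallest internal array size under discussion,
an implementation detail that would stay inside the class):

```smalltalk
"Sketch: route #new through #new: so the default and small explicit
sizes share one allocation path, and pre-sizing never loses to #new."
HashedCollection class >> new
	^self new: 3
```

Then `Dictionary new: runtimeSize` costs the same as `Dictionary new`
for small sizes, and callers never have to know the threshold.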
> That's just black box
> optimization based on measurements as long as you don't look under the
> hood, isn't it?
>
Only for as long as what's under Dictionary's hood doesn't change and
quietly break you. It could end up *just a bit slower*, enough to cost
you, yet remain under your radar. The worse code that was once an
optimization will have become insidious.
>
> > ?
> >
> > My past observations have been that you like and appreciate the
> most-efficient performing code, so I'm truly curious!
>
> I like optimizations where the cost can be considered smaller than the
> benefits.
> For example, in TextDiffBuilder, I even sacrificed legibility because the
> extra complexity is local to a single method: #lcsFor:and:, and the
> benefits are measurable. The previous implementations are available in
> the Versions Browser, which can help understanding how and why the code
> evolved into its current form.
> There are plenty of cases in the Collections hierarchy where methods could
> be overridden in various subclasses to gain a few percents, but the
> changes would be extensive and the code would become very hard to
> maintain.
>
The issue is about the quality of Squeak's API as it relates to
user-expectations (#new vs #new:), not gaining a few percents.
Best,
Chris