My take is that if we don't know *anything* about the subclass, we have to assume that #size can be way more expensive than a single iteration of #do: (e.g. a linked list).
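To make the linked-list example concrete, here is a sketch (hypothetical, assuming a classic singly linked list that does not cache its element count) of why #size can cost a full traversal:

```smalltalk
"Sketch: a linked list without a cached tally must walk the
 whole chain to answer #size -- O(n) -- whereas a single
 iteration step of #do: touches only the first node."
size
	| tally node |
	tally := 0.
	node := firstLink.
	[node isNil] whileFalse: [
		tally := tally + 1.
		node := node nextLink].
	^tally
```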
I don't understand this. How could #size ever be "way more expensive" than a single iteration of #do:? Based on the implementation in Collection, where the default #size counts elements via a full #do: traversal, that's impossible.
So using #do: is the sensible default implementation for #isEmpty. If OTOH a subclass has a very expensive #do:, then it's reasonable to assume that it would avoid doing much work if it is in fact empty.
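For clarity, the #do:-based default being argued for is presumably along these lines (a sketch; the non-local return exits on the first element, so at most one iteration step ever runs):

```smalltalk
"Sketch of the proposed default: answer false as soon as
 any element is enumerated, true if the block never runs."
isEmpty
	self do: [:each | ^false].
	^true
```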
If a Collection subclass did rely, performance-wise, on an implementation detail of its superclass, then that is unfortunate,
Well, that's precisely what this change does! We're fixing some hypothetical subclass by making it "rely" on this implementation detail way up in Collection.
but easy to fix by implementing #isEmpty in terms of #size.
You ignored the "silent damage" argument: someone wouldn't even KNOW it needs fixing.
Besides, the fix should be made in the subclass which doesn't have a fast #size.
Folks, there is NO BASIS at the level of Collection for assuming that do: [:each | ^false] is faster than ^self size = 0. In fact, the proper assumption is the opposite: that #size is the optimized method, not #do:.
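For reference, the legacy implementation being defended here is presumably the classic one-liner:

```smalltalk
"Legacy default: a collection is empty iff it has no elements.
 Fast whenever the receiver has an O(1) #size."
isEmpty
	^self size = 0
```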
We shouldn't let that get in the way of making our base libraries more elegant.
It makes them LESS elegant, because of 1) the violation of the assumption that #size is faster than #do:, and 2) the associated duplication of code that will be required across multiple subclasses just to recover the original, faster implementation they used to inherit. Doesn't that matter?
I don't think the burden of proof should be on the legacy method, but on the new implementation that purports to be an improvement. This change sounded good in the ivory tower, but can anyone identify *one single place* in the real world where it is beneficial?
Because the drawbacks are potentially very real...