[squeak-dev] The Trunk: Collections-fn.825.mcz

karl ramberg karlramberg at gmail.com
Sat Apr 13 08:50:39 UTC 2019


+1
Implicit conversion like this is hard to remember and makes code hard to
understand.
For example, the letter C is used in many different number systems and does
not mean the same thing in any of them:
C in Roman numerals (100) is not equal to ASCII c or C (99, 67), nor to
hexadecimal C (12).
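To make the ambiguity concrete, here is a sketch in a Squeak workspace using
the existing Character protocol (the printed values follow standard
ASCII/Unicode):

```smalltalk
"One character, several plausible numeric readings:"
$C asInteger.   "67  -- the ASCII/Unicode code point of $C"
$C digitValue.  "12  -- $C read as a base-16 (or higher) digit"
16rC.           "12  -- the hexadecimal literal C"
"Roman-numeral C would be 100; nothing in the image even hints at that reading."
```

So picking any one of these as *the* implicit conversion is arbitrary.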

Can of worms warning.

Best,
Karl

On Sat, Apr 13, 2019 at 9:17 AM Jakob Reschke <forums.jakob at resfarm.de>
wrote:

> I am not so fond of this change. It feels like a slippery slope towards
> JavaScript, which has inspired (not so) funny articles about its ridiculous
> implicit type conversions. There was a change last year that disallowed
> putting numbers (bytes) in ByteStrings and vice versa. This change goes
> kind of in the other direction.
>
> The #< of Character seems to support comparisons with numbers. But this
> may never have been intended: it is a side effect of how the method is
> implemented (it compares the #asInteger values of the Characters). If it
> were implemented in terms of #codePoint instead of #asInteger, it would
> not support comparison with numbers, but it would still work for Characters.
>
> Yet I see the compatibility trouble and the long history of
> adaptToNumber:andSend:.
>
> I would prefer it if people wrote more asInteger/asNumber or asCharacter
> when they need to ensure the types (because they expect that sometimes
> numbers come in instead of characters, or the other way around), instead of
> relying on implicit type conversions. It also forces them to think about
> which kind of character-to-number conversion they want, referring to what
> Tim wrote: are digit characters their number counterparts or their Unicode
> code points?
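> As a hedged illustration of the two conversions in question, in Squeak the
> image already offers both readings explicitly (#digitValue is the existing
> message for the digit interpretation):
>
> ```smalltalk
> $7 asInteger.   "55 -- the Unicode code point of the character 7"
> $7 digitValue.  "7  -- the number the digit directly represents"
> 55 asCharacter. "$7 -- back from code point to Character"
> ```
>
> Writing one of these explicitly says which conversion the author meant;
> an implicit conversion in arithmetic silently picks one of them.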
>
> Am Fr., 12. Apr. 2019 um 19:14 Uhr schrieb tim Rowledge <tim at rowledge.org
> >:
>
>>
>>
>> > On 2019-04-12, at 12:30 AM, commits at source.squeak.org wrote:
>> >
>> >
>> > Just like strings, characters should convert themselves to an integer
>> when involved in arithmetic with a number.
>>
>> Well, ok I guess. BUT why is the conversion simply to the internal bits
>> of the character representation? Surely digit characters ought to convert
>> to the number they directly represent? And wouldn't it be more correct to
>> convert all the other chars via the encoding in use? Unless, I suppose, we
>> are actually using Unicode already, in which case one might make a decent
>> argument that that is The Right One.
>>
>> tim
>> --
>> tim Rowledge; tim at rowledge.org; http://www.rowledge.org/tim
>> Strange OpCodes: LA: Lockout Access
>>
>>
>>
>>
>

