Hi Eliot,
There's also the claimed understandability advantage, which is something, but it's somewhat offset by the conversion / compatibility issues users must face and endure. "Worthwhile" is definitely the key question we need to decide. For me, it seems worth it in theory because it shouldn't be too hard to convert. The only way to find out is to attempt to convert one of my most date-intensive applications (including its database) and see how it goes and what issues arise.
The simplicity of the large integer utcMicroseconds representation trumps all the nonsense of breaking it down into sub-components. In any case, the 32-bit VM is already communicating time up to the image as 64-bit LargeInteger microseconds. Not decomposing gives much faster instantiation and very simple arithmetic (simply compare the utcMicroseconds values).
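To sketch the point (in Python rather than Smalltalk, with illustrative names and epoch): when a timestamp is a single integer of microseconds, comparison and duration arithmetic are plain integer operations, with no field-by-field decomposition.

```python
from datetime import datetime, timezone

# Illustrative helper: a timestamp held as one integer, microseconds
# since the Unix epoch (the epoch choice here is for illustration only).
def to_utc_micros(dt: datetime) -> int:
    return int(dt.timestamp() * 1_000_000)

a = to_utc_micros(datetime(2024, 1, 1, tzinfo=timezone.utc))
b = to_utc_micros(datetime(2024, 1, 2, tzinfo=timezone.utc))

# Ordering and durations are ordinary integer operations --
# no year/month/day/second sub-components involved.
assert a < b
assert b - a == 24 * 60 * 60 * 1_000_000  # exactly one day of microseconds
```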
I've been getting a good look at UTCDateAndTime and emailing Dave for the last few days. I like it for 64-bit and agree with your sentiments, except for the old way being "nonsense". It was made in the early 2000s when 32-bit was all we had. It beat a LargeInteger-based competitor I tried to use for a while, so I switched.
Dave has been patiently helping me get the Ma Serializer upgraded. I'm trying to find a way to preserve forward-compatibility in legacy systems; otherwise it means having to shut everything down and upgrade every client image. Simultaneously. Big-bang style.
Note that it would be trivial to extend the representation with the decomposed elements, and these could be nil initially and instantiated on demand. Forcing the large integer arithmetic to decompose on every instantiation would kill any performance advantage one might expect to get from using an immediate instead of the large integer. And of course in 64 bits, utcMicroseconds is an immediate anyway.
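The nil-until-demanded idea could look roughly like this (a Python sketch with illustrative field names, not the actual UTCDateAndTime instance variables):

```python
class UtcTimestamp:
    """Sketch of the lazy scheme: store only utcMicroseconds; the
    decomposed fields stay nil/None until first accessed, then are
    computed once and cached."""

    def __init__(self, utc_microseconds):
        self.utc_microseconds = utc_microseconds  # the sole stored state
        self._decomposed = None                   # nil until demanded

    def _decompose(self):
        # Runs only on demand, so plain instantiation stays cheap.
        if self._decomposed is None:
            seconds, micros = divmod(self.utc_microseconds, 1_000_000)
            days, second_of_day = divmod(seconds, 86_400)
            self._decomposed = (days, second_of_day, micros)
        return self._decomposed

    @property
    def days_since_epoch(self):
        return self._decompose()[0]

    @property
    def second_of_day(self):
        return self._decompose()[1]


ts = UtcTimestamp(86_400_000_000 + 5_000_000)  # one day plus five seconds
assert ts._decomposed is None       # nothing computed yet
assert ts.days_since_epoch == 1     # first access triggers decomposition
assert ts.second_of_day == 5        # later accesses reuse the cache
```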
That is an **amazing idea**! The code should decompose the new utcMicroseconds to the old, smaller values on demand, but also do the reverse. For UTC clients reading legacy data, the 'utcMicroseconds' variable would be mapped to nil upon materialization, after which the code would lazily calculate it on demand. Brilliant!
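The reverse direction might be sketched like so, assuming (purely for illustration) that the legacy decomposed fields are a Julian day number, second of day, and nanoseconds; offset handling is omitted:

```python
POSIX_EPOCH_JDN = 2_440_588  # Julian Day Number of 1970-01-01

class LegacyTimestamp:
    """Sketch of materializing legacy data under the new scheme: the old
    decomposed fields arrive populated, utcMicroseconds starts as nil/None,
    and the accessor computes and caches it on first use. Field names are
    illustrative, not the actual legacy DateAndTime ivars."""

    def __init__(self, jdn, seconds, nanos):
        self.jdn = jdn          # Julian day number (legacy field)
        self.seconds = seconds  # second within the day (legacy field)
        self.nanos = nanos      # sub-second part (legacy field)
        self._utc_microseconds = None  # nil upon materialization

    @property
    def utc_microseconds(self):
        # Lazily derived from the legacy decomposed fields, then cached.
        if self._utc_microseconds is None:
            days = self.jdn - POSIX_EPOCH_JDN
            self._utc_microseconds = (
                (days * 86_400 + self.seconds) * 1_000_000
                + self.nanos // 1_000)
        return self._utc_microseconds


legacy = LegacyTimestamp(2_440_589, 5, 1_500_000)  # one day in, 5.0015 s
assert legacy._utc_microseconds is None            # nil after materialization
assert legacy.utc_microseconds == 86_405_001_500   # computed on first access
```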
Will definitely have to look into this.
Best, Chris