On Mon, May 26, 2014 at 01:09:06PM -0500, Chris Muller wrote:
Hi Dave, as someone who works with large systems in Squeak, I'm always interested in _storage efficiency_ as much as execution efficiency.
DateAndTime, in particular, is a very common domain element, with a high potential for many millions of instances in a given domain model.
Apps which have millions of objects carrying merely a Date attribute can canonicalize those Dates, and likewise apps with millions of Time objects can canonicalize them.
But LargeIntegers are not easy to canonicalize (e.g., utcMicroseconds). So a database system with millions of DateAndTimes would have to do _two_ reads for every DateAndTime instance instead of just one today, because SmallIntegers are immediate while LargeIntegers require their own storage buffer.
Hi Chris,
I do not have a lot of experience with database systems, so I would like to better understand the issue for storage of large numeric values.
I was under the impression that modern SQL databases provide direct support for large integer data types (e.g. BIGINT in SQL Server), and my assumption was that object databases such as Magma or GemStone would make this a non-issue. Why should a large (64-bit) integer be any more or less difficult to persist than a small integer?
This may be a dumb question, but I am curious.
Thanks, Dave