The read time for loading the image into memory seemed a little high to me, so I wrote a tiny benchmark.
The simple program attached can be used to compare the performance of block reads using read(2) versus mmap(2). It counts the number of A's in the file specified on the command line.
On my G3/400 running MacOS 10.1.2, read and mmap performance seem roughly the same. On a 75MB file, this took between 2.2 and 2.7 seconds to run. (Note that my tests include rebooting to avoid cache hits.)
The interesting thing is that although the read solution spends far more time making system calls (1200ms versus 260ms), the wall-clock time is roughly the same.
Note too, that on my G3/400, reading a 75MB file takes only 1280ms. How does this compare to Squeak's image reading time?
-Eric
Thanks for the file; I took it and ran some other tests.
Reading a 16131972-byte image file takes 347ms using PBReadSync, versus about 279ms using fread, but there are quite a few extra I/O calls to get the header bytes etc. By converting to pure Unix I/O calls I take the image read time from 1,852ms down to about 588ms, which means PBReadSync has a lot of overhead I now avoid. (Mmm, perhaps someone might note that I bet these files all get cached as I'm testing, skewing my read times toward 45MB a second, but 1.3 seconds is still 1.3 seconds.)
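A minimal sketch of the "pure Unix I/O" approach, assuming the bulk of the image is slurped with a single open/fstat/read sequence (the function name is mine, not the VM's):

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Read an entire file into a freshly malloc'd buffer.  One big
 * read(2) covers the bulk of the data instead of many small stdio
 * or PBReadSync requests; the loop only handles short reads. */
static char *read_whole_file(const char *path, size_t *len_out) {
    struct stat st;
    int fd = open(path, O_RDONLY);
    char *buf;
    ssize_t got;
    off_t total = 0;
    if (fd < 0 || fstat(fd, &st) < 0) return NULL;
    buf = malloc((size_t)st.st_size);
    if (!buf) { close(fd); return NULL; }
    while (total < st.st_size &&
           (got = read(fd, buf + total, (size_t)(st.st_size - total))) > 0)
        total += got;
    close(fd);
    *len_out = (size_t)total;
    return buf;
}
```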
Also, this evening after a day with taxes, I changed the code to make the window invisible and then display it on the first real update request. This makes things a bit nicer, since by then all the decisions about size, placement, and contents have been made.
Note: the buffer was mapped with mmapbuf = mmap((caddr_t) 0, len, PROT_READ, MAP_SHARED, file, 0); (the flags argument must include MAP_SHARED or MAP_PRIVATE; passing 0 is invalid).
Then iterating over the 16131972-byte file 512 bytes at a time takes on the order of 64ms. Oh, and watch out for the compiler trying to eliminate your iteration code when it notices you only really need the last value; volatile does have a purpose sometimes...
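The volatile trick mentioned above can be sketched like this (the function name is mine):

```c
#include <stddef.h>

/* Touch a mapped buffer 512 bytes at a time.  Without `volatile`
 * the compiler can see that only the final value of `last` is
 * used and may delete the whole loop; declaring `last` volatile
 * forces every store, so every load actually happens. */
static char touch_pages(const char *buf, size_t len) {
    volatile char last = 0;
    for (size_t off = 0; off < len; off += 512)
        last = buf[off];
    return last;
}
```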
However, if we attempt a memmove into some allocated storage, this takes just as long as doing a normal read into that storage. Mmm, surely there must be a way to exploit this?
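The copy being measured is simply this shape (a hedged sketch; the name is mine):

```c
#include <stdlib.h>
#include <string.h>

/* Copy a mapped file into malloc'd storage with memmove.  The page
 * faults taken while the mapped pages are first touched mean this
 * costs about the same as a plain read(2) into the same buffer. */
static char *copy_mapped(const char *mapped, size_t len) {
    char *dst = malloc(len);
    if (dst) memmove(dst, mapped, len);
    return dst;
}
```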
PS vm_copy gives similar results...
Note: I hope to distribute a real version of 3.2.1 this week, since I've heard little about any issues with the betas.
squeak-dev@lists.squeakfoundation.org