ncellier at ifrance.com
Wed Nov 7 16:41:57 UTC 2007
Paolo Bonzini <bonzini <at> gnu.org> writes:
> > I don't think so. In my experience most streams operate on collections
> > of a few dozen elements so testing data sizes between 5-100 seems
> > totally realistic to me (again, if you have evidence to the contrary I'd
> > like to see it). For example, if I just run a quick:
> > (SequenceableCollection allSubInstances collect:[:c| c size]) average
> > I end up with 45 elements. Now, granted this may not be the average size
> > of internal collections used for streams but since most streams are
> > transient it's hard to get an actual number for it.
> In my experience (it might be a little different in Squeak) most
> collection streams are WriteStreams. Most ReadStreams are file-based,
> and those are longer than 100 bytes.
I mostly agree with Andreas's last mail.
And apologies for over-reacting aggressively toward a master.
Answering me is still an attempt to be constructive, thanks.
Paolo makes a good point too.
The optimized == nil trick is a Character or Byte Stream thing anyway:
- that covers files, and files are long
- but a ReadStream can also be used for short String processing
That is where I must check for a performance drop!
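To make the trick concrete: for byte and character streams, #next answers nil at the end of the stream, so the atEnd test can be folded into the element fetch. A minimal illustration (the variable names are mine):

	| stream ch |
	stream := ReadStream on: 'some short string'.
	[(ch := stream next) == nil] whileFalse:
		[Transcript show: ch asString].

This only works when nil cannot be a legitimate element, hence its restriction to Character/Byte streams.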
Otherwise, I'm just playing with LazyStream ideas, which would of course be more
valuable on large collections...
up to Paolo's
I thought I'd add some other interesting refs, but the Pipe thread was so long...
I would write it entirely with streams:

	select: [:e | e someTest];
	collect: [:e | e square];
	reject: [:e | e otherTest];

with a trick (in-place modification of the receiver) to handle the pipe without
parentheses. Much like a Unix filter: processes are blocks, pipes are streams.
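As a sketch of that in-place trick (the class name PipeStream and these method bodies are hypothetical, not existing Squeak code): each filtering message rewrites the receiver's contents and answers the receiver, so a plain cascade chains the stages, since every cascade message goes to the same receiver:

	ReadStream subclass: #PipeStream
		instanceVariableNames: ''
		classVariableNames: ''
		category: 'Pipes-Sketch'.

	PipeStream >> select: aBlock
		"Keep only elements passing aBlock; reset myself on the result
		 so the next cascade message reads the filtered elements."
		self on: (self upToEnd select: aBlock).
		^self

	PipeStream >> collect: aBlock
		"Transform the remaining elements in place."
		self on: (self upToEnd collect: aBlock).
		^self

	PipeStream >> reject: aBlock
		^self select: [:e | (aBlock value: e) not]

Then something like

	((PipeStream on: #(1 2 3 4))
		select: [:e | e odd];
		collect: [:e | e squared];
		yourself) upToEnd

would answer the piped result without any nesting.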