[squeak-dev] Thoughts on Xtreams
mkobetic at gmail.com
Tue Oct 6 00:31:01 UTC 2015
"Chris Cunnington"<brasspen at gmail.com> wrote:
> This has a functional feel because you cannot stop and examine things,
> because a filter has no state. It's just passing things on. Once it's
> composed, you need to wait until the end to see what happened. As such,
> it can handle infinite strings, as suggested in the SICP.
Debugging streams is definitely an issue that could use attention. To be fair, I don't think it's any worse with Xtreams than it is with the classic streams. Sure, if you have an in-memory stream you can just dive into the underlying collection, but that's the same with Xtreams. Frameworks like Altitude have very good reasons for not doing that, however. Admittedly, that doesn't change the fact that it's still challenging to debug those kinds of streams.
There are some basic tools that could be used in some circumstances. There's DuplicateRead/WriteStream, which can be injected at any point of the stream stack/pipeline to copy whatever flows through it into another destination (Transcript, stdout, a file, etc.). The problem is, you're usually handed a ready-made stack of streams and you shouldn't be making assumptions about its composition.
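The duplicating idea is simple enough to sketch. The original lives in Smalltalk/Xtreams; here is a Python analogue for illustration only (TeeReader is a hypothetical name, not the Xtreams API): a wrapper that sits at one point in a stream pipeline and copies everything read through it into a side-channel sink.

```python
import io

class TeeReader:
    """Wrap a readable stream and copy everything read into a sink.
    A sketch of the DuplicateReadStream idea, not the actual Xtreams API."""

    def __init__(self, source, sink):
        self.source = source
        self.sink = sink

    def read(self, n=-1):
        data = self.source.read(n)
        self.sink.write(data)  # side-channel copy: Transcript, stdout, a file...
        return data

source = io.BytesIO(b"GET / HTTP/1.1\r\n")
log = io.BytesIO()
tee = TeeReader(source, log)
tee.read(5)  # downstream consumer reads as usual; log now holds the copy
```

Because the wrapper forwards reads unchanged, the downstream consumer is oblivious to it, which is exactly what makes this kind of probe injectable anywhere in the stack.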
That, however, is exactly what a competent stream inspector needs to do. Ultimately it needs to break open the encapsulation, carefully step around the internals of whatever stream layer it's digging into without triggering a state change, and excavate whatever relevant bit of info there is. There is a poor man's skeleton of something like that in the printOn:/streamingPrintOn: methods, but it's too bare-bones to be useful in many cases. TBH, I'm not convinced that's the right way to go about it either.
> I think the word here is chunking. Since HTTP/1.1 came out I imagine
> clients have been optimizing for this. Servers? Not so much. I imagine
> that servers have never worked very hard to send data with the fine
> level of granularity equivalent to what clients can receive.
> But I suppose this is really about buffer size and flushing frequency. I
> imagine Seaside could chunk its responses, but instead saves every
> response into one large buffer and then flushes once.
> With Altitude I can put 'self halt' in the middle of a page and watch
> half a page render. The buffer size is set to 1K and flushes when the
> buffer is full, which doesn't take long.
I'd say it's actually the other way around; it's the servers that care about chunking. The protocol requires that the server either specify the size of the response in bytes, or chunk it (I'm deliberately ignoring the HTTP/1.0 zombie). So for any dynamically computed response the server will want to chunk. Otherwise it can't send any bytes until the full response is generated. Note that you may have to generate several copies of the content in various stages of transfer encoding in order to get the correct final byte size. Stream processing + chunking is really the only sane way to handle that, IMO.
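The framing that makes this possible is small: each chunk carries its own hex-encoded length, and a zero-length chunk terminates the body, so the server never needs to know the total size up front. A minimal Python sketch (illustrative helper names, not any particular server's API):

```python
def encode_chunk(data: bytes) -> bytes:
    """One HTTP/1.1 chunk: hex size, CRLF, payload, CRLF (RFC 7230, section 4.1)."""
    return b"%x\r\n%s\r\n" % (len(data), data)

def finish() -> bytes:
    """The zero-length chunk that terminates the body."""
    return b"0\r\n\r\n"

# The server can flush each fragment as soon as it's generated:
body = encode_chunk(b"<html>") + encode_chunk(b"<body>partial...") + finish()
```

Each `encode_chunk` call can be written to the socket immediately, which is what lets a dynamically generated page start arriving at the browser before the server has finished computing it.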
Browsers actually don't care whether the response is chunked or not; they will partially render whatever fragment of the page they have as soon as they get it. Granted, I'm handwaving over the much more complex reality of today's web pages. Regardless, it's the server side where chunking enables sending incomplete response fragments.