[squeak-dev] What is the task of NullEncoder?

Marcel Weiher marcel.weiher at gmail.com
Mon Oct 2 12:09:37 UTC 2017

> Hi Marcel, Tobias,
> I perfectly understand what an Encoder is. OK, I said it transforms streams instead of filtering them because I'm not academic enough ;).
> I agree that the pattern has a lot of potential, though parallelism still is a problem in Smalltalk processing model.

For the potential, see my other message, which was delayed because I sent it from the wrong account; it should have come before my answer to you.  The model is mostly synchronous, so no parallelism, but some level of concurrent processing can be added and is useful for asynchronous programming (network stuff, for example).

> But:
> In a sense, the canvas did handle a stream of graphics instructions (draw a line, a circle, fill a rectangle etc…).


> Even if we don't really reify those instructions and write (canvas write: (Line from: ... to: ....)), but rather use a more direct (canvas drawLineFrom:to:...) send.
> By making it an Encoder, it now handles both a stream of graphics instructions and a stream of objects (that can appropriately convert themselves to a stream of graphic instructions thru double dispatching).
> This is a metonymy.

Not really.  Having both a message-based and an object-based interface is somewhat common in this model, with the double dispatch deconstructing objects into sets of message-sends (with further object parameters) where necessary.  But yes, that’s always a bit of a tension.
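To make the double dispatch concrete, here is a minimal sketch of how the two interfaces meet. The selector names (writeObject:, drawOnCanvas:) are illustrative, not necessarily the actual Squeak protocol:

```smalltalk
"Object-based interface: the canvas lets the object deconstruct itself."
Canvas >> writeObject: anObject
	^ anObject drawOnCanvas: self

"Second dispatch: the object picks the message-based entry point it needs."
Line >> drawOnCanvas: aCanvas
	^ aCanvas drawLineFrom: start to: end

"Message-based interface: the concrete drawing primitive."
Canvas >> drawLineFrom: aPoint to: anotherPoint
	"render the line..."
```

So a stream of objects reduces, via double dispatch, to the same message-sends a direct client would have made.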

> I will repeat why it does not sound legitimate to me:
> First, a metonymy is obscuring responsibilities.
> Either that's an unnecessary indirection, because objects already know how to draw themselves, and it composes well already, because an object will ask the objects that it is composed of to render themselves.

Of course the Canvas (and presumably other parts of the system) already follow this kind of pattern, as a pattern.  The “Encoder” (I have struggled with naming, because it combines the role of a filter and a stream and some visitor-ish-ness) formalizes this pattern.  The benefits are the usual ones:  pluggability, documentation (if something is a subclass of X, I know what to expect), reuse, lower cognitive overhead, blah blah.  But that would mean more widespread adoption, and considering that this stuff has lingered in the image for 15+ years, the need may just not be there…
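The filter/stream idea the Encoder formalizes can be sketched in a few lines. This is a hedged sketch, not NullEncoder's actual definition; the class name ExampleFilter and its selectors are made up for illustration:

```smalltalk
"Each filter holds a target and forwards (possibly transformed) output,
 so filters compose into a chain: source -> filterA -> filterB -> sink."
Object subclass: #ExampleFilter
	instanceVariableNames: 'target'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Example'

ExampleFilter >> target: anEncoderOrStream
	target := anEncoderOrStream

ExampleFilter >> writeObject: anObject
	"Pass through unchanged; subclasses transform before forwarding."
	target writeObject: anObject
```

Pluggability then just means swapping which subclass sits at which position in the chain.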

> Or once we want to use it as a filter for a stream of arbitrary objects, we get a problem of composability (understand composing as giving a specific layout to the objects we want to render).

Layout is not a responsibility of the canvas; the canvas needs to reproduce a layout that’s already been created.

> So we have to give greater responsibilities to the dumb canvas for this composition to happen.
> I showed that the only place where we make use of the double dispatching technique exhibits this problem of composability (we can't render in PostScript a Morph composed of BookMorph, because we can't put several pages into a page…).

When I left it, a BookMorph would behave appropriately when embedded:  it would “print” the visible page.  Is that no longer the case?  And having the “encoder” stick around is the way of remembering the top-level context even while adapting to specific objects in a nested hierarchy.  Really quite similar to how super and self interact.

> --------
> Note about text rendering: we generally have to recompose the layout for different targets (for example, if we want to render a BookMorph on A4 paper with specific margins...). For this composition to take place, we need to know the font metrics, and use some specific rules for margins, alignment, tab stops, etc... That's what a CompositionScanner does.
> I fail to see where those PostScript font metrics are in Squeak?

If you recompose/reflow a text when printing, that’s a serious bug.  This was a really long time ago, but IIRC fonts and font metrics were a significant problem.  

> Rendering on PostScript is not an exception. If we are able to share some fonts then we can omit the composition step for most simple cases (like generating an EPS figure). But if we start with a bitmap font, rendering in PostScript will be very rough indeed. 

A “true WYSIWYG” approach would have been to encode the screen fonts as Type 3 bitmap fonts, or just dump the whole bitmap.  But that would have sucked and wasn’t what Dan wanted.  Dan wanted to have a “nice” printed version of the paper that had been composed in a BookMorph.  Another approach would have been to have true printer-compatible Type 1 fonts in Squeak. Yeah, right :-)

So I hacked it:  I chose a set of PostScript fonts that approximated the look and metrics of the screen fonts as closely as possible, concentrating on the ones used in the paper.  I also added a jshow command that would justify text on the printer, because the metrics obviously wouldn’t match perfectly.  For justified text, that’s very noticeable.

The whole thing is decidedly “best effort” and was produced with a very specific goal and under time constraints.  To do it right would have required massive changes to Squeak’s graphics subsystem, changes that were very much out of scope, and probably still are.  That said, I did produce a version of Squeak that rendered its screen via roughly this mechanism on NeXT's Display PostScript. It was *epic*, but also pretty wonky, because of course the metrics didn’t match.

> For this reason, generating the PostScript in VW smalltalk goes thru a higher level indirection, the Document, which is somehow responsible for composing the layout for an arbitrary input stream of objects.

> It has to cooperate with a GraphicsContext (the equivalent of our Canvas) for transforming fonts to the nearest PostScript equivalent, measuring, etc…

Transforming fonts is done on the class side of PostscriptCanvas, measuring is done in the printer when needed.  Again, the additional infrastructure that would have been required to do this right would have been substantial, with an ongoing support burden and licensing headaches (AFM files etc.).  And of course today the “correct” answer is to use the platform’s device-independent rendering API, which will take care of these sorts of problems.  But that’s not the spirit of the system, last I checked.
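Roughly, the class-side font transformation can be thought of as a lookup from screen font family to the nearest PostScript font name. The selector names and mapping below are illustrative guesses, not the actual PostscriptCanvas code:

```smalltalk
"Hypothetical sketch: map a screen font to a nearby PostScript font name,
 falling back to Helvetica when no mapping is known."
PostscriptCanvas class >> postscriptFontNameFor: aFont
	^ self fontMap
		at: aFont familyName
		ifAbsent: ['Helvetica']

PostscriptCanvas class >> fontMap
	^ Dictionary new
		at: 'NewYork' put: 'Times-Roman';
		at: 'Geneva' put: 'Helvetica';
		at: 'Monaco' put: 'Courier';
		yourself
```

With only a name mapping and no AFM metrics in the image, exact measuring has to be deferred to the printer, which is why something like jshow is needed on that side.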

> VW has a PostScriptContext which is a specific GraphicsContext for low level rendering instructions, but that's not where everything takes place. The Document would handle the DSC for example (that sounds quite logical no?).

Document handling DSC?  That sounds wrong, but I am not familiar with the details.

> Also note that a Document is not conceptually bound to a Postscript target, it could be anything, even LaTeX or Word backend, in which case it could eventually delegate the composition phase (which could work for a flow of text and small graphics, but would be more delicate for math expressions, tables and figures though).

Yeah, again that’s at a whole different level.  


