'Real' zooming & panning

Andreas Raab andreas.raab at gmx.de
Wed Nov 5 07:54:21 UTC 2003


> > I don't disagree with how it should look. I'm just saying 
> > that it's not the job of the Canvas to do this. If the Morph
> > wants to draw itself differently than just scaling itself,
> > then it's the Morph's responsibility to do so.
> 
> Are you sure?  What if I wanted to render a Morphic world as though it
> were painted, or drawn with a pencil?  One approach would certainly be
> to have each morph take responsibility for drawing itself, perhaps
> with methods like #drawWithPencilOn:.  Another approach would be to
> substitute in a canvas that would override the line drawing/area
> filling methods to implement the desired style.  The latter has the
> benefit of localizing changes in one class instead of scattering them
> across the Morph hierarchy.

This is mostly a question of framework design and, primarily, of anticipated
use. If we were making a drawing program where we needed fine-grained control
over the way in which objects are drawn in "stencil mode", then having a
#drawWithPencilOn: method would be entirely reasonable (one example would be
if we modeled different kinds of paper). However, given that we _typically_
are not interested in this in Morphic (treating most of the objects as "fully
self-drawn" rather than composited onto some simulated paper), your pencil
example would make more sense in the canvas.
[Completely OT, but I once wrote a "BluePrintCanvas" for Alan's demos which
drew all of the objects as blue outlines - that's pretty much what
you're talking about.]
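
For concreteness, here is a minimal sketch of the canvas-side approach - a
canvas subclass that turns fills into blue outlines. The class shape and the
particular Canvas methods overridden here (#fillRectangle:color: and
#fillOval:color:borderWidth:borderColor:) are only meant to illustrate the
idea; the real BluePrintCanvas overrode a good deal more of the drawing
protocol than this:

  FormCanvas subclass: #BluePrintCanvas
      instanceVariableNames: ''
      classVariableNames: ''
      poolDictionaries: ''
      category: 'Morphic-Demos'

  BluePrintCanvas>>fillRectangle: aRectangle color: aColor
      "Ignore the requested fill and draw a one-pixel blue outline instead."
      self frameRectangle: aRectangle width: 1 color: Color blue

  BluePrintCanvas>>fillOval: aRectangle color: aColor borderWidth: bw borderColor: bc
      "Same idea for ovals: outline only, always in blue."
      super fillOval: aRectangle
          color: Color transparent
          borderWidth: 1
          borderColor: Color blue

Morphs keep drawing themselves exactly as before; only the canvas they are
handed decides how those drawing requests end up on the screen.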

So all in all, it really depends on how much control you want in each place.
Both approaches can make sense, depending on where exactly you need the
fine-grained control and where you can substitute "generic behavior" that
will work "well enough". The rule of thumb is (as always) that simple things
should be simple and complex things possible. So if a Morph wants to know
whether it is being drawn in a scaled environment, it should absolutely be
able to inquire about that property and draw itself differently (the same
goes for your pencil and my blueprint example). Fortunately, all of these
things _are_ possible (though some of them require more effort than others).
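
Just as a sketch: assuming a morph had some way of asking for the cumulative
scale it is currently displayed at - call it #effectiveScale, which is not
part of Morph today; one possible implementation is sketched further down -
a handle-like morph could switch its appearance in #drawOn: like this:

  MyHandleMorph>>drawOn: aCanvas
      "Draw a simplified representation when shown at less than half
       size; otherwise draw normally. #effectiveScale is hypothetical."
      self effectiveScale < 0.5
          ifTrue: [aCanvas fillRectangle: self bounds color: Color lightGray]
          ifFalse: [super drawOn: aCanvas]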

> Of course, other complications arise... if the size that a handle
> is drawn is determined by the canvas, then the canvas must become
> involved in the event handling process to decide which morph a click
> should be handled by. 

Sigh. That's just one of the places in which Morphic is severely broken. The
inability of a morph to react to changes that affect its appearance to the
user is a real problem. If not for that, it would be trivial for the handles
to resize themselves whenever their parent is scaled. There may be a way of
doing this today using some crude workaround, but not having this information
easily accessible is a real flaw in Morphic.

That said, NO, the canvas must NOT, NOT EVER become involved in event
handling. The canvas is a device that supports (essentially remote) drawing
operations, and all of the state that is available in the canvas during
drawing should be (trivially) accessible through the morphic hierarchy.
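
For example, the #effectiveScale used in the sketch above could be derived
from the morph hierarchy itself rather than from the canvas, by accumulating
the scale of any enclosing TransformationMorphs. Again only a sketch (it
assumes TransformationMorph answers its scale via #scale and ignores
non-uniform transforms), but it shows that the information lives in the
hierarchy, not in the drawing device:

  Morph>>effectiveScale
      "Answer the cumulative scale applied to the receiver by enclosing
       TransformationMorphs; 1.0 if there are none."
      | scale m |
      scale := 1.0.
      m := self owner.
      [m notNil] whileTrue: [
          (m isKindOf: TransformationMorph)
              ifTrue: [scale := scale * m scale].
          m := m owner].
      ^ scale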

Cheers,
  - Andreas



