Morphic Design Philosophy (ConnectableMorph?)

Paul Fernhout pdfernhout at kurtz-fernhout.com
Sat Feb 26 18:11:36 UTC 2000


Stefan Matthias Aust wrote:
> I was more referring to the fact that the class Morph alone has more than
> 500 !!! methods.  How do players and morphs interact?  Did you ever try
> to understand how the HandMorph (229 methods!) works and interacts with
> other morphs?  Did you notice that (probably because of lacking
> documentation) people solve common problems like sizing submorphs of a
> resized morph in at least four different ways?
> 
> You're right, you can figure out the basics yourself but the problems start
> with all the details.  For example, I'm pretty sure that the balloon help
> stuff is still too complex.  Did anybody notice that you cannot have one
> help per hand as it IMHO should?

This is exactly what I mean when I think of Morphic as "bloated". There
is so much here, compared to other Smalltalks as I remember them, that I
find it difficult to know where to start.

If the clutter is unavoidable, one possibility is that Morphic could be
divided into "Proto" classes and intended-use classes with the current
names. (The Delphi Visual Component Library is organized somewhat like
this.) The intended-use classes would have the common API methods from
the Proto class reimplemented as just passing them to super. So,
"ProtoMorph" would have endless clutter, and "Morph" would just have API
methods.  This might create some confusion and delays of its own of
course. One would have to decide whether ProtoListBox inherits from
Morph or ProtoMorph.

Of course, one could use wrappers, like VisualWorks, but I actually
found that to be the most awful part of that GUI system, so I wouldn't
recommend it.

One could also use "private" methods like in VisualWorks or VisualAge,
but I don't like that for various reasons (mostly because of the
increased complexity of browsing).

But I think the best solution is to refactor or do some rearchitecting
like you suggest with the more limited enclosure model. Obviously,
overfactoring can make it hard to understand too...

> I still think that you can do things a little bit simpler than in morphic
> and still get the same or similar effect.  Morphic is great to learn from.

I agree, although this is more a gut feeling. I never felt this level of
complexity in, say, WindowBuilder/Pro or ST/V.

> >For example, Stefan proposes that not all Morphs should be
> >able to have sub-morphs.  Is this a good idea?  bad?
> 
> Well, as you probably assume, I think it's a good idea because there are a
> few, but common, classes which are end nodes in the Morph hierarchy.
> 
> Let's take a StringMorph as part of a ListMorph as an example.  It doesn't
> need all the logic for supporting submorphs.  It's not only the space taken
> by the submorphs instance variable but also the code in calculating the
> bounds which has to take into account that submorphs may be outside the
> morph's bounding box.
> 
> Let's also look at Morphs like system windows or buttons.  They look like
> they could use the general submorph framework.  But try what happens if you
> add more submorphs to a button. It doesn't work.  It knows that it has
> exactly one submorph.  For windows, the subpanes (aka submorphs) are also
> stored explicitly so you don't mix them up with the rectangle morphs for the
> title bar.  Here, I think, morphs without the general framework but with
> code for their specific needs are better.
> 
> If you want to add more than one morph to a button, use a Panel or Box morph
> which would make one composite morph out of simple morphs.

Excellent analysis.

However, I do like the idea of arbitrary composition without requiring
special containers. I have done this before in simulations by attaching
things with "connections".  This is a non-hierarchical concept. So, for
example, consider a system where you can link a StringMorph to a
ListMorph, but not by embedding one in the other, or by embedding both
in a container.  The model I like is that you take a "connector" and
link the two together, like physically bolting a pipe between the two.
When you move a widget, all connected widgets move with it. If any
widget is "pinned" to the background or an immovable object, none of
them can move. The pipe can be made "invisible" (except in some design
mode), so you could drag around a list box whose label is halfway across
the screen, with the connection not obvious until you moved one or the
other.

I am not sure if this connection idea could totally replace containment.
It creates an extra step of having to link things, rather than drop one
on another. However, it is very general. If it could not replace
containment, then we could have your model of containment with this
model of connection as a supplement.

The pseudo code for connections goes something like this:

I, a visible object, want to move due to a mouse drag or a timer event.
Ask everyone I'm connected to if they can move (recursively with a local
flag or passed collection to prevent looping, or iteratively if a
network collection is already maintained).  Reasons things can't move
might be that someone's mouse is holding them or they are pinned to the
background, or their neighbor can't move. If everyone can move, then
tell everyone the delta and let them move.
This really requires double buffering to look nice. 
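
A minimal Smalltalk sketch of that protocol, assuming a hypothetical
ConnectableMorph with a "connections" collection and an "isPinned"
flag (all of these selectors are invented for illustration):

  ConnectableMorph>>dragBy: delta
    "Entry point: move the whole network only if every member agrees."
    (self canMoveBy: delta visited: IdentitySet new)
      ifTrue: [self moveBy: delta visited: IdentitySet new]

  ConnectableMorph>>canMoveBy: delta visited: visited
    "Answer whether this morph and everything connected may move."
    (visited includes: self) ifTrue: [^true]. "already asked; prevents looping"
    visited add: self.
    self isPinned ifTrue: [^false]. "being held by a hand could be checked here too"
    self connections do: [:each |
      (each canMoveBy: delta visited: visited) ifFalse: [^false]].
    ^true

  ConnectableMorph>>moveBy: delta visited: visited
    "Everyone agreed, so actually translate the whole network."
    (visited includes: self) ifTrue: [^self].
    visited add: self.
    self position: self position + delta.
    self connections do: [:each | each moveBy: delta visited: visited]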

More sophisticated versions might submit a delta for approval to the
network and get back the same or a smaller delta (if say a part would be
somehow blocked by some boundary) or even a larger delta (if a sub-part
was being pulled in somewhere based on its proximity to some area). This
approach could also be generalized to approving rotation around an axis.
Connections could also sometimes be made elastic instead of rigid, and
then they would slowly drag other parts along in a bouncy way. Resizing
these networks of objects would have to be finessed somehow, by allowing
the objects to move independently of each other temporarily.
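
One possible shape for that negotiation, again with invented
selectors: each morph answers the delta it is willing to accept, and
the most restrictive answer wins.

  ConnectableMorph>>approveDelta: delta visited: visited
    "Answer the delta this part of the network will actually accept."
    | approved |
    (visited includes: self) ifTrue: [^delta].
    visited add: self.
    approved := self constrainDelta: delta. "hypothetical hook, e.g. clipping against a boundary"
    self connections do: [:each |
      approved := each approveDelta: approved visited: visited].
    ^approved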

A similar recursive approach can be used to determine a bounding box for
a network, although in practice for hit detection you can just have
every widget in the system do hit detection in a layered approach. Of
course, this may also be less efficient than a hierarchical set of
bounding boxes and more limited submorph event dispatching, and leads to
issues of parts of two "things" potentially being interleaved in odd
ways when they overlap. This might be resolvable with a sorting system
based on "thing" closeness to the top of some list. Of course, this
assumes you don't mind several widgets receiving the same mouse event --
they could also just decide amongst themselves who wants it.
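
For the bounding box itself, a sketch (Rectangle merge: is real Squeak;
the rest is again hypothetical):

  ConnectableMorph>>networkBoundsInto: visited
    "Answer the bounding box of the whole connected network."
    | box |
    visited add: self.
    box := self bounds.
    self connections do: [:each |
      (visited includes: each) ifFalse:
        [box := box merge: (each networkBoundsInto: visited)]].
    ^box

A caller would start it off with something like
"aMorph networkBoundsInto: IdentitySet new".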

Every visible object belongs to a "network", defining a "composite
thing". Abstracting this, "network objects" could be useful for
broadcasting information to all the objects in a connected network --
for example, in a network of components that make up an MP3 player, an
"off switch" might broadcast to the network, and all connected
components might become inactive. Bookkeeping of network objects is a
bit complex. When you break or make a connection, you need to check
whether this splits a network or merges two. When you merge two
networks, you need to change one set of objects to point to the other's
network object. Also, when you split a network into two, you need to
create a new network object and assign some objects to the new network
object.
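
A sketch of that bookkeeping, assuming a hypothetical NetworkMorph
that keeps a "members" collection and each ConnectableMorph points
back at its network (every selector here is invented):

  ConnectableMorph>>connectTo: other
    "Making a connection may merge two networks into one."
    self connections add: other.
    other connections add: self.
    self network == other network
      ifFalse: [self network absorb: other network]

  NetworkMorph>>absorb: otherNetwork
    "Take over all the members of otherNetwork."
    otherNetwork members do: [:each |
      each network: self.
      self members add: each]

  ConnectableMorph>>disconnectFrom: other
    "Breaking a connection may split one network into two."
    self connections remove: other.
    other connections remove: self.
    (self stillReaches: other) "a reachability walk like canMoveBy:"
      ifFalse: [NetworkMorph newForEverythingReachableFrom: other]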

It is not clear to me yet whether there would be less code in total for
connecting than for embedding. Networks do simplify the dependency
programming model -- anybody on the network can get a message. This
makes it easy to set up component interactions, even if you do not know
what components will be added to the network. Regular Smalltalk
dependencies can do this too -- this is just a twist on that idea.
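
The broadcast itself could be as small as this (same invented classes):

  NetworkMorph>>broadcast: anEvent
    "Deliver an event to every member of this connected network."
    self members do: [:each | each network: self event: anEvent]

  ConnectableMorph>>network: aNetwork event: anEvent
    "Default: ignore network events; interested morphs override this."
    ^self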

One simple extension is "invisible" data sources. Delphi has something
like this for defining database access via invisible dataset components.
You could get really fancy and have meta-connectors as bridges or
routers that connect visible networks without merging them logically.
This adds other complexity but may make intra-network event dispatching
and related programming easier. For example, an off switch attached to
the MP3 player network might only turn off components on its network,
but several networks making up a complete stereo could still move
together. This could also be handled by broadcasting more sophisticated
events on a larger network -- like #MP3Off, or #off with a component
reference, instead of just #off.

I first developed this sort of connection/network approach for a
simulation of self-replicating robots around 1987. The "mover" unit
would push around all the parts attached to its network. Parts would
only be active if there was a battery attached to the network.
Controllers could only send commands to parts on the same network, but
could send instructions of any component they could access. Sam Adams
keeps suggesting I implement this in Morphic, to have windows with arms
that steal bits from other windows, etc. So perhaps, in the interest of
giving Morphic a fairer shake, I might try to add connectors to
"Morphic" to see how hard it is.

==== example of using this hypothetical connection-based system ====

Here is an example of programming such a system if it existed in Morphic
for a button and text, similar to David Smith's excellent example posted
a while earlier, and "borrowing" some of his code. Let's assume all
Morphs inherit from "ConnectableMorph", and we have a few other neat new
classes like "HiddenModelMorph", "NetworkMorph", "ConnectionMorph", and
"ConnectionApplicationBuilder".

Here is a simple ad-hoc approach.

First, you need to make the visible components in Morphic. You make a
new button. You make a new text. You make a new "ConnectionMorph"
connector and snap it between the two of them. You decide you want a
background, so you make a rectangle. You slide that rectangle under the
button and text field. You make another connector and attach the
rectangle to one of the components (it doesn't matter which). Now you
have a physical thing you can drag around. You hide the connectors by
clicking on one of the objects and selecting "hide network". 

Second, you need to add behavior. In this case, you want to make a
button press open a browser on the class named in the text field.  

Where does this behavior go? There are several possibilities. Here is
one.

You inspect the text morph and make sure its name is "#text". 
You inspect the button and make it evaluate something like this block
when clicked:
  [:widget | | aClass |
   aClass := Smalltalk
     at: (widget network find: #text) contents string withBlanksTrimmed asSymbol
     ifAbsent: [nil].
   aClass isNil ifFalse: [Browser newOnClass: aClass]]

Note that this implies Morphs that can be inspected and have fields
that can evaluate code on events (or, in the next section, send events
to the network).
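
A sketch of what that might look like inside such a morph, assuming a
hypothetical "clickAction" instance variable that holds the block
(handlesMouseDown: and mouseUp: are the normal Morph event hooks):

  ConnectableMorph>>handlesMouseDown: evt
    "Accept mouse events whenever a click action has been installed."
    ^clickAction notNil

  ConnectableMorph>>mouseUp: evt
    "Evaluate the user-supplied block, handing it the widget itself."
    clickAction isNil ifFalse: [clickAction value: self]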

Here is a more complex, class-based approach defining the behavior in a
method.

Build the same three component system as above.
You make a fourth "hidden" widget that represents an instance of a new
class, called "ButtonAndTextViaConnections" inheriting from
"HiddenModelMorph". 
You attach that hidden widget to any of the three other components via a
connection. 
You then add two methods something like:

ButtonAndTextViaConnections>>network: network event: anEvent
  anEvent type == #click ifTrue: 
    [self launchClassBrowser: (network find: #text) contents].
  
ButtonAndTextViaConnections>>launchClassBrowser: name
  Browser newOnClass: 
     (Smalltalk at: name string withBlanksTrimmed asSymbol 
        ifAbsent: [^self])

(It's not clear "network" has to be an argument, because the component
would have such an aspect already if it inherited from
"ConnectableMorph".)

You inspect the text morph and make sure its name is "#text". 
You inspect the button and make it send a #click message (to the
network) when clicked.
You go to any component in the network, and pick "save network in class"
and you select "ButtonAndTextViaConnections" and a method #specification
is added to that class. By default, the superclass HiddenModelMorph will
use this specification to make these morphs when you do an open on the
class. 
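
How the superclass might use that saved specification, as a sketch
(assuming #specification ends up on the class side; the builder
message is of course hypothetical):

  HiddenModelMorph class>>open
    "Rebuild the saved network of morphs and put it in the world."
    | model morphs |
    model := self new.
    morphs := ConnectionApplicationBuilder
      buildFrom: self specification
      model: model.
    morphs do: [:each | each openInWorld]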

The difference between this hypothetical system and the Morphic example
David set up is that without modifying the button, or
"ButtonAndTextViaConnections", I could now add another "hidden
component" connected to the network called ClickLogger, which had this
code:

ClickLogger>>network: network event: anEvent
  anEvent type == #click ifTrue: 
    [Transcript show: 'a #click was sent'; cr.].

So, this approach is more modular and more easily extensible than the
current Morphic event system. I could disconnect this click logger and
reattach it to some other Morphic application at any time. I believe
extending Dave's example would entail modifying the "ButtonAndText"
class further: establishing a new dependency on the button, modifying
the buttonPressed method, or creating an ad-hoc dispatcher similar to
the network idea. The network paradigm generalizes this ad-hoc adding
and removing of components which can communicate with each other.

Granted, though, this connection approach would require modifying
ButtonAndTextViaConnections by saving a new specification if I wanted
this change to be reflected when a new ButtonAndTextViaConnections was
opened.  It would also fail if I connected two such networked morphs
together, as either button would launch a browser using the contents of
only one of the texts. (This might be solved by an alternative kind of
"insulating" connector, by the more specific events mentioned above, or
by some other approach, perhaps setting subnetwork ids when parts are
created which could be checked during event handling.)

Here is a hypothetical non-GUI builder approach using this sort of
connection/network system.

Evaluated in a workspace:

  | builder |
  builder := ConnectionApplicationBuilder
     with: #button
     with: #text
     with: #(rectangle enclosing).
  builder component: #button on: #click send:
     [:widget | | aClass |
      aClass := Smalltalk
        at: (widget network find: #text) contents string withBlanksTrimmed asSymbol
        ifAbsent: [nil].
      aClass isNil ifFalse: [Browser newOnClass: aClass]].

The first statement creates a builder that uses a (hypothetical) layout
manager to build the morphs from types in common sizes. The second
statement defines the button behavior.

Of these three approaches, I would tend to use the middle one for
production work, because anonymous blocks for actions tend to be harder
to maintain than methods.
I would occasionally use the last one for dialogs, but I would tend to
have the blocks send messages to classes or named singleton instances
(again, for maintainability, since blocks otherwise need to be replaced
to get the new behavior when the dialogs are running). I would tend to
use the first approach for exploratory prototyping.

Comments appreciated, especially if I am reinventing the wheel. I
haven't looked at the latest Morphic scripting, so perhaps that supports
something like this? Also, I haven't looked at "ThingLab" in a long time
-- is this similar to it?

-Paul Fernhout
Kurtz-Fernhout Software 
=========================================================
Developers of custom software and educational simulations
Creators of the Garden with Insight(TM) garden simulator
http://www.kurtz-fernhout.com
