Canvas architecture

Michael van der Gulik mikevdg at gmail.com
Thu Jan 31 20:51:00 UTC 2008


On Feb 1, 2008 7:21 AM, Igor Stasenko <siguctua at gmail.com> wrote:

> On 31/01/2008, Bert Freudenberg <bert at freudenbergs.de> wrote:
>
> > I am beginning to understand your point :) Yes, having that power in
> > the base system would be cool. I still think it can be implemented on
> > latest-gen OpenGL hardware (which can do the non-linear transform and
> > adaptively tessellate curves to pixel resolution) but that then would
> > be just an optimization.
> >
>
> What I'm against is binding the rendering subsystem to specific hardware.
> There should be a layer that offers rendering services to the
> application, and a number of layers to deliver graphics to device(s).
> Ideally, it should be able to render itself using any device:
> a screen, a printer, or a remote (networked) canvas.
> There can also be different options in what the rendering medium is:
> it's wrong to assume that the rendering surface is planar (it could be a
> 3-D holo-projector, for instance).
> What is hard is to design such a system to be fast and optimal and
> still generic enough to render anywhere.
>


For the holo-projector example, you need "architecture". For example,
consider this ASCII-art layered architecture for a GUI:

        Application
             |
        ToolBuilder
        /          \
 2-D Widgets    3-D Widgets
      |              |
    Canvas     OpenGL or something
      |
 BitBlt, Cairo, etc.
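The layering above can be sketched in code (Python here purely for illustration; the class names like BitBltCanvas and Button are made up, not an actual proposal): widgets talk only to an abstract Canvas protocol, and the backend underneath is swappable.

```python
from abc import ABC, abstractmethod

class Canvas(ABC):
    """Abstract drawing surface; widgets never see the backend."""
    @abstractmethod
    def draw_rect(self, x, y, w, h): ...

class BitBltCanvas(Canvas):
    """Hypothetical bitmap backend standing in for BitBlt.
    Records drawing operations instead of touching real pixels."""
    def __init__(self):
        self.ops = []
    def draw_rect(self, x, y, w, h):
        self.ops.append(("rect", x, y, w, h))

class Button:
    """A 2-D widget: draws itself via the Canvas protocol only,
    so it works unchanged on any backend."""
    def __init__(self, x, y, w, h):
        self.bounds = (x, y, w, h)
    def draw_on(self, canvas):
        canvas.draw_rect(*self.bounds)

canvas = BitBltCanvas()
Button(10, 10, 80, 24).draw_on(canvas)
```

A Cairo or OpenGL backend would slot in as another Canvas subclass without the widget layer noticing.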

Of course, there's a lot more to it. I believe (and I'm putting words in
Juan's mouth here) that Morphic 3 is primarily a 2-D GUI.

In terms of hardware support, the Canvas class (currently used by Morphic
for drawing everything) needs to be rethought. I've got a preliminary brain
dump here: http://gulik.pbwiki.com/Canvas. Morphic 2 (i.e. what's in Squeak
now) isn't very smart about how it draws things, so it's very slow, even
though BitBlt is capable of a lot more. The underlying layers of the
architecture (BitBlt particularly) also aren't smart about rendering. The X
Window System implementation of Squeak, for example, (AFAIK) only uses a
single bit-mapped "window", although the X Window System can do a lot more,
such as vector graphics and multiple windows.

I suspect that the VNC implementation doesn't cache bitmaps on the client,
although this is pure speculation.

I would change Canvas by:

- Allowing a canvas to have movable sub-canvases. These would map 1:1 to
"windows" (i.e. drawable areas without borders or title bars) in the X
Window System, cached bitmaps in VNC, or display lists / textures in OpenGL.
These could be moved around the screen cheaply by changing only the location
of the sub-canvas.

- Canvases could be implemented as bitmaps or as vector graphics / display
lists; the application doesn't need to know which implementation is actually
used.

- Introduce a "needsRedraw" system of some sort. A Canvas implementation may
or may not cache its contents (as a bitmap or a vector graphics / display
list). Various implementations may discard the cached contents at times, or
perhaps not cache content at all.

- Use micrometers rather than pixels as the unit of measurement and provide
a "pixelPitch" method to return the size of a pixel. For example, my screen
has a pixel pitch of 282 micrometers. A 600dpi printer would have a pixel
pitch of around 42 micrometers. You could use a SmallInteger to store
micrometer values.

- Introduce, somehow, an event system closely coupled to a Canvas (because
some events have coordinates relative to a canvas).

- Somehow support remotely cached bitmaps. I haven't thought about this yet.
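To make the list above concrete, here's a rough sketch (Python purely for illustration; every name, such as um_to_pixels and add_sub_canvas, is hypothetical and not a proposed protocol) of a canvas measured in micrometers, with movable sub-canvases and a needsRedraw flag:

```python
class Canvas:
    """Hypothetical canvas: positions in micrometers, movable
    sub-canvases, and a needsRedraw flag so backends can cache
    or discard their contents."""
    def __init__(self, pixel_pitch_um=282):   # ~90 dpi screen
        self.pixel_pitch_um = pixel_pitch_um
        self.sub_canvases = []
        self.needs_redraw = True

    def um_to_pixels(self, um):
        # Convert a micrometer coordinate to the nearest device pixel.
        return round(um / self.pixel_pitch_um)

    def add_sub_canvas(self, x_um, y_um):
        # A sub-canvas is its own canvas plus an offset in the parent;
        # it would map to an X window, a VNC bitmap, or a GL texture.
        sub = {"x": x_um, "y": y_um,
               "canvas": Canvas(self.pixel_pitch_um)}
        self.sub_canvases.append(sub)
        return sub

    def move_sub_canvas(self, sub, x_um, y_um):
        # Moving changes only the offset; the sub-canvas's cached
        # contents stay valid, so only the parent must recomposite.
        sub["x"], sub["y"] = x_um, y_um
        self.needs_redraw = True

screen = Canvas(pixel_pitch_um=282)
window = screen.add_sub_canvas(10_000, 20_000)  # 10 mm, 20 mm in
screen.move_sub_canvas(window, 50_000, 20_000)  # cheap move
```

A 600 dpi printer would just be Canvas(pixel_pitch_um=42), with the same drawing code on top of it.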

Gulik.

-- 
http://people.squeakfoundation.org/person/mikevdg
http://gulik.pbwiki.com/


More information about the Squeak-dev mailing list