Canvas architecture

Igor Stasenko siguctua at gmail.com
Fri Feb 1 13:03:13 UTC 2008


On 31/01/2008, Michael van der Gulik <mikevdg at gmail.com> wrote:
>
>
> On Feb 1, 2008 7:21 AM, Igor Stasenko <siguctua at gmail.com> wrote:
> >
> > On 31/01/2008, Bert Freudenberg <bert at freudenbergs.de> wrote:
> >
> > > I am beginning to understand your point :) Yes, having that power in
> > > the base system would be cool. I still think it can be implemented on
> > > latest-gen OpenGL hardware (which can do the non-linear transform and
> > > adaptively tesselate curves to pixel resolution) but that then would
> > > be just an optimization.
> > >
> >
> > What i'm against, is to bind rendering subsystem to specific hardware.
> > There should be a layer, which should offer a rendering services to
> > application, and number of layers to deliver graphics to device(s).
> > In perfect, it should be able to render itself using any device:
> > screen or printer or remote (networked) canvas.
> > There also can be a different options in what a rendering media is:
> > it's wrong to assume that rendering surface is planar (it can be a 3D
> > holo-projector, for instance).
> > What is hard, is to design such system to be fast and optimal and
> > still be generic enough to be able to render anywhere.
> >
>
>
> For the holo-projector example, you need "architecture". For example,
> consider this ASCII-art layered architecture for a GUI:
>
>      Application
>            |
>      ToolBuilder
>      /               \
> 2-D Widgets     3-D Widgets
>   |                           |
> Canvas               OpenGL or something
>   |
> BitBlt, Cairo, etc.
>

I simply can't accept this.
A GUI-building architecture should be trivial, not branched like a tree:

Application
     |
ToolBuilder
     |
Widgets
     |
Canvas
     |
Device/Surface.
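
The linear layering above can be sketched in a few lines (all class names here are hypothetical, for illustration only): each layer talks only to the layer directly below it, so swapping the device never disturbs the widget or application layers.

```python
class Device:
    """Bottom layer: a screen, a printer, a remote canvas..."""
    def __init__(self):
        self.log = []
    def emit(self, command):
        # A real device would execute the command; here we just record it.
        self.log.append(command)

class Canvas:
    """Middle layer: translates widget drawing calls into device commands."""
    def __init__(self, device):
        self.device = device
    def draw_rect(self, x, y, w, h):
        self.device.emit(("rect", x, y, w, h))

class Widget:
    """Knows only the Canvas protocol, nothing about the device below it."""
    def render(self, canvas):
        canvas.draw_rect(0, 0, 100, 20)

device = Device()
Widget().render(Canvas(device))
# device.log == [("rect", 0, 0, 100, 20)]
```

Replacing Device with a printer or a networked canvas changes nothing above the Canvas layer.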

Why keep 2D and 3D apart? What I like about OpenGL is that it handles
both 2D and 3D drawing primitives, so there is no need for another
library to make your content 3D-aware.

As for Morphic 3, if we are talking about coordinate systems: a canvas
should accept drawing-primitive commands in any coordinate system (be
it 1D, 2D, 3D, logarithmic or whatever).
Then, in a uniform way, it should translate those commands into ones
understood by the device on which the drawing is performed. No
separation is needed!

That's why I coded my small GLCanvas: to show that there is no need for
a special context to draw 3D in an application. You can use the same
interface for drawing different things, be they 2D or 3D.
Moreover, you don't need to create a separate drawing context (such as
an OS window) for drawing 3D widgets. Going the other way, we will end
up with crappy applications with crappy architectural design.

> Of course, there's a lot more to it. I believe (and I'm putting words in
> Juan's mouth here) that Morphic 3 is primarily a 2-D GUI.
>
> In terms of hardware support, the Canvas class (currently used by Morphic
> for drawing everything) needs to be rethought. I've got a preliminary brain
> dump here: http://gulik.pbwiki.com/Canvas. Morphic 2 (i.e. in Squeak now)
> isn't very smart about how it draws stuff; it's very slow. BitBlt is capable
> of a lot more. Also, the underlying layers of architecture (BitBlt
> particularly) aren't smart about rendering. The X Windows implementation of
> Squeak for example (AFAIK) only uses a single bit-mapped "window". The X
> Window system can do a lot more, such as vectored graphics and multiple
> windows.
>
> I suspect that the VNC implementation doesn't cache bitmaps on the client,
> although this is pure speculation.
>

Well, there is a CachingCanvas in the current Morphic system. Too bad
it isn't used; it seems to me that developers simply miss the point of
using CachingCanvas instead of creating Forms manually to persist
intermediate drawing results.
So they draw on Forms (binding themselves to pixel-blitting
operations), and then use blitting again to draw these cached forms
onto the display surface.
A smarter canvas interface wouldn't hurt :)
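
The idea can be sketched without any Forms or pixels at all (class names hypothetical): a caching canvas records the command stream once and replays it to any device on demand, so the cache stays device-independent instead of being a bitmap.

```python
class Device:
    """Records the commands it receives."""
    def __init__(self):
        self.log = []
    def emit(self, command):
        self.log.append(command)

class CachingCanvas:
    """Caches drawing commands instead of rasterizing them into a bitmap."""
    def __init__(self):
        self.commands = []
    def draw_rect(self, x, y, w, h):
        self.commands.append(("rect", x, y, w, h))
    def replay_on(self, device):
        # Replaying is one pass over the recorded commands -- no pixel blitting.
        for command in self.commands:
            device.emit(command)

cache = CachingCanvas()
cache.draw_rect(0, 0, 10, 10)
cache.draw_rect(20, 0, 10, 10)

screen, printer = Device(), Device()
cache.replay_on(screen)    # the same cached commands work on any device
cache.replay_on(printer)
# screen.log == printer.log == [("rect", 0, 0, 10, 10), ("rect", 20, 0, 10, 10)]
```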

> I would change Canvas by:
>
> - Allowing a canvas to have movable sub-canvases. These would map 1:1 to
> "windows" (i.e. drawable areas without borders, title bars) in the X window
> system, or cached bitmaps in VNC, or display lists / textures in OpenGL.
> These could be moved around the screen easily by only changing the location
> of the sub-canvas.
>
> - Canvases could be implemented as bitmaps or vectored graphics/display
> lists; the application doesn't need to know what implementation is actually
> used.

Exactly: an application should not assume that the output surface is
planar and pixel-based.
This should be handled at the lower levels (canvas/device) and never
show up at the application level.

>
> - Introduce a "needsRedraw" system of some sort. A Canvas implementation may
> or may not cache its contents (as a bitmap or vectored graphics/display
> list). Various implementations may discard the cached contents at times, or
> perhaps not even cache content.
>
Yes, cached content can be sent to the device multiple times, but if
we are talking about a generic architecture, then you can't have any
sort of redraw, because you can't redraw a page already printed on a
printer; you can only draw a new one :)
And I'm strongly for keeping this straight: once a drawing command is
sent, there is no way back. You should not manipulate device state in
that manner, because many devices simply can't return to a previous
state, or doing so would take too many resources and too much time,
making it a performance killer.
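
One way to picture that constraint (a hypothetical sketch, not an existing Squeak class): the device exposes an append-only command sink, so there is nothing to "undo" or redraw; like a printer, it refuses to revisit what it has already emitted.

```python
class WriteOnceDevice:
    """An append-only command sink: commands go out, state never goes back."""
    def __init__(self):
        self._emitted = []
    def emit(self, command):
        self._emitted.append(command)
        return len(self._emitted) - 1   # a handle for reference, not for mutation
    def undo(self, handle):
        # A printer cannot un-print a page; the generic device refuses too.
        raise RuntimeError("device cannot return to a previous state")

dev = WriteOnceDevice()
handle = dev.emit(("text", "page 1"))
try:
    dev.undo(handle)
    recovered = False
except RuntimeError:
    recovered = True   # the only option is to draw a new page
# recovered == True
```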

For example, there is a LensMorph in Squeak, which grabs pixels from
the screen and then transforms them to create a fancy effect.
This should not be allowed. If you need to perform 'post-draw'
effects, you should cache intermediate drawing results somewhere and
then issue commands to draw the final result, rather than manipulate
pixels directly, because that simply breaks the drawing chain on the
device. And manual pixel manipulation can be far less efficient than
the capabilities of the device (such as a video card).

> - Use micrometers rather than pixels as the unit of measurement and provide
> a "pixelPitch" method to return the size of a pixel. For example, my screen
> has a pixel pitch of 282 micrometers. A 600dpi printer would have a pixel
> pitch of around 42 micrometers. You could use a SmallInteger to store
> micrometer values.
>
> - Introduce, somehow, an event system closely coupled to a Canvas (because
> some events have coordinates relative to a canvas).
>
> - Somehow support remotely cached bitmaps. I haven't thought about this yet.
>

Simply issue a 'create cached canvas' command, then issue drawing
commands to it, and then use the cached canvas handle to manipulate
its contents. Again: no need to tie yourself to any sort of bitmap.
Compare the bandwidth you need to send a Rect(0,0,1000,1000) command
with the bandwidth needed to send a 1000x1000 bitmap.
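
That comparison is easy to put in numbers (assuming, say, roughly 20 bytes for an encoded rectangle command and 32-bit pixels):

```python
command_bytes = 20                  # opcode + four 32-bit coordinates, roughly
bitmap_bytes = 1000 * 1000 * 4      # 1000x1000 pixels at 32 bits per pixel

ratio = bitmap_bytes // command_bytes
# bitmap_bytes == 4_000_000 (about 4 MB per frame)
# ratio == 200_000: the bitmap costs ~200,000x the bandwidth of the command
```

Even with a more generous command encoding, the command stream wins by several orders of magnitude, which is exactly why remote canvases should cache commands rather than ship pixels.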

> Gulik.
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/
>


-- 
Best regards,
Igor Stasenko AKA sig.


