[squeak-dev] Subcanvas

Igor Stasenko siguctua at gmail.com
Fri Jul 4 02:16:28 UTC 2008


2008/7/4 Michael van der Gulik <mikevdg at gmail.com>:
>
>
> On Fri, Jul 4, 2008 at 12:10 PM, Igor Stasenko <siguctua at gmail.com> wrote:
>>
>> 2008/7/4 Michael van der Gulik <mikevdg at gmail.com>:
>>
>>
>> > Features of it are:
>> > * Some canvases can have child canvases, each with a z-index. These
>> > could be
>> > used, e.g., to implement movable windows, sprites, clipped scrollable
>> > areas,
>> > or flyweight graphics. This will use the underlying graphics system's
>> > capabilities.
>> >
>> mmm.. I like this idea in general, but please, let's make it more
>> general: no z-index (or any early binding to a coordinate system).
>> Simply a child canvas concept.
>
> A parent canvas could have multiple children. When the Canvas architecture
> wants to render these, it needs to know the distance each child is from the
> shared parent. You also need to know the distance between child and parent
> if you want to add reflection, shadows and lighting in the OpenGL version
> :-D.
>

Let's keep GL things aside. It's up to the developer how to render
reflections and which distance(s) come into play with his techniques.
And the scheme you are proposing fits a layers concept better than
child-parent relations. Maybe you should introduce layers then?
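To make the distinction concrete, here is a minimal sketch (in Python for illustration; all class names are hypothetical, not part of any Squeak API) of ordering as a property of the parent's layer list rather than a z-index bound into each child:

```python
# Hypothetical sketch: stacking order lives in the parent's ordered
# "layers" list, not in a z-index baked into each child canvas.
class Canvas:
    def __init__(self, name):
        self.name = name
        self.layers = []  # back-to-front; ordering belongs to the parent

    def add_layer(self, child):
        self.layers.append(child)

    def render(self):
        # Paint back-to-front. How "depth" maps onto a device (screen,
        # GL, printer) is the renderer's business, not the child's.
        return [child.name for child in self.layers]

root = Canvas("root")
root.add_layer(Canvas("background"))
root.add_layer(Canvas("window"))
root.add_layer(Canvas("cursor"))
print(root.render())  # -> ['background', 'window', 'cursor']
```

The child canvases here carry no coordinate-system binding at all; reordering is purely a parent-side operation.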

>
>>
>> > * An event handling system will also be part of this package. Mouse
>> > events
>> > will have a canvas (or sub-canvas) coordinate; keyboard events will be
>> > sent
>> > to the canvas that has the "keyboard focus".
>> >
>>
>> Please don't. An event subsystem should not be connected directly with
>> canvases.
>> It should be a separate layer for applications.
>> Any coordinate translations should come this way: Event ->
>> morph(widget) -> canvas.
>> But never Event->canvas.
>>
>> Suppose you're moving a scrollbar knob. For this you would need two
>> different resulting coordinates:
>> - one to update the hand position
>> - a second to update the knob position
>>
>> Or suppose you're dragging something in 3D space. You may move the
>> mouse to the left or right, but the movements will be translated in a
>> different way (dragging object(s) closer to/farther from the eye).
>>
>> It is up to morphs/UI how to deal with events and then how to update
>> themselves on screen as a reaction to such an event.
>
> Every Canvas has its own coordinate system; they can be positioned anywhere
> on the screen, but still have (0@0) in their bottom-left corner. This means
> that mouse-based events with a position are relevant only for a particular
> Canvas.
>
Screen? Who said that a Canvas draws on a screen? And who said that you
have a mouse?

> What I was considering doing was making the Canvas the source of events.
> Every Canvas has a model which must implement event handling methods and a
> #drawOn:bounds: method. A Canvas can ask the model to redraw itself when the
> Canvas becomes dirty (e.g. when sub-canvases move and the canvas has no
> cached state).
>

Dirty/clean tracking is not a basic canvas capability.
Needless to say, for some devices (including GL) it is sometimes
easier and faster to redraw everything from scratch rather than care
about dirty areas. Other devices (like printers) have nothing to do
with a dirty/clean approach at all.
Don't let premature optimizations influence the basic model! :)
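The point that dirty tracking is a per-device policy, not a canvas concept, can be sketched like this (Python for illustration; the device classes are hypothetical):

```python
# Hypothetical sketch: dirty-region handling as a device-level policy.
class RasterDevice:
    """A device where repainting only damaged rectangles pays off."""
    def __init__(self):
        self.dirty = []

    def invalidate(self, rect):
        self.dirty.append(rect)  # accumulate damaged areas

    def repaint(self, draw):
        for rect in self.dirty:  # repaint only what changed
            draw(rect)
        self.dirty.clear()

class GLDevice:
    """A device where a full-frame redraw is cheaper than bookkeeping."""
    def invalidate(self, rect):
        pass  # damage info is simply ignored

    def repaint(self, draw):
        draw(None)  # None means "everything, from scratch"
```

The canvas model itself stays ignorant of both strategies; a printer-like device would implement `repaint` differently again.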

> I've implemented a scroll bar using this kind of system. The scroll bar just
> needs to remember where the original mouseDown event was. I don't understand
> what your point was here.
>

The point is that you may never know which portions of the screen need
to be updated as a reaction to a mouseDown (or any other) event.
I can write simple code which updates the point of the screen opposite
to where the mouse is located. Or I can write code which writes a
character $A to a file each time you click the mouse. I don't see how
or why a canvas should take part in event handling.
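The Event -> morph(widget) -> canvas chain argued for above can be sketched as follows (Python for illustration; all names are hypothetical). Note that the canvas never sees the event, only drawing commands, and that one mouse event yields two different coordinate uses, as in the scrollbar example:

```python
# Hypothetical sketch of the Event -> morph -> canvas chain.
class Canvas:
    """Receives only drawing commands, never events."""
    def __init__(self):
        self.commands = []

    def draw_knob_at(self, y):
        self.commands.append(("knob", y))

class ScrollbarMorph:
    """Interprets events and decides what (if anything) to draw."""
    def __init__(self, canvas):
        self.canvas = canvas
        self.knob_y = 0
        self.drag_start = None

    def mouse_down(self, pos):
        self.drag_start = pos  # remember where the drag began

    def mouse_move(self, pos):
        if self.drag_start is None:
            return
        # One event, two coordinates: the hand moved by dy on screen,
        # but the knob moves in the scrollbar's own coordinate space.
        dy = pos[1] - self.drag_start[1]
        self.knob_y += dy
        self.drag_start = pos
        self.canvas.draw_knob_at(self.knob_y)

canvas = Canvas()
bar = ScrollbarMorph(canvas)
bar.mouse_down((10, 100))
bar.mouse_move((10, 120))
print(canvas.commands)  # -> [('knob', 20)]
```

A morph that instead wrote `$A` to a file on each click would use the same event interface and touch the canvas not at all.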

> As with dragging things in 3-D space, I'll need to invent some way of making
> mouse capture secure.
>

Right. Also, don't forget about relative mouse pointer motion. A good
illustration of capturing relative mouse movement is a 3D first-person
shooter game :) It is not interested in where the mouse cursor is, only
in the amount of mouse movement along its two axes.
And in fact the mouse, as a device, generates relative events; it knows
nothing about the screen size, or where the mouse cursor is allowed to
be. So binding the mouse to a screen space is wrong by its nature.
Events should carry relative movement, and then the World (or top-level
handler) can translate such events to absolute coordinates in its own
space (if it cares).
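The translation step described above can be sketched like this (Python for illustration; the `World` class and its bounds are hypothetical): the device reports only deltas, and the World alone turns them into clamped absolute coordinates.

```python
# Hypothetical sketch: relative device deltas translated to absolute
# coordinates by the World, which is the only party that knows bounds.
class World:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.cursor = (width // 2, height // 2)

    def handle_motion(self, dx, dy):
        # The device emitted (dx, dy) with no notion of screen space;
        # clamping to bounds is entirely this World's decision.
        x = min(max(self.cursor[0] + dx, 0), self.width - 1)
        y = min(max(self.cursor[1] + dy, 0), self.height - 1)
        self.cursor = (x, y)
        return self.cursor

world = World(800, 600)
print(world.handle_motion(50, -30))  # -> (450, 270)
```

A first-person-shooter "World" would consume the same `(dx, dy)` pairs as camera rotation and never compute a cursor position at all.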

> Do you still think this is a bad design?
>
>
>>
>> > I don't know how to handle fonts - I don't know what the pros/cons of
>> > having
>> > a font API built in to the canvas is, or whether it is better to have
>> > the
>> > font drawing done externally by each application.
>> >
>>
>> Let's discuss that a bit before you start implementing it.
>> Recently, we discussed a lot of ideas with Gary about canvases/events.
>> I think you should be aware of what conclusions we reached, at least.
>> Gary, can you refresh my memory about the coordinates & events ideas
>> we discussed? :)
>
>
> This is why I posted here :-).
>
> IRC logs would be good, if they can be found.
>
> Gulik.
>
>
> --
> http://people.squeakfoundation.org/person/mikevdg
> http://gulik.pbwiki.com/
>
>
>



-- 
Best regards,
Igor Stasenko AKA sig.


