Morphic 3.0: The future of the GUI

Igor Stasenko siguctua at gmail.com
Thu Aug 30 22:16:39 UTC 2007


On 30/08/2007, Juan Vuletich <juan at jvuletich.org> wrote:
> I think the problem is well stated. I understand the arguments on both
> sides. It's hard to make a decision...
>
Well, I don't think we lose any features with this. There are a couple
of ways to get around the problem, such as using special caching
canvas(es).
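To make the caching-canvas idea concrete, here is a minimal Squeak
sketch (the text, sizes and positions are placeholders, not part of any
proposed API): the expensive drawing is done once into an offscreen
Form through that Form's own canvas, and the cached Form is then
blitted wherever it is needed, instead of reading pixels back from the
screen.

  | cacheForm cacheCanvas |
  cacheForm := Form extent: 200@100 depth: 32.
  cacheCanvas := cacheForm getCanvas.
  "Do the expensive drawing once, into the offscreen form."
  cacheCanvas fillRectangle: (0@0 extent: 200@100) color: Color white.
  cacheCanvas
      drawString: 'cached drawing'
      at: 10@10
      font: TextStyle defaultFont
      color: Color black.
  "Later, reuse the cached result instead of touching the screen's pixels."
  Display getCanvas drawImage: cacheForm at: 50@50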
I think you know why I'm against operations like reading the contents
of the screen for use in effects. With OpenGL, for instance, you can
actually obtain the pixel data, but as you may know, it is a very slow
operation and is not recommended inside the rendering cycle. It is
mainly used for taking screenshots, not for rendering effects.
It is also sometimes faster and better to redraw a given portion of the
screen by issuing the drawing commands a second time, rather than
capturing the pixel data into a buffer.
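Here is a small sketch of that difference in today's Squeak (the morph
and the coordinates are made up for illustration): the first variant
grabs pixels from the display, which only makes sense for a pixel-based
screen; the second re-issues the morph's drawing commands into an
offscreen buffer, which any canvas can support.

  | someMorph captured buffer |
  someMorph := EllipseMorph new.
  someMorph bounds: (50@50 extent: 200@100).
  "Variant 1: capture pixel data -- only possible on a pixel display."
  captured := Form fromDisplay: (50@50 extent: 200@100).
  "Variant 2: redraw the same region by re-issuing the drawing commands."
  buffer := Form extent: 200@100 depth: Display depth.
  buffer getCanvas
      translateBy: (50@50) negated
      during: [:c | someMorph fullDrawOn: c]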
Also, please note that a feature like reading pixel data back is really
only available on display devices. Supporting such operations for
printers, PostScript files, or network canvases would require a huge
amount of memory, not to mention that the result can be incorrect and
totally inefficient.
Distortion effects can only be rendered if the device is used purely
for drawing bitmaps. But suppose I issue a command to a printer, such
as drawing a string of text in a specific font. I don't need to distort
it, because the printer uses its own font glyphs and rasterises them in
hardware while printing on paper.
Of course, you can rasterise the glyphs before sending them to the
printer; that is the more 'compatible' approach, but it is very
inefficient: you are then forced to rasterise the whole document and
send it to the printer as one big bitmap blob.
And then we have plotters, on which we can draw using only vectors. If
we support only pixel devices, we bury the possibility of rendering on
vector devices, or on any other devices that simply don't support pixel
rendering.
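This is why I'd rather see a morph's drawing expressed purely in terms
of canvas commands. A hypothetical example (the class name is invented;
the calls are today's Canvas protocol): because the method below never
reads pixels back, the same code can be replayed onto a FormCanvas, a
PostScript canvas, or in principle a plotter canvas.

  DiagramMorph>>drawOn: aCanvas
      "Hypothetical example: pure canvas commands, no pixel read-back."
      aCanvas fillRectangle: self bounds color: Color white.
      aCanvas
          line: self bounds topLeft
          to: self bounds bottomRight
          width: 2
          color: Color red.
      aCanvas
          drawString: 'device independent'
          at: self bounds topLeft + (4@4)
          font: TextStyle defaultFont
          color: Color black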


> Juan Vuletich
> www.jvuletich.org
>
> Igor Stasenko wrote:
> > On 30/08/2007, Joshua Gargus <schwa at fastmail.us> wrote:
> >
> >> Squeak currently includes a fisheye morph that provides a distorted
> >> view of what is underneath it.  What you have written sounds like it
> >> would not support such morphs.  If I'm misunderstanding you, could
> >> you try to restate your thought to give me another chance to understand?
> >>
> >>
> > Well, I think you understood correctly. A morph can't rely on the
> > results of previous drawings.
> > There are some morphs, like the magnifying lens, which use the current
> > state of the screen to draw effects.
> > But suppose that you are rendering a lens morph into a PostScript file,
> > or onto a canvas which sends the drawings over the network, or you are
> > using a HUGE drawing surface which simply cannot fit in main memory.
> > These are simple reasons why a morph should not access any current state.
> > To get around the problem you can instead redraw a portion of the world
> > using your own transformations/effects (applied before redrawing).
> >
> >
> >> Thanks,
> >> Josh
> >>
> >>
> >> On Aug 29, 2007, at 10:17 PM, Igor Stasenko wrote:
> >>
> >>
> >>> Forgot to add..
> >>>
> >>> Morphs must not rely on any current display-medium state, such as
> >>> the background color or the results of previous drawings, because
> >>> some media types cannot hold or provide their current state in a
> >>> form that can be easily accessed or manipulated.
> >>> Any internal/current state of a display medium can be accessed and
> >>> managed only by its canvas object.
> >>>
> >>> --
> >>> Best regards,
> >>> Igor Stasenko AKA sig.
> >>>
> >>>
> >>
> >>
> >
> >
> >
>
>
>


-- 
Best regards,
Igor Stasenko AKA sig.


