Hi Folks,
I started to write a "paper to be" about my Morphic 3.0 project. The objective is to convince you that Morphic 3.0 is the coolest thing around :). The first draft is available at www.jvuletich.org. I hope you enjoy it. Any comment is welcome.
Cheers, Juan Vuletich www.jvuletich.org
1.2. Rendering.
I want high quality rendering on any Display, regardless of size or pixel resolution. Therefore, I need complete independence from those Display properties. The programmer must never deal with the concept of a pixel. The GUI is thought of at a higher level. All of the GUI is independent of pixel resolution. All rendering is anti-aliased. But in order to render equally well on very-high-resolution and medium-resolution devices, the objects to be rendered (i.e. morphs) must be specified in a way that doesn't depend on the resolution of the target device at all. The ultimate way to do this is by thinking of them as continuous functions. This applies to geometric shapes, but also to digital images (photos) and textures.
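As a rough illustration of the continuous-function idea (a Python toy with invented names, not Morphic code, which is Smalltalk): a shape is a function from continuous coordinates to coverage, and any target device simply samples it at its own resolution. Anti-aliasing falls out of supersampling each pixel's area.

```python
def disk(cx, cy, r):
    """A shape as a continuous function: (x, y) -> coverage (1 inside, 0 outside)."""
    def f(x, y):
        return 1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.0
    return f

def render(shape, width_px, height_px, world=1.0, samples=4):
    """Sample the continuous function at any pixel resolution.
    The shape definition never mentions pixels; only this loop does."""
    img = []
    for py in range(height_px):
        row = []
        for px in range(width_px):
            acc = 0.0
            for sy in range(samples):
                for sx in range(samples):
                    x = (px + (sx + 0.5) / samples) * world / width_px
                    y = (py + (sy + 0.5) / samples) * world / height_px
                    acc += shape(x, y)
            row.append(acc / (samples * samples))  # fractional coverage = anti-aliasing
        img.append(row)
    return img

circle = disk(0.5, 0.5, 0.4)
low  = render(circle, 8, 8)    # same morph, coarse device
high = render(circle, 64, 64)  # same morph, fine device
```

The same `circle` renders on both devices without ever being told their resolution; edge pixels get fractional coverage values automatically.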
----
Very, very promising description. But maybe you should add this:
Morph rendering must not depend on any concrete display-medium properties, including shape, dimensions, internal organization, and capabilities. A display medium for a morph can be a display screen, a printer, or anything else that provides a canvas object. All morph visualization must be expressed as a stream of commands to a canvas object. Different types of canvases then provide an interface for rendering morphs on a particular display medium.
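The canvas-as-command-stream idea could be sketched like this (an illustrative Python sketch with invented names; Morphic itself is Smalltalk and its actual Canvas protocol differs):

```python
class Canvas:
    """Abstract drawing protocol. Concrete canvases target a screen,
    a printer, a PostScript file, a network stream, ..."""
    def fill_rectangle(self, x, y, w, h, color):
        raise NotImplementedError
    def draw_string(self, x, y, text, font):
        raise NotImplementedError

class CommandStreamCanvas(Canvas):
    """Serializes drawing as a stream of commands, e.g. for a remote display."""
    def __init__(self):
        self.commands = []
    def fill_rectangle(self, x, y, w, h, color):
        self.commands.append(('fillRectangle', x, y, w, h, color))
    def draw_string(self, x, y, text, font):
        self.commands.append(('drawString', x, y, text, font))

class LabelMorph:
    """A morph knows only how to describe itself to *some* canvas;
    it never touches the display medium directly."""
    def __init__(self, text):
        self.text = text
    def draw_on(self, canvas):
        canvas.fill_rectangle(0, 0, 100, 20, 'white')
        canvas.draw_string(2, 14, self.text, 'DefaultFont')

canvas = CommandStreamCanvas()
LabelMorph('Hello').draw_on(canvas)
# canvas.commands now holds a device-independent description of the drawing
```

Swapping in a `ScreenCanvas` or `PostScriptCanvas` with the same protocol would retarget the very same morph without changing it.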
Also, zooming is sometimes not quite the same as resizing. When resizing a text window I am making a larger area for text, so I can see more lines of text instead of seeing the same number of lines at higher resolution. So I think that some morphs must have both dimensions (in their own coordinate system) and a transformation (relative to the parent morph's coordinate system). Then resizing changes the dimensions, and zooming changes the transformation.
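The dimensions-vs-transformation distinction might look like this (again an illustrative Python sketch, not actual Morphic classes):

```python
class TextMorph:
    def __init__(self, width, height, line_height=12.0):
        self.extent = (width, height)   # dimensions, in the morph's own coordinates
        self.scale = 1.0                # transformation relative to the parent morph
        self.line_height = line_height

    def resize(self, w, h):
        self.extent = (w, h)            # more room: more visible lines

    def zoom(self, factor):
        self.scale *= factor            # same lines, drawn larger

    def visible_lines(self):
        return int(self.extent[1] // self.line_height)

    def apparent_height(self):          # as seen in the parent's coordinates
        return self.extent[1] * self.scale

m = TextMorph(200, 120)   # 10 lines visible
m.zoom(2.0)               # still 10 lines, drawn twice as big
m.resize(200, 240)        # now 20 lines visible
```

Resizing only touches `extent`; zooming only touches `scale`. The number of visible lines depends purely on the morph's own dimensions.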
Forgot to add..
Morphs must not rely on any current display-medium state, such as the background color or the results of previous drawings, because some media types cannot hold or provide their current state in a form that can be easily accessed or manipulated. Any internal/current state of the display medium should be accessible and manageable only through its canvas object.
Squeak currently includes a fisheye morph that provides a distorted view of what is underneath it. What you have written sounds like it would not support such morphs. If I'm misunderstanding you, could you try to restate your thought to give me another chance to understand?
Thanks, Josh
On Aug 29, 2007, at 10:17 PM, Igor Stasenko wrote:
-- Best regards, Igor Stasenko AKA sig.
On 30/08/2007, Joshua Gargus schwa@fastmail.us wrote:
Well, I think you understood correctly. A morph can't rely on the results of previous drawings. There are some morphs, like a magnifying lens, which use the current state of the screen to draw effects. But suppose you are rendering a lens morph into a PostScript file, or onto a canvas that translates drawings to the network, or you are using a HUGE drawing surface which simply cannot fit in main memory. These are simple reasons why a morph should not access any current state. To get around the problem you can instead redraw a portion of the world using your own transformations/effects, applied before redrawing.
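The "redraw through a transform instead of reading pixels back" idea can be sketched as follows (a Python toy; the `checker` world and all names are invented for illustration):

```python
def checker(x, y):
    """A toy 'world': a continuous checkerboard pattern."""
    return 1.0 if (int(x) + int(y)) % 2 == 0 else 0.0

def lens_sample(world, cx, cy, mag):
    """A magnifier that *re-renders* the scene through its own transform,
    instead of reading back pixels that were already drawn."""
    def f(x, y):
        # map lens coordinates back into world coordinates
        return world(cx + (x - cx) / mag, cy + (y - cy) / mag)
    return f

lens = lens_sample(checker, 4.0, 4.0, 2.0)
# Under 2x magnification, the point 1 unit from the lens centre shows
# what the world contains 0.5 units from the centre.
```

Because the lens only re-invokes the world's drawing with a modified transform, the same morph works on a screen, in a PostScript file, or over the network: no device ever has to supply its previous contents.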
I think the problem is well stated. I understand arguments on both sides. It's hard to make a decision...
Juan Vuletich www.jvuletich.org
On 30/08/2007, Juan Vuletich juan@jvuletich.org wrote:
Well, I don't think we lose any features with this. There are a couple of ways to get around the problem, like using special caching canvas(es). I think you know why I'm against operations like reading the contents of the screen for use in effects. With OpenGL, for instance, you actually can obtain the pixel data, but as you may know it's a very slow operation and not recommended inside rendering cycles. It is mainly used to take screenshots, not to render effects. It is often faster and better to redraw a given portion of the screen by issuing the commands a second time than to capture pixel data into a buffer.

Also, please note that a feature like reading pixel data is mainly available on display devices. Supporting such operations on printers, PostScript files, or network canvases would require a huge amount of memory, not to mention that it can be incorrect and totally ineffective. And distortion effects are only possible to render if you use the device purely for drawing bitmaps. But suppose I issue a command to a printer like "draw this string of text with this specific font". I don't need to distort it, because the printer uses its own font glyphs and rasterizes them in hardware while printing on paper. Of course you can rasterize glyphs before sending them to the printer; that can be a more 'compatible' approach, but it is very inefficient: you are then forced to rasterize the whole document and send it to the printer as one big bitmap blob. And then we have plotters, on which we can draw using only vectors. If we support only pixel devices, we bury the possibility of rendering on vector devices, or on any other devices which simply don't support pixel rendering.
That makes sense, thanks for the clarification.
Josh
On Aug 29, 2007, at 11:24 PM, Igor Stasenko wrote:
Thanks Igor,
I'll include some of your text in the next draft! However, I'm not sure yet about details like canvases. I like the current design, but I'm still not sure if I'll keep them.
WRT resizing and text, you are right. Some morphs will need a "resize" operation that is an external zoom combined with the inverse internal zoom, to keep the apparent size of the text inside unchanged. I still haven't written about this, though.
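The external-zoom-with-inverse-internal-zoom trick is just a composition of scale factors; a tiny sketch (hypothetical names, not Morphic code):

```python
def resize_keeping_text_size(window_scale, text_scale, factor):
    """'Resize' = external zoom by `factor` composed with the inverse
    internal zoom, so the apparent text size stays unchanged."""
    return window_scale * factor, text_scale / factor

w, t = 1.0, 1.0
w, t = resize_keeping_text_size(w, t, 2.0)
apparent_text = w * t  # the two scales compose; the text looks the same size
# the window is twice as large, but apparent text size is still 1.0
```

Since the outer and inner scales multiply out to 1, the text's apparent size is invariant while the window (and hence the number of visible lines) grows.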
Cheers, Juan Vuletich www.jvuletich.org
Hi Juan -
One thing I'm curious about is whether you consider this a research project or if you plan to produce a practical artifact. There is nothing wrong with the former of course but if you aim for a practical artifact I would reconsider some issues, like writing your own renderer. It can be done but it will take just as much time (if not more) than the rest of the architecture. Been there, done that ;-)
Cheers, - Andreas
Hi Andreas,
This started as a practical artifact. I'm building an audio editor in a time-frequency domain based on the ideas from my thesis (all the material is at my page). And I could not start programming in Morphic without cleaning it first. This was the beginning of all my Morphic cleaning efforts, about 3 years ago. As I worked in Morphic I started to feel the urge to fix the problems I saw. I spent so much time thinking about them that I started to find better solutions, like general non-linear coordinate systems and modeling images as continuous functions that honor the conditions of the Sampling Theorem, to get proper anti-aliasing simply via sampling. So now it's grown into both a research project and a practical artifact.
WRT the renderer, I still haven't started writing it. My last published image uses Balloon for rendering. (BTW, thanks for Balloon!) But, as you know, Balloon cannot handle non-linear coordinate transformations or images modeled as continuous functions. I don't know of any rendering engine that would suit my needs, so I need to write a new one. Of course, if you or somebody else tells me about such a renderer, I'd be really happy to use it.
Cheers, Juan Vuletich www.jvuletich.org
On Thu, Aug 30, 2007 at 12:14:03AM -0300, Juan Vuletich wrote:
Nice. I just had one question: would gamma correction be taken into account at all?
Especially when rendering text, light-on-dark always looks bigger than dark-on-light for exactly the same shape. Is it possible to (in general) render a shape with constant "mass" in the face of gamma inconsistencies? For text, this would involve procedural bolding/thinning.
Just throwing that out.
On 30/08/2007, Matthew Fulmer tapplek@gmail.com wrote:
I think it's more likely a capability of the device/display medium, and morphs should not care about it. It's the canvas that should handle it, I think.
-- Matthew Fulmer -- http://mtfulmer.wordpress.com/ Help improve Squeak Documentation: http://wiki.squeak.org/squeak/808
Hi Matthew,
It should. When time comes, of course.
Cheers, Juan Vuletich www.jvuletich.org
On Aug 30, 2007, at 0:20 , Matthew Fulmer wrote:
Well, the Right Thing To Do might be rendering and compositing in linear color space and only applying gamma when pushing to the screen... that means you have to have color components of higher resolution (16-bit fixed-point? 32-bit floats?) and/or do real supersampling... all of which is expensive. But it would be cool to have (though most people wouldn't care, and would rather take performance over being "correct").
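Bert's point can be demonstrated numerically (a Python sketch using a simple power-law gamma of 2.2 as an approximation; real sRGB uses a piecewise curve):

```python
GAMMA = 2.2  # simple power-law display gamma (an approximation of sRGB)

def to_linear(v):
    """Gamma-encoded display value (0..1) -> linear light."""
    return v ** GAMMA

def to_display(v):
    """Linear light -> gamma-encoded display value."""
    return v ** (1.0 / GAMMA)

def blend_naive(a, b):
    """50/50 blend done directly on gamma-encoded values (what most code does)."""
    return (a + b) / 2.0

def blend_linear(a, b):
    """Composite in linear light; apply gamma only at the very end."""
    return to_display((to_linear(a) + to_linear(b)) / 2.0)

# Averaging black (0.0) and white (1.0), as anti-aliasing does along an edge:
naive  = blend_naive(0.0, 1.0)   # 0.5
proper = blend_linear(0.0, 1.0)  # ~0.73: the physically correct mid-grey is lighter
```

The gap between the two results is exactly why anti-aliased light-on-dark text looks heavier than dark-on-light when compositing is done in gamma space.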
- Bert -
Juan Vuletich wrote: "I want high quality rendering on any Display, regardless of size or pixel resolution. [...] The ultimate way to do this is by thinking of them as continuous functions. This applies to geometric shapes, but also to digital images (photos) and textures."
This will pose a problem with fonts - it was one of Fresco's (www.fresco.org) bigger issues when it came to the drawing model, as Fresco used a device-independent layout, too.
Bitmap fonts are pixel based, so those wouldn't work properly (how do you translate a font pixel into a pixel-less world in a resolution-independent way, so that the font always looks good?)
Vector fonts have the issue that their features will only incidentally align to the pixel grid of your target device (when it comes down to rendering). (And compared to drawings, the human brain seems much more sensitive to this when it comes to text, maybe because there is more necessary detail per cm^2 than in the average drawing.)
Either you just handle fonts like everything else and anti-alias them. See Acrobat Reader 5 (or so) with small fonts to see how that looks - you'll get a mathematically correct, unreadable grey pixel soup.
Or you take hinting into account - but that forces you to think about a pixel grid again (hints help align glyph features to some grid), and you'll have to choose which grid to take.
Unfortunately, you can't just defer that decision until it's rendered to the device, as hinting can (to some extent) change the size of a glyph, depending on its location and the target resolution (and on neighboring glyphs, I think, so you couldn't just render them one by one at given locations).
Usually the impact is minor, but you might get into a situation where a group of glyphs in a line gets small enough, due to changes in hinting, that another word fits into their line (when it missed before by just a tiny amount), and then you have to rewrap the whole text from that point (or ignore that by pinning text to fixed locations in some way, making it a special case).
Anyway, when taking text into account, more thought might be necessary on that move.
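Patrick's rewrap scenario can be made concrete with a toy greedy line-breaker (a Python sketch with made-up advance widths; this is not Morphic code or an actual hinting engine):

```python
def wrap(words, advances, line_width):
    """Greedy word wrap: which words fit on each line, given per-glyph advances.
    (Each word's width includes a trailing space advance, for simplicity.)"""
    lines, current, used = [], [], 0.0
    for word in words:
        adv = sum(advances[c] for c in word) + advances[' ']
        if current and used + adv > line_width:
            lines.append(current)
            current, used = [], 0.0
        current.append(word)
        used += adv
    if current:
        lines.append(current)
    return lines

words = ['hint', 'ing', 'can', 'move', 'words']
unhinted = {c: 1.0 for c in 'abcdefghijklmnopqrstuvwxyz '}
hinted = dict(unhinted, **{' ': 0.9})  # hinting nudged one advance very slightly

a = wrap(words, unhinted, 12.8)  # 3 lines
b = wrap(words, hinted, 12.8)    # 2 lines: 'can' now squeaks onto the first line
# A tiny change in advances changes which words fit on a line, forcing the
# rest of the paragraph to rewrap - exactly the effect described above.
```

This is why glyph metrics can't be finalized independently of the target grid: a sub-pixel change in one advance can cascade into a different layout for everything that follows.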
I'm very much in favor of a truly device independent graphic subsystem, though.
Regards, Patrick Georgi
Hi Patrick,
When I get to the point of a "mathematically correct grey pixel soup" of text, I'll be happy! Then I'll start thinking about more advanced solutions.
Cheers, Juan Vuletich www.jvuletich.org
squeak-dev@lists.squeakfoundation.org