Hello squeak-dev,
During my struggle to make the Morphic world OpenGL-ready, I found that the current drawing protocols do not provide any freedom in selecting how text and images are drawn on the display.
As far as I can see, the current graphics classes (Canvas/DisplayScreen/Morph) rely heavily on blitting operations and (wrongly) assume that canvases and/or their display mediums support blitting by default.
First of all, DisplayScreen is a subclass of Form, which is a subclass of DisplayMedium. The DisplayMedium class comment reads: "I am a display object which can both paint myself on a medium (displayOn: messages), and can act as a medium myself. My chief subclass is Form."
This is wrong. I can give you plenty of examples where a display medium represents a hard copy (look at the paper on your desk) and cannot be painted onto another medium. A printer can be considered such a medium: you can draw to it, but you cannot draw its printed contents onto another medium. And I strongly suspect that DisplayScreen is that kind of medium. Moreover, not all mediums can be represented as a rectangular area of pixels. Some can be represented as sets of vectors (vector displays) or other forms of drawing, and some even have a non-rectangular display surface. And that is where we need a Canvas to draw on.
Drawing is by its nature a one-way road, and assuming that we can freely draw between different mediums is simply wrong.
So, at the abstract level, we should think of a Medium to which we can draw by issuing commands through its Canvas. Nothing more.
To simulate drawing from one medium onto another, we can use a caching canvas, which collects all drawing commands and then passes them on to the canvas of the target medium.
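A minimal sketch of what I mean (the class and most selectors are made up for illustration; Message selector:arguments: and sendTo: are existing Squeak messages, the rest is not):

```smalltalk
"Sketch: a canvas that records drawing commands instead of
 executing them, for later replay on the target medium's canvas."
RecordingCanvas >> initialize
	commands := OrderedCollection new

RecordingCanvas >> line: pt1 to: pt2 color: aColor
	"record the command instead of touching any medium"
	commands add: (Message
		selector: #line:to:color:
		arguments: {pt1. pt2. aColor})

RecordingCanvas >> playOn: targetCanvas
	"replay everything onto the canvas of the target medium"
	commands do: [:each | each sendTo: targetCanvas]
```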
All Morphic drawing is built on top of this wrong assumption, which prevents me from making a nice and clean replacement for Display that would allow morphs to be drawn using OpenGL.
To make this possible, changes are needed in many places to conform to different protocol(s).
And here are my proposals:
DisplayMedium: fix the protocol by revoking the assumption that it can be drawn onto another medium.
DisplayScreen: make it a subclass of DisplayMedium, not of Form.
Canvas: remove methods which are based on the assumption that the canvas draws on top of a Form, and refactor code which assumes such behavior.
Since a canvas represents the capabilities of a DisplayMedium, I think the first references to Canvas must appear there, and any drawing on a medium must go through the canvas received from the #getCanvas message. The method #defaultCanvasClass should be removed. You cannot connect a canvas to a different medium, and you cannot use two independent canvases for drawing on the same medium, because this breaks its internal state. For caching/clipping purposes there is protocol on the canvas itself which can give you instances for such uses, and since these refer to the main canvas, the risk of breaking the medium's internal state is minimal.
For example, if a morph wants a Form to draw itself into (as in Morph>>#imageForm:forRectangle:), it must instantiate a Form directly and use #getCanvas to draw on it. If it needs a form with the same depth as the screen, it can use Display>>depth. But note that the message #depth for DisplayScreen should be considered a hint, not part of the Form protocol.
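For illustration, a snapshot under that rule could look roughly like this (a sketch only; Form>>getCanvas and translateBy:during: exist today, but the method shape is what I propose, not current code):

```smalltalk
"Sketch: the morph instantiates the Form itself and draws
 through the form's own canvas, never through Display's."
Morph >> imageFormForRectangle: aRect
	| form |
	form := Form extent: aRect extent depth: Display depth.	"depth used as a hint"
	form getCanvas
		translateBy: aRect origin negated
		during: [:canvas | self fullDrawOn: canvas].
	^ form
```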
In most cases, drawing into an independent form is used for caching purposes, and the resulting image is then _blitted_ onto the screen. There is a class named CachingCanvas designed specifically for such purposes, and it shows 0 references in my 3.9 image. Draw your own conclusion :)
So, returning to the need for pre-cached drawing, I think the best way is to use the Canvas protocol, so it can return an instance of CachingCanvas or a subclass to cache the incoming drawing operations.
The caching canvas MUST be used by morphs and fonts. Take a look at the TTCFont implementation; I can't look at it without pain. What do you think, is this really the best way of drawing TrueType glyphs, using pre-cached bitmaps? The authors of the TrueType package did a great job providing TTFs for Squeak, but failed to make it really clean at the point where glyphs are drawn on the screen. This is another design flaw to which I would like to draw attention.

The protocol between Canvas and AbstractFont needs to be redesigned to use CachingCanvas for this purpose. Then there will be no need for the private bitmap cache that TTCFont currently introduces. Moreover, in the OpenGL case a caching canvas is stored in video memory, which greatly improves performance when you need to copy it to the screen. Displaying a string becomes a simple sequence of commands which draw part(s) of cached canvas(es) onto the main canvas. Compared to Forms, a caching canvas can store the list of commands issued to it instead of a bitmap, which can reduce memory usage and greatly improve speed by eliminating excessive blitting operations.
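Roughly, the flow I have in mind looks like this (a sketch only; #newCachingCanvas, #playOn:at: and the drawing helper are made-up names, not existing protocol):

```smalltalk
"Sketch: the font records each glyph once into a caching canvas
 obtained from the main canvas, and afterwards only replays it."
TTCFont >> cachedGlyphFor: aCharacter on: mainCanvas
	^ cache at: aCharacter ifAbsentPut: [
		| glyphCanvas |
		glyphCanvas := mainCanvas newCachingCanvas.	"made-up message"
		self drawOutlineOf: aCharacter on: glyphCanvas.	"made-up helper"
		glyphCanvas]

TTCFont >> displayCharacter: aCharacter at: aPoint on: mainCanvas
	"replay cached commands instead of blitting a bitmap"
	(self cachedGlyphFor: aCharacter on: mainCanvas)
		playOn: mainCanvas at: aPoint
```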
I'd like to discuss all of the above with you :)
Meanwhile, my current implementation of GLCanvas draws a World 3 to 8 times faster than the current Display, and I think it can be even faster. I believe performance can be raised to the level where it becomes possible to draw the entire World into a back buffer and still keep decent frame rates.
Hi sig,
Make your code available, so we can study your ideas in detail and discuss them.
Cheers, Juan Vuletich
I didn't change a bit in the current classes; I made my own, which work only on Windows for now and require recompiling the Squeak binary, because I need some extra functionality from the OS. If you are interested in reviewing my code, I can send it to you, just tell me where. But to test it you will need to rebuild Squeak...
I simply don't want to make changes to other classes, because I don't want to make them work only for my own purposes. I want to discuss the design and hear from others. And if most of you agree, I can start making things.
Mmm I'll assume you looked at http://wiki.squeak.org/squeak/3862 Areithfa Ffenestri (Welsh for platform windows) is an architecture for supporting multiple host platform windows within Squeak.
--
John M. McIntosh <johnmci@smalltalkconsulting.com>
Corporate Smalltalk Consulting Ltd. http://www.smalltalkconsulting.com
Thanks for the link. I read the package description in the package collection. If you want, I can criticise it :)
Window updating: again, your architecture is blt-centric, and more blitting means more slowing down of the system :)
Managing multiple OS windows is beyond this topic. It simply adds another Display instance and/or canvas. There are no conflicts with the things I'm proposing.
In bug reports I noted that having a single global instance of Display is a bad idea. It would be much better to have multiple instances (each with its own canvas) and a manager which controls them. But this leads to rethinking what World is. Is there a single one for the entire image, which somehow controls its parts for each display, or must we have a different World for each Display? I can't answer this question yet :)
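Very roughly, the shape I imagine (everything here is hypothetical; nothing like it exists in the image):

```smalltalk
"Sketch: a registry of display mediums, each owning its canvas,
 instead of the single global Display."
DisplayManager >> addDisplay: aMedium
	displays add: aMedium

DisplayManager >> redrawWorld: aWorld
	"whether one World spans all displays, or each display gets
	 its own World, is exactly the open question"
	displays do: [:each | aWorld fullDrawOn: each getCanvas]
```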
Related to Ffenestri: doesn't the Squeak VM try to open the normal window first and then get told not to, or something like that? How hard would it be to make the display part of the code a class that is configurable at start time, so that the image could start in Ffenestri, some other display, or even headless, without doing the extra work?
From: sig <siguctua@gmail.com>
Reply-To: The general-purpose Squeak developers list <squeak-dev@lists.squeakfoundation.org>
To: johnmci@smalltalkconsulting.com, "The general-purpose Squeak developers list" <squeak-dev@lists.squeakfoundation.org>
Subject: Re: Morphic graphics, Displaying fonts & canvases
Date: Fri, 20 Apr 2007 19:58:33 +0300
At this moment I can't say anything about that. In my own vision, there should be some kind of manager (maybe Ffenestri) which creates and manages OS windows (note that a window can lack a graphical front-end; it can be a plain text console, which might be helpful for many reasons). But no more than that. The GUI must be Morphic or MVC. I'm against using OS windows for creating/controlling the GUI. It gives no advantage compared to Morphic, but makes you even more dependent on the OS.
Well, that depends on the VM. Currently, when the VM runs non-headless, at some point it cheerfully opens a window; when this happens is a VM issue. The Macintosh VM would not open the window until the first draw to the Display occurred. This decision was made about 10 years back because the originally written code would open the window, but Squeak would take many seconds on 25 MHz 68030 machines to get around to drawing, leaving you with a white window.
The side effect of this is that when people make changes to Morphic and seriously break it, one can open an image on the Mac and not get a window, because the drawing code is never executed. However, this then blocks cmd-'.', because the keyboard handlers are not installed, since that requires a window.
On *other* platforms I believe the window would open, and cmd-'.' would work.
3.8.17b1, which I have not released yet, has an Info.plist setting that decides whether the image must explicitly open the main window versus auto-opening. It also allows you to close the main window and later reopen it, or have the image decide exactly when the window should be opened.
For the Mac Carbon VM there is other complexity: when used as the base for the browser plugin it runs without a window, but when you switch to full screen it must come to the foreground, install a window, and install the keyboard/mouse handlers. Later, when you switch out of full screen, it hides the window and switches the browser to the foreground, which then accepts the keyboard/mouse events and passes them to the VM for interpretation.
When we wrote Ffenestri we changed the system so that the default window only opens if forceToScreen (or something like that, memory fails me right now) is called. The idea is that if you write a startup script (for example) that never actually draws to a display, then no window will be created.
In answer to sig's point about Ffenestri still being BitBlt focussed: well, yes, I guess it is, but that is because the system as a whole is blit centred. Nothing prevents you from using the Ffenestri calls to open and manipulate windows without copying bitmaps to them. If you want to add a way to get some OS identifier for the window to pass to Cairo/OpenGL/whatever, then feel free. It's open source, after all. Don't forget the Cairo interface plugin/classes either; a lot of work has been done, but much remains. You might make much faster progress by building on what is already there.
tim
--
tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim
Useful Latin Phrases: Cave ne ante ullas catapultas ambules = If I were you, I wouldn't walk in front of any catapults.
Wow, a ton of work has been done. I didn't forget, I just didn't know ;) In the RomePlugin I see that most primitives do all the hard work at the VM level, as all plugins do. I don't think this is the right way of doing things. My choice is FFI calls, since OpenGL is present by default on most modern platforms. This gives me much more control and freedom over what I can and can't do in Squeak, without writing extra code or installing additional plugins with tons of functions. I don't know how easy it is to develop and debug plugins for Squeak, but I think it's harder than just running Smalltalk code. I wrote a small plugin with a few functions which just enable an OpenGL context for the main window, and from my experience, writing anything more than that would require a huge amount of time and would then be hard to maintain and extend.
And since any drawing using hardware acceleration leads to calls into OpenGL/DirectX/(your favorite) anyway, I see no reason why I should not use them directly from Squeak.
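For concreteness, binding a single GL entry point through the Squeak FFI looks roughly like this (the module name shown is the Windows one; other platforms name their GL library differently):

```smalltalk
"FFI binding sketch for one OpenGL call."
GLCanvas >> glClear: maskBits
	<cdecl: void 'glClear' (ulong) module: 'opengl32.dll'>
	^ self externalCallFailed
```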
I have long experience writing programs in C/C++ and came to Squeak because I needed something better than that. Writing plugins which require more than a few lines of code is the road back to C; why would we need Squeak at all, then?
On 20-Apr-07, at 1:15 PM, sig wrote:
Wow, a ton of work has been done. I didn't forget, I just didn't know ;)
No special reason you should know about everything that is going on.
In the RomePlugin I see that most primitives do all the hard work at the VM level, as all plugins do. I don't think this is the right way of doing things.
Using FFI is fine, especially while you're experimenting. Once you know what you want to do it can be very advantageous to use a plugin because it is significantly faster at talking to the external code. It can also be a convenient way to hide platform differences; take a look at the platform specific code for the Ffenestri stuff for RISC OS and OSX for an example. Most plugins are really simple to write and building them is rarely problematic.
Do whatever works for you; solving your problem is the important thing. Extending Ffenestri to give you the window handle ought to be trivial and would save you redoing a bunch of stuff.
tim
--
tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim
Useful random insult: She's a screensaver: looks good, but useless.
Well, for that I would need to encapsulate the whole OpenGL API in primitives. If this can improve speed and there is some automated way to do it, I don't see a reason not to. I'm not maniac enough to do it manually :)
I need the API in front of me, because I want all its capabilities to be accessible to morphs, so that morphs can render themselves in 3D or 2D, or mix both. There will be some rules they should not cross, mainly to ensure that the commands they pass to OpenGL let other morphs be drawn correctly. Imagine a morph which acts as a container for others, and all it does is remove the saturation component from colors. Its child morphs are then shown in grayscale, while other morphs are drawn as usual.
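The grayscale container could then be written along these lines (#saturation:during: is a made-up Canvas message, shown just to illustrate the idea):

```smalltalk
"Sketch: a container morph that desaturates its children's output
 while other morphs keep drawing in full color."
GrayscaleMorph >> fullDrawOn: aCanvas
	aCanvas saturation: 0 during: [:grayCanvas |
		submorphs reverseDo: [:each | each fullDrawOn: grayCanvas]]
```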
You are looking at the problem from the wrong angle. If you want to change the way morphs are drawn you need to be looking at replacing Canvas, not DisplayMedium.
Canvases generally make little to no assumptions about their medium; they use abstract drawing commands (lines, rectangles, polygons) instead of using BitBlt directly. They also encapsulate their backend (which may or may not be a DisplayMedium). Some canvases (like the one used by the remote Nebraska sharing protocol) don't even have a DisplayMedium to operate on. Etc.
In short: If you look to change the way morphs are drawn, look at Canvas and its subclasses. DisplayMedium is the wrong place for that.
Cheers, - Andreas
On 20/04/07, Andreas Raab andreas.raab@gmx.de wrote:
You are looking at the problem from the wrong angle. If you want to change the way morphs are drawn you need to be looking at replacing Canvas, not DisplayMedium.
A canvas is 'the tool' with which you draw on a medium. Given a medium, you need a canvas to draw on it. Is this correct? I can't imagine a canvas without a medium. Can you imagine a file without a file system? A file system establishes rules for how it is organized internally, while to the outside world all file systems share common protocols: open a file, write to it, close it. The same goes for a display medium: we pass primitive commands to the canvas, and it acts as a bridge between the medium's internal state and the common protocol.
Canvases generally make little to no assumptions about their medium; they use abstract drawing commands (lines, rectangles, polygons) instead of using BitBlt directly. They also encapsulate their backend (which may or may not be a DisplayMedium). Some canvases (like the one used by the remote Nebraska sharing protocol) don't even have DisplayMedium to operate on. Etc.
You say no assumptions? Then what is this:

Canvas >> form
	^ Display

First, it assumes that a canvas has a form; second, it assumes that this form is the current Display. Do you want more examples?
Canvases, in one way or another, must have a connection to their medium - they must know how to control it, e.g. what commands to send to draw a line or circle on it, etc. But morphs cannot even think about how or where they draw themselves beyond the Canvas protocol, and can't even think about retrieving some kind of form. As I said before, drawing is a one-way road! Take a PostScript canvas which generates a PS file from the commands you pass to it. What is the meaning of sending #form to it? Does the PostScript file have a pixel at 0@0? Can you read its value?
In short: If you look to change the way morphs are drawn, look at Canvas and its subclasses. DisplayMedium is the wrong place for that.
I don't know if you read my mail to the end. I implemented GLCanvas, which is a subclass of Canvas. And why do you think I'm talking about DisplayMedium? Because currently Canvas can't give me a way of fully controlling drawing operations for Display. That's why.
sig wrote:
On 20/04/07, Andreas Raab andreas.raab@gmx.de wrote:
You are looking at the problem from the wrong angle. If you want to change the way morphs are drawn you need to be looking at replacing Canvas, not DisplayMedium.
The canvas is 'the tool' with which you can draw on a medium. Given a medium, you need a canvas to draw on it. Is this correct?
That depends on whether you mean the question philosophically or practically. Philosophically you're right, but practically, there is really no need for a DisplayMedium.
I can't imagine a canvas without a medium. Can you imagine a file without a file system? A file system establishes rules on how it is organized internally, while to the outer world all file systems have common protocols: open a file, write to it, close it.
Again, depends on whether you want to be practical or philosophical. Practically speaking I have often dealt with files without spending a single thought on the file system. In many cases the files we pass around are encapsulated enough that you really can't tell whether that "file" comes from a file system or whether it's just a stream of bytes. The same goes for Canvas and DisplayMedium; canvas is a fairly high-level abstraction such that its interface can be implemented in various different ways, some of which may not require a display medium at all.
Canvases generally make little to no assumptions about their medium; they use abstract drawing commands (lines, rectangles, polygons) instead of using BitBlt directly. They also encapsulate their backend (which may or may not be a DisplayMedium). Some canvases (like the one used by the remote Nebraska sharing protocol) don't even have DisplayMedium to operate on. Etc.
You say no assumptions? Then what is this:
I said "little to no" assumptions which in English means that there may be some assumptions but that they are not that important. You found one of those, correct, but this is no contradiction of my statement.
Canvas>>form
	^ Display
First, it assumes that a canvas has a Form; second, it assumes that this form is the current Display. Do you want more examples?
Only if they are relevant to this discussion. I'm sure there are a few more places where such dependencies exist but they are easy to fix if one wanted to (I know this because I have implemented canvases that didn't use a display medium).
Canvases, in one way or another, must have a connection to their medium - they must know how to control it, e.g. what commands to send to draw a line or circle on it, etc. But morphs cannot even think about how or where they draw themselves beyond the Canvas protocol, and can't even think about retrieving some kind of form. As I said before, drawing is a one-way road! Take a PostScript canvas which generates a PS file from the commands you pass to it. What is the meaning of sending #form to it? Does the PostScript file have a pixel at 0@0? Can you read its value?
Sending #form would be meaningless (and the PSCanvas should consequently answer nil), but it could still answer the color at 0@0 if that were required. It would be slow, yes, but it could be done merely by executing the drawing commands and recording the intersections with 0@0 (depending on your tradeoffs that may be perfectly acceptable).
In short: If you look to change the way morphs are drawn, look at Canvas and its subclasses. DisplayMedium is the wrong place for that.
I don't know if you read my mail to the end. I implemented GLCanvas, which is a subclass of Canvas. And why do you think I'm talking about DisplayMedium?
Well, to be honest I'm not sure ;-) You seem to be making more of a philosophical point about "how things ought to be", but if you have practical issues maybe we should discuss those instead?
Because currently Canvas can't give me a way of fully controlling drawing operations for Display. That's why.
Then maybe we should talk about the concrete problems you have. Like, what are you trying to do and where does Display get in the way? We have done a lot of this in Croquet in the past and I'm pretty sure I could give you a bit of advice on addressing the problems you have.
Cheers, - Andreas
On 21/04/07, Andreas Raab andreas.raab@gmx.de wrote:
That depends on whether you mean the question philosophically or practically. Philosophically you're right, but practically, there is really no need for a DisplayMedium.
The difference shows when at the philosophical level you state one thing but in practice follow different rules. This leads to code that becomes unmanageable and hard to improve.
I can't imagine a canvas without a medium. Can you imagine a file without a file system? A file system establishes rules on how it is organized internally, while to the outer world all file systems have common protocols: open a file, write to it, close it.
Again, depends on whether you want to be practical or philosophical. Practically speaking I have often dealt with files without spending a single thought on the file system. In many cases the files we pass around are encapsulated enough that you really can't tell whether that "file" comes from a file system or whether it's just a stream of bytes. The same goes for Canvas and DisplayMedium; canvas is a fairly high-level abstraction such that its interface can be implemented in various different ways, some of which may not require a display medium at all.
OK, I follow your point. Then we can define Canvas as an intermediate object which pipelines primitive commands to some kind of medium. One way or another, a canvas in its working state must be connected to its medium to be able to produce anything. Yes, you can fully encapsulate the medium in the canvas and not rely on other objects/classes to produce output, but then tell me what happens when you create two different canvases to draw on the same medium and you can't control the order in which drawing commands will be issued? Imagine two canvases opening the same PostScript file or printer and writing there without knowing about each other. Obviously, to prevent such behaviour we need an abstract class which represents a single unique medium; it can then control how and when you may draw on it, simply by answering a #getCanvas message. Maybe this is a different angle, but for DisplayScreen it's the right angle. Correct me if I'm wrong.
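The single-owner idea could be sketched roughly like this. All names here are illustrative assumptions (PrinterMedium and PrinterCanvas are not existing classes); the point is only that the medium hands out exactly one canvas, so two clients can never interleave their command streams behind each other's backs.

```smalltalk
"Hypothetical sketch: the medium is the sole owner of its output and
answers the one canvas permitted to draw on it."
DisplayMedium subclass: #PrinterMedium
	instanceVariableNames: 'canvas'
	classVariableNames: ''
	category: 'Graphics-Sketches'

PrinterMedium>>getCanvas
	"Answer the single canvas allowed to draw on this medium,
	creating it lazily on first request."
	^ canvas ifNil: [canvas := PrinterCanvas on: self]
```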
Only if they are relevant to this discussion. I'm sure there are a few more places where such dependencies exist but they are easy to fix if one wanted to (I know this because I have implemented canvases that didn't use a display medium).
But for my purposes I need to replace Display with my own instance. There is no way to avoid that: I need all the drawing to be redirected to OpenGL. And stating that an OpenGL display medium is a rectangular area of pixels leads to great limitations and drawbacks which would prevent me from using OpenGL at full throttle. Of course I can fix this. But it will not be an elegant solution, because of the wrongly designed interaction protocol between canvas and display medium.
Then maybe we should talk about the concrete problems you have. Like, what are you trying to do and where does Display get in the way? We have done a lot of this in Croquet in the past and I'm pretty sure I could give you a bit of advice on addressing the problems you have.
There are many of them. I can't remember them all right now, but for the future I will know where to ask :)
Take a look at the senders of #defaultCanvasClass and its implementation. Its most frequent use is:
Display defaultCanvasClass extent: extent depth: depth
Currently I have no choice but to return a FormCanvas, but in the future I intend to remove this method altogether and change its users to use a caching canvas.
Something like this:
cache := Display getCanvas asCachingCanvas
..issue drawings to cache ..
cache atSourceRect: rect displayAt: aPoint
I managed to make a somewhat stable GLDisplayScreen. The GLCanvas class currently fully supports drawing using the Canvas protocol, but only basic things from the BalloonCanvas protocol. For most morphs the Canvas protocol is enough to draw: all menus/system browsers are shown correctly, without any artifacts.
Now about problems. As I said before, the problem lies in a different approach to preserving state while drawing on the screen. With blitting-based drawing it's OK to copy a canvas to preserve clipping/transformation state; in OpenGL this is a nearly impossible task. Consider the following example:
Most morphs, when drawing themselves, do the following:
- drawing self
- (optionally) clipping the current area by its boundaries
- (optionally) translating position
- (optionally) drawing submorphs.
A compatible approach:
aCanvas clipBy: rect during: [ ...draw submorphs... ].
aCanvas translateBy: rect during: [ ...draw submorphs... ]
An incompatible approach:
| tempCanvas |
tempCanvas := aCanvas copyOffset: translateRect.
submorphs do: [:morph | tempCanvas drawMorph: morph]
In the first example it's easy to preserve state: I can simply apply the transformation/clipping before entering the block and then, when it finishes, revert it back.
In the second example things are different: when creating a temporary canvas I would have to copy all state from the original canvas, plus the additional transformation/clipping, and draw all future commands under this new state, while on the original canvas I must keep drawing without any changes.
This leads to fully reloading the OpenGL state (transform matrix/clipping/flags/etc.) each time a new draw command is issued, because I can't predict when tempCanvas will be in use and when the original canvas will. I can't even predict the order in which they will be drawn on the screen.
So, as far as possible, I have started changing the code in morphs to use the first approach instead of the second; it does the same things but allows me to manage OpenGL state accurately.
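To illustrate why the block-based protocol fits OpenGL so well: the first approach maps directly onto OpenGL's matrix stack. A rough sketch follows; the 'ogl' FFI binding and the keyword-message spellings are assumptions for illustration, not the actual GLCanvas implementation.

```smalltalk
"Sketch only: 'ogl' stands for some OpenGL FFI binding. State is saved
on entry and restored on exit, so the canvas never needs to be copied."
GLCanvas>>translateBy: aPoint during: aBlock
	ogl glPushMatrix.
	ogl glTranslatef: aPoint x asFloat with: aPoint y asFloat with: 0.0.
	aBlock value: self.
	ogl glPopMatrix
```

The copy-based approach has no such natural mapping: there is no GL call that forks the current state into an independent copy that can be used out of order.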
In my vision a canvas must represent the full state of a display medium. I see few cases where preserving state during copying can be handled correctly. Most devices/drawing protocols are built on the idea of a sequential pipeline of commands, and if you don't follow these rules you get white snow as a result.
Btw, the BalloonCanvas class specifically introduces a #preserveStateDuring: method which must be used by anyone who wants to draw correctly on the screen.
Some notes about clipping: I do clipping using the stencil buffer. For those who don't know what it is: consider that a single pixel consists of RGB or RGBA components. By adding a stencil component you get RGB+S or RGBA+S - additional bits for each pixel in the buffer, used for special operations like clipping. Since OpenGL stores it on a per-pixel basis, and drawing to it can be done with regular draw commands, we can have any shape of clipping area, not just rectangular. So we could introduce new clipping methods, like this one for example:
Canvas drawClipArea: clipBlock drawContents: contentBlock.
In fact, the default clipBy:during: is a simplified version of the above: it draws a rectangle into the pixel buffer but modifies only the S component, keeping the color components unchanged. Then the second block draws into the color components while keeping the S component unchanged. As you may have noticed, preserving the clipping state would require an enormous amount of time and space - you would need to copy the S component of every pixel to do that.
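The stencil-based #drawClipArea:drawContents: could look roughly like this in GL terms. The 'ogl' binding and the keyword spellings are assumptions; the two-pass stencil logic itself is standard OpenGL.

```smalltalk
"Sketch: the first pass draws the clip shape into the stencil buffer only
(color writes masked off); the second pass draws the contents with the
stencil test rejecting every pixel outside the clip shape."
GLCanvas>>drawClipArea: clipBlock drawContents: contentBlock
	ogl glEnable: GL_STENCIL_TEST.
	ogl glColorMask: false with: false with: false with: false.	"stencil only"
	ogl glStencilFunc: GL_ALWAYS with: 1 with: 16rFF.
	ogl glStencilOp: GL_KEEP with: GL_KEEP with: GL_REPLACE.
	clipBlock value: self.
	ogl glColorMask: true with: true with: true with: true.	"color again"
	ogl glStencilFunc: GL_EQUAL with: 1 with: 16rFF.
	ogl glStencilOp: GL_KEEP with: GL_KEEP with: GL_KEEP.
	contentBlock value: self.
	ogl glDisable: GL_STENCIL_TEST
```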
Do you even need to clip (except possibly in a few special cases) if each element has, or can rationally be assigned, a z-depth value? I admit to not being familiar with Squeak at the depth (pun intended) you are coding, but it seems to me that clipping could be avoided, assuming your ultimate objective is to use OpenGL to render everything. Whatever your method or motives, keep up the good work - I'm looking forward to the results.
On 4/24/07, sig siguctua@gmail.com wrote:
I managed to make a somewhat stable GLDisplayScreen. The GLCanvas class currently fully supports drawing using the Canvas protocol, but only basic things from the BalloonCanvas protocol. For most morphs the Canvas protocol is enough to draw: all menus/system browsers are shown correctly, without any artifacts.
Now about problems. As I said before, the problem lies in a different approach to preserving state while drawing on the screen. With blitting-based drawing it's OK to copy a canvas to preserve clipping/transformation state; in OpenGL this is a nearly impossible task. Consider the following example:
Most morphs, when drawing themselves, do the following:
- drawing self
- (optionally) clipping current area by its boundaries
- (optionally) translating position
- (optionally) drawing submorphs.
A compatible approach:
aCanvas clipBy: rect during: [ ...draw submorphs... ].
aCanvas translateBy: rect during: [ ...draw submorphs... ]
An incompatible approach:
| tempCanvas |
tempCanvas := aCanvas copyOffset: translateRect.
submorphs do: [:morph | tempCanvas drawMorph: morph]
In the first example it's easy to preserve state: I can simply apply the transformation/clipping before entering the block and then, when it finishes, revert it back.
In the second example things are different: when creating a temporary canvas I would have to copy all state from the original canvas, plus the additional transformation/clipping, and draw all future commands under this new state, while on the original canvas I must keep drawing without any changes.
This leads to fully reloading the OpenGL state (transform matrix/clipping/flags/etc.) each time a new draw command is issued, because I can't predict when tempCanvas will be in use and when the original canvas will. I can't even predict the order in which they will be drawn on the screen.
So, as far as possible, I have started changing the code in morphs to use the first approach instead of the second; it does the same things but allows me to manage OpenGL state accurately.
In my vision a canvas must represent the full state of a display medium. I see few cases where preserving state during copying can be handled correctly. Most devices/drawing protocols are built on the idea of a sequential pipeline of commands, and if you don't follow these rules you get white snow as a result.
Btw, the BalloonCanvas class specifically introduces a #preserveStateDuring: method which must be used by anyone who wants to draw correctly on the screen.
Some notes about clipping: I do clipping using the stencil buffer. For those who don't know what it is: consider that a single pixel consists of RGB or RGBA components. By adding a stencil component you get RGB+S or RGBA+S - additional bits for each pixel in the buffer, used for special operations like clipping. Since OpenGL stores it on a per-pixel basis, and drawing to it can be done with regular draw commands, we can have any shape of clipping area, not just rectangular. So we could introduce new clipping methods, like this one for example:
Canvas drawClipArea: clipBlock drawContents: contentBlock.
In fact, the default clipBy:during: is a simplified version of the above: it draws a rectangle into the pixel buffer but modifies only the S component, keeping the color components unchanged. Then the second block draws into the color components while keeping the S component unchanged. As you may have noticed, preserving the clipping state would require an enormous amount of time and space - you would need to copy the S component of every pixel to do that.
Yes Bert, I realised as I posted that it was a stupid comment - I just couldn't stop myself posting in time, lol. Manipulating the depth buffer would be a poor man's clipping, and inflexible. Thanks for correcting me.
On 4/24/07, Bert Freudenberg bert@freudenbergs.de wrote:
On Apr 24, 2007, at 11:45 , Derek O'Connell wrote:
Do you even need to clip (except in possibly a few special cases) if each element has, or can be rationally assigned, a z-depth value?
Imagine a scrolled text in a window. Yes, you need to clip.
- Bert -
Since I'm currently drawing 2D scenes I don't need to clip by depth (Z) value. But of course for drawing any 3D objects I will need it.
I still can't imagine what it will look like when we introduce a Z coordinate for morphs. I draw the scene in an orthographic projection, and drawing the same figure with different Z values makes no visual difference.
In the future, a nice-looking perspective projection could be introduced such that at Z = 0 your object's X,Y will be equal to screen X,Y, while objects with Z > 0 or Z < 0 will be zoomed in/out relative to the center of the screen.
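For reference, such a projection is easy to construct: put the camera at distance d in front of the Z = 0 plane and choose the vertical field of view so that the frustum is exactly one screen tall at that distance. A sketch follows; 'ogl', 'screenWidth', 'screenHeight', and the gluPerspective keyword spellings are assumptions, though the trigonometry is standard.

```smalltalk
"Sketch: a perspective projection in which the Z = 0 plane maps 1:1 to
screen pixels. Objects at Z > 0 move toward the eye and appear zoomed in;
Z < 0 zooms out. 'd' is an arbitrary camera distance."
| d fovY |
d := 1000.0.
fovY := 2.0 * (screenHeight / 2.0 / d) arcTan radiansToDegrees.
ogl gluPerspective: fovY aspect: screenWidth / screenHeight near: 1.0 far: 10000.0
```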
Since I'm currently drawing 2D scenes I don't need to clip by depth (Z) value. But of course for drawing any 3D objects I will need it.
Excuse me being pedantic but you are still drawing 3D, just flattened by the orthographic projection....
I draw the scene in an orthographic projection, and drawing the same figure with different Z values makes no visual difference.
Of course you can ignore or just throw away z-values but if different they do still affect the rendering, assuming you enable depth buffering. In fact you can get some funky overlapping effects by manipulating the depth buffer.
In the future, a nice-looking perspective projection could be introduced such that at Z = 0 your object's X,Y will be equal to screen X,Y, while objects with Z > 0 or Z < 0 will be zoomed in/out relative to the center of the screen.
IMHO it is probably wise to plan for this from the start. For example, when you get to this stage, will rotation be desired/allowed? If so, will rotation be restricted to one axis, i.e. the z-axis? That makes sense, because otherwise rotation in either of the other two axes means widgets disappear when viewed edge-on, unless artificial "thickness" is created (as Croquet effectively does with its "2D" windows) or billboarding is used. Another consideration would be managing the extents of the set of objects (limited or infinite-ish space?) and efficiently managing many objects. Other issues would be mipmapping and/or LOD, plus maybe the not-so-obvious question of suspending objects at a distance. Many issues which you probably already address. Hope I don't overload you ;-)
By the way, my interests are entirely selfish :-) I hope one day to be freed from the constraints of a dumb 2D desktop and be able to organise myself in a 3D (probably 2.5D) space populated with intelligent objects!
Back to Bert's example of a text editor: I accept that the depth buffer is not the best way to do clipping. However, I wonder if there is an alternative approach where clip regions are implemented simply by using render-to-texture, or FBOs if available? There may be additional benefits. I'm not up to date on OpenGL, but regardless of approach I think some degree of graphics memory management will be needed, which leads to...
Are you concerned at the moment with the target spec of end-user machines? My interest in 3D environments reached its peak maybe four years ago and then waned as manufacturers dragged their collective feet and failed to incorporate 3D hardware as standard. The situation is not much better today, but we can thank MS at least for upping the ante with Vista (I can't believe I'm saying that, and no, I personally have no intention of "upgrading" to Vista). But with plans to add 3D hardware to mobile phones, and the low cost of cards for desktops, the future looks good. The downside is that many people now choose notebooks over desktops, and unfortunately many still don't have 3D hardware... personally I blame the users for not demanding it :-)
On 24/04/07, Derek O'Connell doconnel@gmail.com wrote:
Since I'm currently drawing 2D scenes I don't need to clip by depth (Z) value. But of course for drawing any 3D objects I will need it.
Excuse me being pedantic but you are still drawing 3D, just flattened by the orthographic projection....
Of course. But by default I set the canvas to this type of projection because most morphs assume that they draw on a 2D canvas.
I draw the scene in an orthographic projection, and drawing the same figure with different Z values makes no visual difference.
Of course you can ignore or just throw away z-values but if different they do still affect the rendering, assuming you enable depth buffering. In fact you can get some funky overlapping effects by manipulating the depth buffer.
I'm aware of that. Enabling depth-test will come in the next stage.
In the future, a nice-looking perspective projection could be introduced such that at Z = 0 your object's X,Y will be equal to screen X,Y, while objects with Z > 0 or Z < 0 will be zoomed in/out relative to the center of the screen.
IMHO it is probably wise to plan for this from the start. For example, when you get to this stage, will rotation be desired/allowed? If so, will rotation be restricted to one axis, i.e. the z-axis? That makes sense, because otherwise rotation in either of the other two axes means widgets disappear when viewed edge-on, unless artificial "thickness" is created (as Croquet effectively does with its "2D" windows) or billboarding is used. Another consideration would be managing the extents of the set of objects (limited or infinite-ish space?) and efficiently managing many objects. Other issues would be mipmapping and/or LOD, plus maybe the not-so-obvious question of suspending objects at a distance. Many issues which you probably already address. Hope I don't overload you ;-)
Such activities are beyond the Canvas's abilities. For nice and effective 3D positioning, and to restrict some operations (like limiting rotation etc.), there must be some kind of 3D layout manager - a class which establishes these rules and makes sure that you follow them. It could be the World instance or some kind of Morph3DLayout which draws its submorphs in a special fashion. It's very hard to control all these aspects at the Canvas level, and in fact doing so can limit the features available to its potential users. So the less restrictive Canvas is, the better for all of us.
By the way, my interests are entirely selfish :-) I hope one day to be freed from the constraints of a dumb 2D desktop and be able to organise myself in a 3D (probably 2.5D) space populated with intelligent objects!
Same as you! Nothing can be better than 2D when you need a plain text reader/editor or windows etc. And then, if you need 3D, most desktop systems require you to allocate a rectangular area to draw in. My efforts are directed at removing this ridiculous restriction and allowing 2D and 3D objects to coexist in harmony, managed by the same manager without extra distinctions.
Back to Bert's example of a text editor: I accept that the depth buffer is not the best way to do clipping. However, I wonder if there is an alternative approach where clip regions are implemented simply by using render-to-texture, or FBOs if available? There may be additional benefits. I'm not up to date on OpenGL, but regardless of approach I think some degree of graphics memory management will be needed, which leads to...
Yes, rendering to texture must be allowed. This is very useful when a morph wants to cache some of its output. (In fact, to support the currently implemented shadows/draggable morph contents it's the best way to do it.) But still, for the reasons I showed before, I can't let anyone copy a canvas. I will instead introduce a good caching protocol, so cached surfaces (NOT FORMS!!) will refer to the main canvas and will have a limited protocol, much more restrictive than Form's. For a generic Canvas implementation we can't assume that a cached surface is stored as an area of pixels (it could be stored as a set of commands, for example), so including behavior which relies on such a presumption would be absurd.
Preliminarily, the caching will look something like this:
| cache |
cache := aCanvas createCachedSurface: extent.
aCanvas drawToSurface: cache during: [ ... ].
aCanvas drawCachedSurface: cache at: aPoint. "like drawImage"
It's OK to store the 'cache' var as long as you need it. Since all drawing to it is controlled by the canvas, there is no risk of breaking the canvas's internal state.
Are you concerned at the moment with the target spec of end-user machines? My interest in 3D environments reached its peak maybe four years ago and then waned as manufacturers dragged their collective feet and failed to incorporate 3D hardware as standard. The situation is not much better today, but we can thank MS at least for upping the ante with Vista (I can't believe I'm saying that, and no, I personally have no intention of "upgrading" to Vista). But with plans to add 3D hardware to mobile phones, and the low cost of cards for desktops, the future looks good. The downside is that many people now choose notebooks over desktops, and unfortunately many still don't have 3D hardware... personally I blame the users for not demanding it :-)
The target spec is a machine with an OpenGL library which can be dynamically linked with Squeak and used through the FFI interface. The OpenGL version may differ. One assumption I make now is that the OpenGL implementation allows creating textures with non-power-of-two dimensions; most current implementations of OpenGL (2.0 by default) support this feature. The second is rendering to textures. I think this will be enough for my GLCanvas. Any extra OpenGL functionality, like pixel/vertex shader programs, can be used by morphs via the OpenGL FFI optionally, but GLCanvas functionality will not depend on them. Or maybe I'll provide some extra protocols which allow safely using such capabilities without risk of damaging internal state.
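The non-power-of-two assumption can at least be checked at startup by inspecting the extension string. A sketch (the 'ogl' accessor and GL_EXTENSIONS constant binding are assumptions; the extension name itself is the standard ARB one):

```smalltalk
"Sketch: detect GL_ARB_texture_non_power_of_two support so GLCanvas could
fall back to padded power-of-two textures on older drivers."
| extensions supportsNPOT |
extensions := ogl glGetString: GL_EXTENSIONS.
supportsNPOT := extensions includesSubstring: 'GL_ARB_texture_non_power_of_two'
```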
One might think that this is an already-solved problem. I don't know.
You might look for comparison at Morgan McGuire's 2D API for Curl, which supports a lot of the same goals. He stated these as extensible, object-oriented, immediate mode 2D graphics AND text. It facilitates retained and 3D API's above and OpenGL below. http://www.cs.brown.edu/~morgan/papers/CurlGraphicsThesis.doc Prof. McGuire later worked with John Hughes' group at Brown, so he may have learned something since then that would cause him to reconsider...
sig wrote:
On 24/04/07, Derek O'Connell doconnel@gmail.com wrote:
Since I'm currently drawing 2D scenes I don't need to clip by depth (Z) value. But of course for drawing any 3D objects I will need it.
Excuse me being pedantic but you are still drawing 3D, just flattened by the orthographic projection....
Of course. But by default I set the canvas to this type of projection because most morphs assume that they draw on a 2D canvas.
I draw the scene in an orthographic projection, and drawing the same figure with different Z values makes no visual difference.
Of course you can ignore or just throw away z-values but if different they do still affect the rendering, assuming you enable depth buffering. In fact you can get some funky overlapping effects by manipulating the depth buffer.
I'm aware of that. Enabling depth-test will come in the next stage.
In the future, a nice-looking perspective projection could be introduced such that at Z = 0 your object's X,Y will be equal to screen X,Y, while objects with Z > 0 or Z < 0 will be zoomed in/out relative to the center of the screen.
IMHO it is probably wise to plan for this from the start. For example, when you get to this stage, will rotation be desired/allowed? If so, will rotation be restricted to one axis, i.e. the z-axis? That makes sense, because otherwise rotation in either of the other two axes means widgets disappear when viewed edge-on, unless artificial "thickness" is created (as Croquet effectively does with its "2D" windows) or billboarding is used. Another consideration would be managing the extents of the set of objects (limited or infinite-ish space?) and efficiently managing many objects. Other issues would be mipmapping and/or LOD, plus maybe the not-so-obvious question of suspending objects at a distance. Many issues which you probably already address. Hope I don't overload you ;-)
Such activities are beyond the Canvas's abilities. For nice and effective 3D positioning, and to restrict some operations (like limiting rotation etc.), there must be some kind of 3D layout manager - a class which establishes these rules and makes sure that you follow them. It could be the World instance or some kind of Morph3DLayout which draws its submorphs in a special fashion. It's very hard to control all these aspects at the Canvas level, and in fact doing so can limit the features available to its potential users. So the less restrictive Canvas is, the better for all of us.
By the way, my interests are entirely selfish :-) I hope one day to be freed from the constraints of a dumb 2D desktop and be able to organise myself in a 3D (probably 2.5D) space populated with intelligent objects!
Same as you! Nothing can be better than 2D when you need a plain text reader/editor or windows etc. And then, if you need 3D, most desktop systems require you to allocate a rectangular area to draw in. My efforts are directed at removing this ridiculous restriction and allowing 2D and 3D objects to coexist in harmony, managed by the same manager without extra distinctions.
Back to Bert's example of a text editor: I accept that the depth buffer is not the best way to do clipping. However, I wonder if there is an alternative approach where clip regions are implemented simply by using render-to-texture, or FBOs if available? There may be additional benefits. I'm not up to date on OpenGL, but regardless of approach I think some degree of graphics memory management will be needed, which leads to...
Yes, rendering to texture must be allowed. This is very useful when a morph wants to cache some of its output. (In fact, to support the currently implemented shadows/draggable morph contents it's the best way to do it.) But still, for the reasons I showed before, I can't let anyone copy a canvas. I will instead introduce a good caching protocol, so cached surfaces (NOT FORMS!!) will refer to the main canvas and will have a limited protocol, much more restrictive than Form's. For a generic Canvas implementation we can't assume that a cached surface is stored as an area of pixels (it could be stored as a set of commands, for example), so including behavior which relies on such a presumption would be absurd.
Preliminarily, the caching will look something like this:
| cache |
cache := aCanvas createCachedSurface: extent.
aCanvas drawToSurface: cache during: [ ... ].
aCanvas drawCachedSurface: cache at: aPoint. "like drawImage"
It's OK to store the 'cache' var as long as you need it. Since all drawing to it is controlled by the canvas, there is no risk of breaking the canvas's internal state.
Are you concerned at the moment with the target spec of end-user machines? My interest in 3D environments reached its peak maybe four years ago and then waned as manufacturers dragged their collective feet and failed to incorporate 3D hardware as standard. The situation is not much better today, but we can thank MS at least for upping the ante with Vista (I can't believe I'm saying that, and no, I personally have no intention of "upgrading" to Vista). But with plans to add 3D hardware to mobile phones, and the low cost of cards for desktops, the future looks good. The downside is that many people now choose notebooks over desktops, and unfortunately many still don't have 3D hardware... personally I blame the users for not demanding it :-)
The target spec is a machine with an OpenGL library which can be dynamically linked with Squeak and used through the FFI interface. The OpenGL version may differ. One assumption I make now is that the OpenGL implementation allows creating textures with non-power-of-two dimensions; most current implementations of OpenGL (2.0 by default) support this feature. The second is rendering to textures. I think this will be enough for my GLCanvas. Any extra OpenGL functionality, like pixel/vertex shader programs, can be used by morphs via the OpenGL FFI optionally, but GLCanvas functionality will not depend on them. Or maybe I'll provide some extra protocols which allow safely using such capabilities without risk of damaging internal state.
On 25/04/07, Howard Stearns hstearns@wisc.edu wrote:
One might think that this is an already-solved problem. I don't know.
You might look for comparison at Morgan McGuire's 2D API for Curl, which supports a lot of the same goals. He stated these as extensible, object-oriented, immediate mode 2D graphics AND text. It facilitates retained and 3D API's above and OpenGL below. http://www.cs.brown.edu/~morgan/papers/CurlGraphicsThesis.doc Prof. McGuire later worked with John Hughes' group at Brown, so he may have learned something since then that would cause him to reconsider...
i'll examine it, thanks for the link.
Meanwhile, I posted some screenshots of my current work. Take a look at http://computeradvenrutes.blogspot.com/ (I mistyped the blog name - it should be computeradventures, but I saved it as computeradvenrutes and it seems there's no way to fix it :) ). Feel free to leave comments and questions.
squeak-dev@lists.squeakfoundation.org