[squeak-dev] Possible approaches for rendering in Morphic 3

Joshua Gargus schwa at fastmail.us
Sat Sep 6 23:16:54 UTC 2008


Juan Vuletich wrote:
> Hi Igor,
>
> Igor Stasenko wrote:
>> 2008/9/4 Juan Vuletich <juan at jvuletich.org>:
>>  
>>> Hi Folks,
>>>
>>> I've added this to my web:
>>> http://www.jvuletich.org/RenderingApproaches.html
>>> . It is about the different approaches for rendering of morphs for
>>> Morphic
>>> 3. This is the problem I'm dealing with now, so I'd appreciate any
>>> ideas or
>>> pointers.
>>>
>>>     
>>
>> These two approaches have the same pros and cons as shading vs. ray
>> tracing. The shading technique uses filled triangles to draw graphics,
>> and can similarly overwrite pixels multiple times. 
>
> Didn't know that. Thanks. (BTW, I'm no 3D expert; my background is in
> Signal and Image Processing).
>
>> With ray tracing,
>> each pixel's color is computed separately, which makes it possible to
>> produce images of high photorealistic quality. The issue is the same:
>> speed.
>> I don't think that per-pixel rendering is the way to go, because most
>> hardware is not ready for it yet.
>> In 10 years the situation will change and we will have enough computing
>> power in our desktop computers to do that, but not now.
>>   
>
> Maybe you're right, but as I'm only rendering 2D objects, the
> computational cost should be much lower than for ray tracing. 

It depends on what kind of non-affine transforms you want to render
with, since this will determine the cost of each intersection test.  For
arbitrary transforms, these intersection tests can become arbitrarily
expensive.
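
To make that concrete, here's a rough C-style sketch (the radial warp and
all the helper names are invented for illustration, nothing is taken from
your actual design) of what a single per-pixel test looks like once the
transform has no closed-form inverse.  The Newton loop in the middle is
where the cost depends entirely on which transform you picked; for an
affine transform it collapses to a couple of multiply-adds.

    #include <math.h>

    typedef struct { float x, y; } Point2;

    /* Made-up forward warp for this sketch: a radial distortion
       r -> r + k*r^3 around the origin.  Chosen only because it has
       no closed-form inverse. */
    float warpedRadius(float r, float k) {
        return r + k * r * r * r;
    }

    /* Pulling one pixel back through the warp: a few Newton steps on
       the radius.  This is the per-pixel cost that varies with the
       transform. */
    float unwarpRadius(float rw, float k) {
        float r = rw;                       /* initial guess */
        for (int i = 0; i < 8; ++i) {
            float f  = warpedRadius(r, k) - rw;
            float fp = 1.0f + 3.0f * k * r * r;
            r -= f / fp;
        }
        return r;
    }

    /* The question asked for every pixel: does this screen pixel land
       inside a unit circle drawn in the morph's own coordinates? */
    int pixelInsideUnitCircle(Point2 screen, float k) {
        float rw = sqrtf(screen.x * screen.x + screen.y * screen.y);
        float r  = unwarpRadius(rw, k);
        return r <= 1.0f;  /* in general: the morph's own containment
                              test, run on the unwarped point */
    }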

> Besides, if Morphic 3 turns out to be usable only with help from OpenCL
> or CUDA or some other special hardware, that is not too bad.

My first thought when I read your original post was "how is he planning
to take advantage of graphics hardware?".  In order to use OpenCL or
CUDA effectively, you need to set up a parallelized workload for the GPU
to munch through.  The implementation you describe ("for each pixel,
iterate over the morphs, starting at the one at the top, and going
through morphs behind it, etc...") requires a traversal of the Morphic
scene-graph for each pixel.  A CUDA program running on the GPU can't ask
Squeak for information about which morph is where... all of the GPU
processors will pile up behind this sequential bottleneck.  Do you have
some idea of how you would approach this?  It seems like you'd need to
generate (and perhaps cache) a CUDA-friendly data structure.
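
In case it helps, here's roughly the shape I imagine such a structure and
kernel taking (CUDA C; FlatMorph, renderPixels and the circles-only
geometry are all invented for the sketch, and it assumes the flat array
has already been built and uploaded from the Squeak side):

    /* Each visible morph flattened into a plain record the GPU can
       read directly; no calling back into Squeak from the kernel. */
    struct FlatMorph {
        float cx, cy, radius;    /* toy geometry: circles only */
        float r, g, b, a;        /* color and opacity */
    };

    __global__ void renderPixels(const FlatMorph *morphs, int morphCount,
                                 uchar4 *frame, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        /* Front-to-back "over" compositing, one pixel per thread,
           with an early out once the pixel is fully covered. */
        float accR = 0.0f, accG = 0.0f, accB = 0.0f, accA = 0.0f;
        for (int i = 0; i < morphCount && accA < 0.999f; ++i) {
            FlatMorph m = morphs[i];
            float dx = x - m.cx, dy = y - m.cy;
            if (dx * dx + dy * dy <= m.radius * m.radius) {
                float w = (1.0f - accA) * m.a;
                accR += w * m.r;
                accG += w * m.g;
                accB += w * m.b;
                accA += w;
            }
        }
        /* Whatever the morphs didn't cover shows the white background. */
        float bg = 1.0f - accA;
        frame[y * width + x] = make_uchar4(
            (unsigned char)(255.0f * (accR + bg)),
            (unsigned char)(255.0f * (accG + bg)),
            (unsigned char)(255.0f * (accB + bg)),
            255);
    }

The hard part isn't the kernel; it's producing that flat array on the
Squeak side and keeping it up to date as morphs change, which is why I
suspect caching it would be unavoidable.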

Stepping back, one of the main difficulties I have when thinking about
how rendering should work in Morphic 3 is that I don't really understand
what the end-user APIs will look like.  For example, suppose that I
program a graph that plots data against linear axes (each data point is
rendered as a circle).  If I want to instead plot data against
logarithmic axes, I can't simply render the whole graph with a different
transform, because the circles will no longer look like circles.  How
could this be avoided?  I can't see how. 
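
Just to put a number on that: take a data point at (10, 10) drawn as a
circle of radius 2 and push its outline through log10 axes.  The outline
ends up at unequal distances from the mapped center (roughly 0.08 towards
larger values, 0.10 towards smaller ones), so it isn't a circle any more.
A throwaway check, all numbers invented for illustration:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double pi = 3.14159265358979;
        double cx = 10.0, cy = 10.0, radius = 2.0;
        double lcx = log10(cx), lcy = log10(cy);
        for (int i = 0; i < 4; ++i) {
            double a = i * pi / 2.0;              /* 0, 90, 180, 270 deg */
            double x = cx + radius * cos(a);
            double y = cy + radius * sin(a);
            /* distance of the transformed outline point from the
               transformed center */
            printf("%3d deg: %.4f\n", i * 90,
                   hypot(log10(x) - lcx, log10(y) - lcy));
        }
        return 0;
    }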

I think that the idea of rendering entire scenes with arbitrary,
non-linear transforms is very cool.  However, I don't see how it would
be very useful in practice (for reasons like the example above). 
Hopefully, I'm just missing something, and you'll be able to explain it
to me.

Cheers,
Josh

>
> Thanks for your comments!
> Cheers,
> Juan Vuletich
>



