Has anyone succesfully installed squeak on RH 8.0?

Jim.Gettys at hp.com
Sat Jan 11 03:24:01 UTC 2003



> Sender: squeak-dev-admin at lists.squeakfoundation.org
> From: "Andreas Raab" <andreas.raab at gmx.de>
> Date: Sat, 11 Jan 2003 00:30:24 +0100
> To: <squeak-dev at lists.squeakfoundation.org>,
>        "'Bert Freudenberg'" <bert at isg.cs.uni-magdeburg.de>
> Subject: RE: Has anyone succesfully installed squeak on RH 8.0?
> -----
> Jim,
>
> > Depends what you are doing: if you are doing serious 3D: do OpenGL.
>
> Certainly true but...
>
> > The issue comes up in 2D graphics, though, that it tends to be painted
> > once, and people stare at the results for a while.
>
> ... with reasonable support on the platform, this is not an issue
> anymore because you can update only the portions needed. However, what's
> even better is if you have support for both render-to-texture and AGP
> client-side textures (Apple has both). In this case you can cache your
> "interesting" objects (aka everything that tends to change often) in a
> client-side texture - being virtually unlimited by VRAM constraints,
> render as fast as you can do "repaints", and still be able to do all
> sorts of "weird pixel manipulations" (since the texture is client-side
> you _can_ access the pixels, heh, heh - it's a little more expensive
> than usual but _way_ faster than copying the pixels from/to the graphics
> card). Unfortunately, the necessary extensions haven't made their way
> into the standard yet.
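
For concreteness, the caching idea Andreas describes looks roughly like
this in plain OpenGL 1.x.  This is only a sketch with made-up function
names; the render-to-texture and AGP client-storage extensions he mentions
are vendor-specific and aren't shown here.

/* Keep a frequently-changing 2D surface cached in a texture, push only
 * the dirty sub-rectangle with glTexSubImage2D, and repaint by drawing
 * a textured quad.  Core GL 1.x only; note that pre-2.0 GL requires
 * power-of-two texture dimensions. */
#include <GL/gl.h>

static GLuint cache_tex;

void cache_create(int w, int h, const void *pixels)
{
    glGenTextures(1, &cache_tex);
    glBindTexture(GL_TEXTURE_2D, cache_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

/* Upload only the pixels that changed since the last repaint. */
void cache_update(int x, int y, int w, int h, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, cache_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

/* Repaint: draw the cached surface as a textured quad. */
void cache_draw(float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, cache_tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}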

No matter how fast you hope to render into host memory, it will continue
to be much slower than rendering into display memory, on screen or off
screen, on the graphics card.  The card has many more wires, and more
bandwidth by a large factor than what you have on the CPU side.

Example: with the nice alpha compositing in software we now have, compared
with very early hardware-assisted implementations, the difference can
be a factor of 20 or 30 or more in performance.  This was actually
somewhat higher than we first expected...

The amazing thing is that current CPUs are "fast enough" to do it in
software for a large class of operations.
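
For concreteness, the kind of per-pixel work the CPU ends up doing for a
software "over" composite looks roughly like this -- a sketch on
premultiplied 32-bit ARGB pixels, not the actual Render/fb code:

/* Porter-Duff "over" on premultiplied ARGB: dst = src + dst*(1 - srcA). */
#include <stdint.h>

/* Exact rounded c*a/255 without a divide. */
static uint32_t mul_div255(uint32_t c, uint32_t a)
{
    uint32_t t = c * a + 0x80;
    return (t + (t >> 8)) >> 8;
}

void composite_over(uint32_t *dst, const uint32_t *src, int n)
{
    int i, shift;
    for (i = 0; i < n; i++) {
        uint32_t s = src[i];
        uint32_t d = dst[i];
        uint32_t ia = 255 - (s >> 24);        /* 1 - source alpha */
        uint32_t r = 0;
        for (shift = 0; shift < 32; shift += 8) {
            uint32_t sc = (s >> shift) & 0xff;
            uint32_t dc = (d >> shift) & 0xff;
            /* sum cannot exceed 255 for valid premultiplied input */
            r |= (sc + mul_div255(dc, ia)) << shift;
        }
        dst[i] = r;
    }
}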

>
> [Which reminds me: What OpenGL version is supported "by default" these
> days on Linux?! Still at 1.2?]

Seems to be 1.2, at least on the Mesa version.
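
If you want to see what a given installation actually reports, glGetString
is the standard way to check at run time; this little sketch assumes a GL
context has already been made current (e.g. via GLX), which is omitted:

#include <stdio.h>
#include <GL/gl.h>

/* Print the version, renderer and vendor strings the driver reports. */
void print_gl_info(void)
{
    printf("GL_VERSION : %s\n", (const char *) glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
    printf("GL_VENDOR  : %s\n", (const char *) glGetString(GL_VENDOR));
}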

>
> > As I understand it, OpenGL will tend to have "interesting" artifacts
> > at the boundaries between triangles: these tend not to be noticed
> > in 3D applications due to the rapid repaint, but are often highly
> > objectionable in 2D apps.
>
> Where have you heard this?! I have been using techniques like this for
> many years and "boundaries between triangles" never were a problem (at
> least for planar configurations and compliant implementations). The only
> place where I have in fact noticed artifacts was due to clipping effects
> (round-off problems in particular if not done by the hardware - much of
> this was solved when T&L hardware came along). This used to be a
> problem for partial redraws "back in the old days" when graphics cards
> didn't support stencil buffers. But these days you just use stencil
> buffers and they will clip on a per-fragment basis and won't have any of
> those artifacts.
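
For reference, the stencil clipping he describes comes down to something
like the following in plain OpenGL -- a sketch only, assuming a context
created with stencil bits; draw_clip_region() and draw_scene() are
placeholders, not real API:

#include <GL/gl.h>

void draw_clip_region(void);   /* placeholder: fills the damaged region */
void draw_scene(void);         /* placeholder: the normal repaint */

void redraw_clipped(void)
{
    /* Pass 1: mark the clip region in the stencil buffer, leaving the
     * color buffer untouched. */
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 1);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    draw_clip_region();

    /* Pass 2: redraw normally, but only where the stencil is 1 --
     * the clip is applied per fragment, with no edge artifacts. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_scene();

    glDisable(GL_STENCIL_TEST);
}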

Whether or not this is really true is an interesting discussion.  When
Render really got started up 18 months ago, I did not see Alan Akin (who
is very much a 3D person and participated in a lot of the discussions)
dispute this claim; but as I said, I'm not an OpenGL/3D person myself.
The intent was to do sane 2D graphics without falling down the slippery
slope of replicating huge amounts (or much at all) of what you find in
OpenGL.

Another major issue is that devices like PDAs aren't going to see usable
OpenGL implementations in any finite time: the hardware lacks both floating
point and any way to get much hardware assist at all.  The approaches
being pursued in the Render work are, however, amenable to all platforms,
and are not FP intensive.
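
For example, the Render protocol works in 16.16 fixed point rather than
floating point; the arithmetic it needs is roughly the following -- a
sketch, not the actual library code:

#include <stdint.h>

typedef int32_t fixed_t;                  /* 16.16 fixed point */

#define INT_TO_FIXED(i)  ((fixed_t)(i) << 16)
#define FIXED_TO_INT(f)  ((int)((f) >> 16))

static fixed_t fixed_mul(fixed_t a, fixed_t b)
{
    return (fixed_t)(((int64_t)a * b) >> 16);
}

static fixed_t fixed_div(fixed_t a, fixed_t b)  /* assumes b != 0 */
{
    return (fixed_t)(((int64_t)a << 16) / b);
}

/* Example: the x coordinate of a polygon edge at scanline y, computed
 * entirely in integer arithmetic (assumes y1 != y0). */
fixed_t edge_x_at(fixed_t x0, fixed_t y0, fixed_t x1, fixed_t y1, fixed_t y)
{
    fixed_t slope = fixed_div(x1 - x0, y1 - y0);
    return x0 + fixed_mul(slope, y - y0);
}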

The limitations on PDAs and low-end appliances won't change in the next 2-4
years, in my opinion.  I once optimistically predicted that by around 1992
we'd all have (at least) 16-bit truecolor displays; it took until about the
year 2000 for this to be mostly true.  I think we are in a similar situation
with 3D.  In that case, we missed that people would find 8-bit color
"good enough" and keep going for cheaper and cheaper hardware.

Not only is this true for 3D, but batteries in these devices add another
huge constraint: very little power.

My gut feeling is that 3D will be similarly slow: on battery-powered devices
you aren't just bound by memory costs, but also by power; this implies that
a 3D graphics chip just isn't going to happen by taking desktop chips and
putting them in PDAs.

It's going to be slow for serious 3D support to appear in that class of
device, much as it might be nice....

>
> > BTW, Keith Packard and Carl Worth are working on a client library for
> > rendering that will either push pixels or use Render when it is done;
> > you might think of using it (along with the really gorgeous spline
> > stuff they are doing) with different back ends.  This is to allow apps
> > to convert to a new API and not care if the server has Render or not.
> > I suspect this is 6 months from completion.
>
> Interesting. Will there be versions for Windows, Mac etc. as well?!

If someone does it...  Given that one code path goes to pixels, the rendering
clearly has to switch between local pixels to be pushed vs. letting the
window system do it for you, so it should be adaptable to other window
systems.
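
Hypothetically, the split looks something like this -- none of these names
come from their actual API, it just sketches the idea of one drawing
interface with two back ends (local pixels vs. the window system):

#include <stdint.h>

/* A table of drawing operations; one implementation renders into a local
 * pixel buffer that gets pushed afterwards, another hands the operation
 * to the window system (e.g. the Render extension). */
typedef struct backend {
    void (*fill_rect)(struct backend *be, int x, int y, int w, int h,
                      uint32_t argb);
    void (*flush)(struct backend *be);    /* push local pixels, or no-op */
    void *priv;                           /* pixel buffer or server handles */
} backend_t;

/* Application code draws through the table and never has to know whether
 * the server supports Render. */
void draw_button(backend_t *be, int x, int y, int w, int h)
{
    be->fill_rect(be, x, y, w, h, 0xff4040c0);
    be->flush(be);
}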

But I suspect some X'isms would be in the current API, though they are making
it look very PostScript-ish...  Things are still early enough to influence
at this date.  Right now, they've got a very nice SVG viewer going on this
(mostly correct), and Keith's been putting up the xpdf program on it.

First priority is to complete this under X, to speed acceptance of the
new rendering model.  But if someone wants to see if it is feasible to
use in other environments, now is the time to float the idea.
                          - Jim


--
Jim Gettys
Cambridge Research Laboratory
HP Labs, Hewlett-Packard Company
Jim.Gettys at hp.com

