Croquet alpha release(s?)
David A. Smith
davidasmith at bellsouth.net
Fri Dec 6 21:32:37 UTC 2002
Actually, the graphics chip manufacturers are moving in the direction of
general purpose massively parallel hardware. One of the goals of our
project is to tap into that kind of potential to go beyond graphics
computation. The VPU access is a start, but to be honest, the real win will
come later on. Our matrix extension is intended to add general-purpose
capabilities similar to those available in APL, with real performance. This
kind of capability is needed not just for 3D graphics, but for any kind of
serious scientific simulation.
At 12:03 PM 12/6/2002 +0100, you wrote:
>On Fri, 6 Dec 2002, Joshua D Boyd wrote:
> > On Thu, Dec 05, 2002 at 03:03:08AM -0500, Joshua 'Schwa' Gargus wrote:
> > > Tell me more! Arbitrary n-D matrix operations will be hardware
> > > accelerated, yet accessed through a nice (extended) Smalltalk
> > > interface? Cool!
>Actually I think David was referring to Vector Processing Units. This is
>AltiVec on PowerPC, MMX/SSE on Intel, 3DNow! on AMD, etc (?). So the
>discussion goes a little bit in the wrong direction, but it's interesting.
> > > I understand that consumer graphics cards generally have bad
> > > performance when it comes to reading data back from the card (ie:
> > > glReadPixels). Has this changed (or will it change) with the latest
> > > hardware (Radeon 9700 and NV30)?
>Not fundamentally, I think.
> > > Could you give a general idea of the bandwidth
> > > available? The reason I ask is that I can imagine speeding up things
> > > like genetic algorithms. Ideally, the objective function could be
> > > compiled to run on the graphics hardware, but realistically it would
> > > probably often require the CPU (please tell me I'm wrong). In this
> > > case, bandwidth from the VPU to the CPU becomes important.
>As I said, VPU means the on-processor vector units. Generally the graphics
>processor is referred to as the "GPU". Only recently have some companies
>(3Dlabs, ATI) begun to name their boards "Visual Processing Units",
>probably to distinguish themselves from NVIDIA, which causes confusion
>with the term.
> > I've had good luck with reading vertices and matrices back from Geforce3
> > and Quadro4 cards. Never tried using glReadPixels.
>Graphics boards are optimized for getting data _to_ the screen, not back.
>In fact, there are "pure" modes where your application promises the
>graphics subsystem that it will never read back any state, which allows
>the driver to optimize its internal processing much more aggressively.
>OTOH, newer drivers have considerably sped up read-back rate for recent
>NVIDIA boards. It's just that hardware vendors put much more effort into
>optimizing the common paths first. If you really need fast read-back
>capabilities you have to look in the professional market - one (if not the
>only) selling point of SGI machines today is _bandwidth_.
> > It used to be that the trick to getting good graphics out of an Indigo2
> > Impact was to push as much work as possible onto the graphics subsystem,
> > and, surprisingly, to cache some of the state of the graphics subsystem
> > to minimize the number of glEnable and glDisable calls. I can't say
> > that this is really the best thing to do these days.
>It still is.