Casimiro de Almeida Barreto
casimiro.barreto at gmail.com
Wed Oct 28 20:23:20 UTC 2009
On 28-10-2009 15:24, Josh Gargus wrote:
> I agree with Casimiro's response... GPUs aren't suitable for running
> Smalltalk code. Larrabee might be interesting, since it will have 16
> or more x86 processors, but it's difficult to see how to utilize the
> powerful vector processor attached to each x86.
Here I see two opportunities. The first would be to follow Mr.
Ingalls' advice and start developing a generic VM and related classes
for parallel processing (something I think is long overdue, given how
long multicore processors have been around). IMHO, not handling SMP
also blocks any path to NUMA processing, where the advantages of
Smalltalk should be astounding.
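To see why VM work is unavoidable: today's Squeak Processes are green
threads multiplexed onto a single native thread. A minimal sketch,
using only the stock Process and Semaphore classes; the two blocks
below interleave, but the current VM never runs them on two cores at
once:

    | done |
    done := Semaphore new.
    [1 to: 1000000 do: [:i | i sqrt]. done signal]
        forkAt: Processor userBackgroundPriority.
    [1 to: 1000000 do: [:i | i ln]. done signal]
        forkAt: Processor userBackgroundPriority.
    2 timesRepeat: [done wait].  "blocks until both finish"

An SMP-aware VM could keep exactly this language-level API and simply
schedule the processes on real cores.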
The second is to give Squeak solid intrinsic vector-processing
capabilities, which would reopen the field of high-performance
applications in science and engineering, as well as more mundane ones
like the game industry.
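To make "intrinsic vector processing" concrete: Squeak already hints
at it with FloatArray, whose elementwise arithmetic is backed by
primitives. A toy dot product using nothing beyond the stock class:

    | a b |
    a := FloatArray withAll: #(1.0 2.0 3.0 4.0).
    b := FloatArray withAll: #(5.0 6.0 7.0 8.0).
    (a * b) inject: 0 into: [:sum :each | sum + each]  "answers 70.0"

"Solid" support would mean backing operations like that * with real
SIMD or GPU code in the VM instead of a scalar C loop.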
> Your question was more specifically about running something like Slang
> on it. It's important to remember that Slang isn't Smalltalk, it's C
> with Smalltalk syntax (i.e. all Slang language constructs are
> implemented by a simple 1-1 mapping onto the corresponding C language
> feature). So yes, it would be possible to run something like Slang on
> a GPU. Presumably, you would want to take the integration one step
> farther than with Slang, and automatically compile the generated
> OpenCL or CUDA code instead of dumping it to an external file.
> Instead of thinking of running Smalltalk on the GPU, I would think
> about writing a DSL (domain-specific language) for a particular class
> of problems that can be solved well on the GPU. Then I would think
> about how to integrate this DSL nicely into Smalltalk.
That's sort of my idea :)
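To make the Slang point above concrete, an illustrative method of my
own (the selector and the emitted C are a sketch, not taken from the
actual interpreter source); because every construct maps one-to-one
onto C, the same translator trick could emit OpenCL C instead:

    sumTo: n
        "Slang: Smalltalk syntax, C semantics. The translator
         would emit roughly:
             sqInt sumTo(sqInt n) {
                 sqInt sum = 0, i;
                 for (i = 1; i <= n; i += 1) sum += i;
                 return sum;
             }"
        | sum |
        sum := 0.
        1 to: n do: [:i | sum := sum + i].
        ^sum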
I'm not considering CUDA at the moment because it is specific to the
NVIDIA architecture. Currently the GPU market is shared mostly between
NVIDIA and AMD/ATI, and AMD says they won't support CUDA on their
GPUs. It's a pity, since last year it was reported that Radeon
compatibility with CUDA was almost complete. Besides, there are
licensing issues, and I just don't want to have "wrappers".
Obviously I know many of the problems CUDA and OpenCL have to deal
with: the variable number and size of pipelines, issues with numeric
representation and FP precision, and so on. And I know it would be
much easier just to write some wrappers or, easier yet, to develop
things in C/C++ and glue them in with FFI. But then, what would be
the gain for Squeak and the Smalltalk community?
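(For the record, the FFI-glue route would look roughly like this,
binding the first OpenCL entry point; the module name 'OpenCL' and
the exact type spellings are assumptions, and real bindings would
need platform-specific library names:

    clGetPlatformIDs: numEntries platforms: platforms numPlatforms: numPlatforms
        "cl_int clGetPlatformIDs(cl_uint, cl_platform_id*, cl_uint*);
         answers 0 = CL_SUCCESS on success"
        <cdecl: long 'clGetPlatformIDs' (ulong void* void*) module: 'OpenCL'>
        ^self externalCallFailed

A pile of such declarations is exactly the kind of "wrapper" I want
to avoid.)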
> Sean McDirmid has done something like this with C#, LINQ, HLSL, and
> Direct3D (http://bling.codeplex.com/). He's not doing GPGPU per se,
> but the point is how seamless his integration with C# is.