[squeak-dev] GPGPU

Christopher Hogan flipityskipit at hotmail.com
Thu Oct 29 16:04:02 UTC 2009


Interesting.

Does the HydraVM address some of those issues?

http://squeakvm.org/~sig/hydravm/devnotes.html

Chris Hogan




Date: Thu, 29 Oct 2009 13:14:20 -0200
From: casimiro.barreto at gmail.com
To: squeak-dev at lists.squeakfoundation.org
Subject: Re: [squeak-dev] GPGPU

On 29-10-2009 09:40, Christopher Hogan wrote:
> Hmm, could you just plop the VM on top of Barrelfish and let it do
> all the fancy multi-processor stuff for you?
>
> http://www.linux-magazine.com/Online/News/Barrelfish-Multikernel-Operating-System-out-of-Zurich
>
> http://www.barrelfish.org/

Nope.

Some issues: currently, in Squeak, there is no way to have objects
running/living independently on different processors (meaning, among
other things, that the SqueakVM is a kind of "microkernel" with very
limited preemption & synchronization mechanisms). Things get even
worse if you think of separate (disjoint) memory spaces. Even when you
fork, processes are not "run independently"; if you really want to run
things independently, you have to use CommandShell,
OSProcess/OSProcessPlugin, and OS pipelines.
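
For example, a minimal (untested) sketch using CommandShell's
PipeableOSProcess; selector details vary between OSProcess versions:

    "Fork a real OS process and collect its output through a pipe."
    | proc |
    proc := PipeableOSProcess command: 'ls -l /tmp'.
    Transcript show: proc output; cr.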



I think the VM should be re-engineered to allow instances running on
different processors/in different memory spaces and communicating via
some protocol. The challenges are big: synchronization, security,
performance optimization, garbage collection, etc.
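
As a toy illustration of "communicating via some protocol": one image
could ask another to evaluate an expression over a socket. This is a
sketch only (Squeak's Socket API, selector names from memory); a real
design would need object serialization, security, and failure
handling, not strings over TCP:

    | sock reply |
    sock := Socket newTCP.
    sock connectTo: (NetNameResolver addressForName: 'localhost')
         port: 9090.                  "the peer image listens here"
    sock sendData: '3 + 4'.           "request: evaluate remotely"
    reply := sock receiveData.        "the peer image answers '7'"
    sock closeAndDestroy.
    Transcript show: reply; cr.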



I thought of proposing something like this for a PhD (BTW, I really
did), but the people I know at university are so fond of Java & Python
& other "small stuff" like "jackpot projects"... :( So, if I am to
work on this, I will have to find a way of funding myself through
commercial projects... perhaps by leaving this entrepreneurial desert
called BR.



Cheers,

CdAB
> Date: Wed, 28 Oct 2009 18:23:20 -0200
> From: casimiro.barreto at gmail.com
> To: squeak-dev at lists.squeakfoundation.org
> Subject: Re: [squeak-dev] GPGPU
>
> On 28-10-2009 15:24, Josh Gargus wrote:
> > I agree with Casimiro's response... GPUs aren't suitable for running
> > Smalltalk code. Larrabee might be interesting, since it will have 16
> > or more x86 processors, but it's difficult to see how to utilize the
> > powerful vector processor attached to each x86.

> Here I see two opportunities. The first would be to follow the advice
> of Mr. Ingalls and start to develop a generic VM and related classes
> to deal with parallel processing (something I think is long overdue,
> since multicore processors have been around for quite a while). IMHO,
> not dealing with SMP processing prevents dealing with NUMA
> processing, where the advantages of Smalltalk should be astounding.
>
> The second is to provide Squeak with solid intrinsic vector-processing
> capabilities, which would reopen the field of high-performance
> applications in science and engineering, and also of more mundane
> applications like the game industry.

> >
> > Your question was more specifically about running something like
> > Slang on it. It's important to remember that Slang isn't Smalltalk,
> > it's C with Smalltalk syntax (i.e., all Slang language constructs
> > are implemented by a simple 1-1 mapping onto the corresponding C
> > language feature). So yes, it would be possible to run something
> > like Slang on a GPU. Presumably, you would want to take the
> > integration one step farther than with Slang, and automatically
> > compile the generated OpenCL or CUDA code instead of dumping it to
> > an external file.
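
To make that 1-1 mapping concrete, here is a minimal, untested sketch
of a Slang-style method, with roughly the C a translator would emit
for it noted in the comment:

    sumTo: n
        "Slang-style code: only constructs with direct C equivalents
         (no blocks as objects, no polymorphism). The generator would
         emit roughly:
           sqInt sumTo(sqInt n) {
               sqInt sum; sqInt i;
               sum = 0;
               for (i = 1; i <= n; i += 1) { sum += i; }
               return sum; }"
        | sum |
        sum := 0.
        1 to: n do: [:i | sum := sum + i].
        ^ sum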

> >
> > Instead of thinking of running Smalltalk on the GPU, I would think
> > about writing a DSL (domain-specific language) for a particular
> > class of problems that can be solved well on the GPU. Then I would
> > think about how to integrate this DSL nicely into Smalltalk.
>
> That's sort of my idea :)
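
For illustration only: such a DSL might let one write array
expressions in ordinary Smalltalk and compile them to an OpenCL kernel
behind the scenes. GPUArray and its selectors below are hypothetical
(no such package exists in Squeak); this is a sketch of the idea, not
a design:

    "The message sends build an expression tree; compiling it once
     could yield an OpenCL kernel along the lines of:
       __kernel void f(__global float *a, __global float *b,
                       __global float *out) {
           int i = get_global_id(0);
           out[i] = (a[i] * 2.0f) + b[i]; }"
    | a b result |
    a := GPUArray fromCollection: #(1.0 2.0 3.0 4.0).
    b := GPUArray fromCollection: #(5.0 6.0 7.0 8.0).
    result := a * 2 + b.    "evaluated on the GPU, not element by element"
    Transcript show: result asArray printString; cr.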

> I'm not considering CUDA at the moment because it would be more
> specific to the NVIDIA architecture. Currently the GPU market is
> shared mostly between NVIDIA and AMD/ATI, and AMD says they won't
> support CUDA on their GPUs (see
> http://www.amdzone.com/index.php/news/video-cards/11775-no-cuda-on-radeon
> as an example). It's a pity, since last year it was reported that
> Radeon compatibility in CUDA was almost complete. Besides, there are
> licensing issues, and I just don't want to have "wrappers".
>
> It's obvious that I know many of the problems dealt with by CUDA and
> OpenCL: the variable number and size of pipelines, problems with
> numeric representation and FP precision, etc. And I know it would be
> much easier just to write some wrappers or, easier yet, to develop
> things in C/C++ and glue them with FFI. But then, what would be the
> gain for Squeak & the Smalltalk community?

> >
> > Sean McDirmid has done something like this with C#, LINQ, HLSL, and
> > Direct3D (http://bling.codeplex.com/). He's not doing GPGPU per se,
> > but the point is how seamless his integration with C# is.
> >
> > Cheers,
> > Josh
>
> Best regards,
>
> CdAB
  
