<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 10pt;
font-family:Verdana
}
--></style>
</head>
<body class='hmmessage'>
Hmm, could you just plop the VM on top of Barrelfish and let it do all the fancy multi-processor stuff for you?<br><br>http://www.linux-magazine.com/Online/News/Barrelfish-Multikernel-Operating-System-out-of-Zurich<br><br>http://www.barrelfish.org/<br><br><br><br>Chris Hogan
<br><br><br><br>> Date: Wed, 28 Oct 2009 18:23:20 -0200<br>> From: casimiro.barreto@gmail.com<br>> To: squeak-dev@lists.squeakfoundation.org<br>> Subject: Re: [squeak-dev] GPGPU<br>> <br>> On 28-10-2009 15:24, Josh Gargus wrote:<br>> > I agree with Casimiro's response... GPUs aren't suitable for running<br>> > Smalltalk code. Larrabee might be interesting, since it will have 16<br>> > or more x86 processors, but it's difficult to see how to utilize the<br>> > powerful vector processor attached to each x86.<br>> Here I see two opportunities. The first would be to follow the advice of<br>> Mr. Ingalls and start developing a generic VM and related classes to<br>> deal with parallel processing (something I think is long overdue, since<br>> multicore processors have been around for such a long time). IMHO,<br>> not dealing with SMP processing prevents dealing with NUMA processing,<br>> where the advantages of Smalltalk should be astounding.<br>> <br>> The second is to provide Squeak with solid intrinsic vector-processing<br>> capabilities, which would reopen the field of high-performance<br>> applications in science and engineering, and also more mundane<br>> applications like the game industry.<br>> ><br>> > Your question was more specifically about running something like Slang<br>> > on it. It's important to remember that Slang isn't Smalltalk; it's C<br>> > with Smalltalk syntax (i.e. all Slang language constructs are<br>> > implemented by a simple 1-1 mapping onto the corresponding C language<br>> > feature). So yes, it would be possible to run something like Slang on<br>> > a GPU. 
Presumably, you would want to take the integration one step<br>> > farther than with Slang, and automatically compile the generated<br>> > OpenCL or CUDA code instead of dumping it to an external file.<br>> ><br>> > Instead of thinking of running Smalltalk on the GPU, I would think<br>> > about writing a DSL (domain-specific language) for a particular class<br>> > of problems that can be solved well on the GPU. Then I would think<br>> > about how to integrate this DSL nicely into Smalltalk.<br>> <br>> That's sort of my idea :)<br>> <br>> I'm not considering CUDA at the moment because it would be too specific<br>> to NVIDIA's architecture. Currently the GPU market is shared mostly<br>> between NVIDIA and AMD/ATI, and AMD says they won't support CUDA on their<br>> GPUs (see<br>> http://www.amdzone.com/index.php/news/video-cards/11775-no-cuda-on-radeon as<br>> an example). It's a pity, since last year it was reported that RADEON<br>> compatibility in CUDA was almost complete. Besides, there are licensing<br>> issues, and I just don't want to have "wrappers".<br>> <br>> Obviously, I know many of the problems dealt with by CUDA and OpenCL:<br>> the variable number and size of pipelines, problems with numeric<br>> representation and FP precision, and so on. And I know it<br>> would be much easier just to write some wrappers or, easier yet, to<br>> develop things in C/C++ and glue them with FFI. But then, what would be<br>> the gain to Squeak and the Smalltalk community?<br>> ><br>> > Sean McDirmid has done something like this with C#, LINQ, HLSL, and<br>> > Direct3D (http://bling.codeplex.com/). He's not doing GPGPU per se,<br>> > but the point is how seamless his integration with C# is.<br>> ><br>> > Cheers,<br>> > Josh<br>> ><br>> Best regards,<br>> <br>> CdAB<br>> <br>
</body>
</html>