oo hardware (was: Why so few garage processors?)

Swan, Dean Dean_Swan at Mitel.COM
Fri Mar 21 19:50:22 UTC 2003


This prompts a few comments from me:

	1) FPGA to ASIC conversion has been readily available
	   for a long time, and it does offer the benefits that
	   Dan mentions, BUT... it still involves fairly high
	   NREs (non-recurring engineering charges) in the
	   neighborhood of $20k to $100k (USD), and minimum
	   orders of at least 5000 pieces.  The per-piece cost
	   will vary widely depending on packaging and testing:
	   bare, untested dies will be the cheapest, and
	   packaged, fully tested chips will be the most
	   expensive.

	   I don't think the ASIC route is realistic unless it's
	   for commercial (for profit) purposes.

	2) While today's high-end C chips are quite good, they are
	   all basically the same architecture: superscalar von
	   Neumann machines with multiple levels of cache.  The
	   single memory bus is a fairly big choke point limiting
	   performance.

	   A simple variation on this (i.e. a dual-bus Harvard
	   architecture) can easily yield a dramatic performance
	   improvement.  Simply using separate memory buses for
	   code and data can do amazing things for throughput.

	   This could be why nearly every commercial DSP uses this
	   architecture.  I'm sure there are other architectural
	   ideas that could be explored as well.  My point is that
	   while today's high-end CPUs are quite good, they are
	   limited by their history.  (A rough Verilog sketch of
	   the dual-bus idea follows this point.)
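
	   To make that concrete, here's a rough Verilog sketch of a
	   core with separate instruction and data buses.  It's only
	   an illustration - the opcode encoding, bus widths, and
	   names are all made up, and it's nowhere near a complete
	   CPU - but it shows a fetch and a data access proceeding
	   on the two buses in the same cycle, where a single-bus
	   machine would have to take turns.

	   module harvard_core (
	       input  wire        clk,
	       input  wire        reset,
	       // instruction bus: one fetch per cycle
	       output reg  [13:0] i_addr,
	       input  wire [15:0] i_data,
	       // separate data bus: loads/stores go here,
	       // without ever stealing a fetch cycle
	       output reg  [13:0] d_addr,
	       output reg  [15:0] d_wdata,
	       output reg         d_we,
	       input  wire [15:0] d_rdata
	   );
	       reg [15:0] acc;   // one accumulator, just to have a datapath

	       always @(posedge clk) begin
	           if (reset) begin
	               i_addr  <= 14'd0;
	               d_addr  <= 14'd0;
	               d_wdata <= 16'd0;
	               d_we    <= 1'b0;
	               acc     <= 16'd0;
	           end else begin
	               // keep fetching: one instruction word per clock,
	               // no matter what the data bus is doing
	               i_addr <= i_addr + 14'd1;

	               // toy format: 2-bit opcode, 14-bit operand
	               d_we <= 1'b0;
	               case (i_data[15:14])
	                   2'b00: d_addr <= i_data[13:0];  // LOAD: present the address;
	                                                   // data shows up on d_rdata
	                                                   // the next cycle
	                   2'b01: begin                    // STORE acc to memory
	                       d_addr  <= i_data[13:0];
	                       d_wdata <= acc;
	                       d_we    <= 1'b1;
	                   end
	                   2'b10: acc <= acc + {2'b00, i_data[13:0]}; // ADD immediate
	                   2'b11: acc <= d_rdata;          // latch the last loaded word
	               endcase
	           end
	       end
	   endmodule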


	3) Regarding distributed OOP:  I think an obvious solution
	   is one CPU per object, with one or more shared high-speed
	   communication paths to send messages and return results.

	   This could reasonably be explored on FPGA-based hardware;
	   a rough sketch of one node of such an interconnect
	   appears at the end of this note.  Using a *really* simple
	   CPU design, you could fit a lot of them on today's
	   million-gate FPGAs.

	   I have often said that I would rather have a lot of slower
	   CPUs than one really fast one.  If the human brain can do
	   all the wonderful things it does with a peak signal
	   frequency of around 1 kHz, there must be something to this
	   massive parallelism concept.

	   As Alan mentioned, Fuchs was really on to something with
	   his pixel processor idea.  Too bad it hasn't caught on.

	   Sadly, computer science has paid terribly little attention
	   to massive parallelism.  Rumelhart and McClelland made
	   some noise about this back in the mid-'80s, but it has
	   remained a niche field.  Then there were the Transputer,
	   Hillis's Connection Machine, and others, but we've never
	   really developed good tools for writing parallel programs.
	   The closest widely used things we have are VHDL and
	   Verilog.

	   (This just prompted an odd thought: how about a
	    Smalltalk-to-digital-logic compiler?  Does that make
	    any sense?

	    After all, any CPU-based architecture is always going to
	    be sub-optimal compared to equivalent random logic.)
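
	   And to give a feel for the interconnect part of point 3,
	   here's a rough sketch of one node of a message ring that
	   many tiny per-object CPUs could hang off of.  Again, this
	   is only an illustration - the 8-bit destination ID, the
	   32-bit message format, and all the names are invented -
	   but each node is small enough that replicating it a few
	   hundred times on a big FPGA isn't crazy.

	   module ring_node #(
	       parameter [7:0] MY_ID = 8'd0
	   ) (
	       input  wire        clk,
	       input  wire        reset,
	       // link from the previous node in the ring
	       input  wire        in_valid,
	       input  wire [31:0] in_msg,    // [31:24] dest ID, [23:0] payload
	       // link to the next node in the ring
	       output reg         out_valid,
	       output reg  [31:0] out_msg,
	       // delivery port to the local (per-object) CPU
	       output reg         local_valid,
	       output reg  [23:0] local_payload,
	       // injection port from the local CPU
	       input  wire        inject_valid,
	       input  wire [31:0] inject_msg,
	       output wire        inject_ready
	   );
	       // the outgoing slot is free for injection unless we are
	       // busy forwarding someone else's message this cycle
	       assign inject_ready = !(in_valid && (in_msg[31:24] != MY_ID));

	       always @(posedge clk) begin
	           if (reset) begin
	               out_valid     <= 1'b0;
	               out_msg       <= 32'd0;
	               local_valid   <= 1'b0;
	               local_payload <= 24'd0;
	           end else begin
	               // defaults: deliver nothing, forward nothing
	               local_valid <= 1'b0;
	               out_valid   <= 1'b0;

	               if (in_valid && (in_msg[31:24] == MY_ID)) begin
	                   // addressed to us: hand it to the local CPU
	                   local_valid   <= 1'b1;
	                   local_payload <= in_msg[23:0];
	               end else if (in_valid) begin
	                   // addressed to someone else: pass it along
	                   out_valid <= 1'b1;
	                   out_msg   <= in_msg;
	               end

	               if (inject_ready && inject_valid) begin
	                   // the slot is free, so send the local CPU's message
	                   out_valid <= 1'b1;
	                   out_msg   <= inject_msg;
	               end
	           end
	       end
	   endmodule

	   Chain N of these head to tail, pair each one with a really
	   simple CPU and a little local memory, and you have at least
	   enough of the one-CPU-per-object arrangement above to start
	   experimenting with.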

							-Dean

> -----Original Message-----
> From: Dan Ingalls [mailto:Dan at SqueakLand.org]
> Sent: Thursday, March 20, 2003 7:58 PM
> To: The general-purpose Squeak developers list
> Subject: Re: oo hardware (was: Why so few garage processors?)
> 
> 
> Jecel Assumpcao Jr <jecel at merlintec.com>  wrote...
> 
> >Now suppose you can get a competitive solution working on an FPGA. If
> >somebody comes up with a better idea, just patch your VHDL and
> >recompile! And you get to ride Moore's law like all the big 
> guys since
> >faster and cheaper FPGAs will be coming out every year.
> 
> I'm sure Jecel knows this but, to those who have just learned 
> what FPGAs are and who are excited about them, the 
> double-whammy is that, once you have one that works, there is 
> a whole industry built around the ability to take just about 
> any FPGA pattern, and "compile" it into *real* gate arrays.  
> These tend to be much more compact (so you can make even 
> bigger ones), to run 5-20 times faster, and to use less power as well.
> 
> The other thing I have to say on this thread is a bit against 
> the flow.  Namely, today's screaming C chips are not all that 
> bad for OOP.  Consider what the best JIT VMs do, and ask 
> yourself how much better you think you can do.  You really 
> have to get specific about what you think is needed and how 
> much time you think it would save.  Given how well a good JIT 
> can do, it may make more sense to put those architectural 
> brain cells to work on the problem of distributing OOP in a 
> manner that many processors can work together.  (Or finishing 
> Jitter 5 ;-).
> 
> 	- Dan
> 


