I hesitate to go into the issues for lack of time, etc. I used to say that "HW is just SW crystallized early". The first computers didn't have index registers but did direct address modification. Some of the real disasters in personal computers were done because the hackers didn't want to use address spaces or the CPUs couldn't support them, etc.
Has no one noticed that graphics accelerators actually work? Etc. So HW choices make a huge difference.
The real question is: what is a really good computation model for the VHLL, and what of it could be made into special HW?
In my way of looking at objects and messages, they were supposed to be SW versions of how the ARPAnet and Internet were going to work: i.e. massive numbers, totally encapsulated, loosely coupled by pure messaging, and simultaneously active. Well designed hardware could make a huge difference on each of these compared to SW on a stupid CPU.
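That model -- massive numbers of totally encapsulated entities, loosely coupled by pure messaging, each simultaneously active -- can be sketched in software today. Below is a minimal, illustrative Python sketch (all names are mine, not from any PARC system): each object owns a private mailbox, runs on its own thread, and is reachable only by sending it messages, never by touching its state directly.

```python
import queue
import threading

class Actor:
    """An encapsulated object reachable only through its mailbox."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, selector, *args):
        """Pure messaging: callers never touch internal state directly."""
        self._mailbox.put((selector, args))

    def _run(self):
        # Messages are handled one at a time, so internal state
        # is never shared between threads.
        while True:
            selector, args = self._mailbox.get()
            getattr(self, selector)(*args)  # dispatch on the selector

class Counter(Actor):
    def __init__(self):
        self.count = 0
        self.done = threading.Event()
        super().__init__()  # start the mailbox loop last

    def increment(self, n):
        self.count += n

    def stop(self):
        self.done.set()

counter = Counter()
for _ in range(100):
    counter.send("increment", 1)
counter.send("stop")
counter.done.wait()
print(counter.count)  # 100
```

Of course, this is exactly the kind of software emulation the surrounding text laments: every "message" here is a queue operation and a thread context switch on a conventional CPU, which is the overhead that suitable hardware could remove.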
One of the several reasons we don't have a really decent OOPL today is the lack of suitable HW to run it.
The middle ground that was so successful at PARC -- and that made up for the many things we didn't understand -- was the wonderful microcode architecture of the Alto and the subsequent machines we designed and built. Today, FPGAs can fill some of this role.
Butler Lampson has estimated that today's architectures are about 1000 times less efficient, pound for pound, than the Alto. That's 10 doublings (2^10 ≈ 1000), which at roughly 18 months per doubling equals 15 years of Moore's Law -- meaning that what is computed today with all this straining could have been done 15 years ago, and what we could be doing today will perhaps not be possible for another 15 years. In other words, Moore's Law applied to a bad design doesn't make up for a good design in custom HW.
Just to pick one item -- suppose that instead of faking message passing with subroutine calls, the HW could enable real message passing. Or better yet, what if we could run a publish-and-subscribe matcher with parameters (like a modern version of LINDA) at the same speed as today's HW-assisted subroutine calls? And what if these were between truly encapsulated objects that were the lowest-level entities on the computer?
Cheers,
Alan
At 10:02 AM 5/24/2007, Ralph Johnson wrote:
On 5/24/07, Alan Kay <alan.kay@squeakland.org> wrote:
Hi Ralph --
I don't think either Ungar's architecture or the criticism of it below address the most important issues of making VHL object-oriented computer hardware (both then and now).
I would be interested in knowing what issues you think are the most important.
The thesis is Ungar's, but the architecture is not. I think the architecture was a group project by Patterson's students.
The point of the thesis is that a lot of proposed hardware features don't help performance very much, and in fact there are software solutions that are more effective. This is a good point. However, it doesn't mean that no hardware features can help. Moreover, it assumes that hardware is expensive and software is cheap, and in fact the opposite is true. People have been working on a JIT compiler for Squeak for some time, and we aren't using one yet. It is easy to say "just put it in the compiler", but it might be too complex to ever get the compiler working.
What do you think are the main issues?
-Ralph Johnson
On Friday 25 May 2007 12:39 am, Alan Kay wrote:
Just to pick one item -- suppose that instead of faking message passing with subroutine calls, the HW could enable real message passing.
Not really hardware, but QNX is a network-ready kernel that uses message passing exclusively. It proves that message-passing systems can be compact and efficient even on crufty :-) PCs. It is so compact that an entire dial-up internet appliance with GUI, vector graphics and network-ready browser was packed into a 1.44MB bootable floppy[1]!
[1] ftp://ftp.qnx.com/usr/free/qnx4/os/demos/misc/qnxdemo.tgz
Regards .. Subbu
squeak-dev@lists.squeakfoundation.org