I am back from a nice vacation (my first since 1982...) and see that you have all been waiting patiently for me to return before starting any interesting discussions ;-)
One interesting side effect of this trip is that I now have a new FPGA development kit - the ML401 from Xilinx. It has some limitations (like the 50MHz video and soldered 64MB of SDRAM) compared to newer kits but makes a far nicer Smalltalk computer than what I had been using up to now. The FPGA is about twice as large as what I plan to use in the Merlin 6 board I am designing and the Virtex 4 FPGAs allow designs to run at twice the clock of what the low end Spartan 3 can deliver. For the next couple of months I will focus on this board so I will return the Spartan 3 Starter Kit that I had borrowed from my university.
Besides a FAQ I should probably set up a "Frequent Suggestions" page for my project so people don't waste their time giving me advice I have heard many times before. Certainly the number one on that page would be "some people have already tried to make a language specific computer... using Lisp or something like that... and it was a horrible failure, so you should do a language neutral processor instead or just use what is commercially available".
The problem with this advice is that it assumes that a "language neutral processor" exists and is a good thing. If we define that to mean "a processor so primitive and horrible that no high level language is practical on it, so you limit yourself to assembly" then I have seen a few. Other than that, I have never seen such a beast since the 8 bit microprocessor days (or early mainframes or early minis - each generation recapitulated what the previous one had done, so it would be redundant to tell the same story three times).
The 8086 was a mix of features to support the assembly programmer and to deal with Pascal. The segments, the frame pointer and similar stuff made it trivial to implement the UCSD virtual machine on it or have Turbo Pascal generate code for it. Meanwhile, Intel's 32 bit iAPX432 started life as a PL/I+objects machine and was converted into an Ada+objects processor when that language started to make headlines (even before it was defined!) and some of what was learned in that design was applied to the 80286 (which was even renamed iAPX286 to reflect this). Unfortunately for Intel the C/Unix virus was taking over the market and their pride and joy couldn't run C. Oh, it could run something very much like C with near and far pointers and other horrors or it could run real C if you contented yourself with 128KB or less, but you couldn't just take some application running on the VAX and recompile it for the 286.
Motorola had done something halfway between a PDP-11 and a VAX in its 68000 processor. So even though it put a lot of effort into Pascal (see the LINK and UNLINK instructions, for example) it naturally did well with the "portable PDP-11 assembly" that is C. But they got scared by all the Ada/Modula-2 noise Intel and National were making and made the 68020 a Modula-2 machine to compete with National's 32032. Except that the market was interested in C, so Motorola actually removed the extra features from the 68030. And nobody noticed :-)
Intel decided that "if you can't beat them, join them" with its 386 processor. With some very well thought out hacks it was a hybrid capable of dealing with C and Unix as well as its competitors while at the same time running all the old code, including what really mattered through the new "virtual 8086" mode. Meanwhile, some university people figured out that these C and assembly processors were more expensive and slower than what a pure C processor would be and soon their RISCs were putting commercial CPUs to shame. Intel split its 486 into a pure C part and an extras part in order to keep up, and followed NexGen's (bought by AMD) lead into the core RISC plus translation style with its Pentium Pro (also released as the Pentium II, Pentium III and Pentium M, not to mention Celerons and stuff like that). The Pascal and iAPX432 stuff is still in our x86 processors today (as is the BCD math from the 8080 days) but the designers don't put any effort into them, so it is far faster to do the same thing with lots of C instructions instead.
Now that we have a world where every OS is Unix (yes, I even count Windows NT in that crowd), every language is C or a front end for it and every PC has an x86 processor, it is no wonder that people can't imagine alternatives, nor even imagine why you might want to imagine alternatives. I have heard many times in the past couple of years that "C is the actual language of the machine, so even if you do stuff in other languages it has to be translated into C or you need short fragments of C code". I normally point out "just try clearing the cache from C" but in a way they are right. But I don't agree that this is how things have to be.
Oh, and about those Lisp Machines: they died off, but so did the VAX, which was as "language neutral" as anything we have today.
-- Jecel