"David M. Siegel" wrote:
At 09:42 PM 8/30/2001 +0100, you wrote:
I'm not sure I fully understand people's interests here, and I'm not sure I'm knowledgeable enough to form my questions well. --Concerning speed: I'd expect the direction here needs to be more architectural than language-specific. Even C on the fastest machines available over the next few years will not come close to the speed needed for many problems.
Couldn't the Squeak CCodeGenerator be leveraged? Smalltalk's been used for distributed processing before. I'm thinking about using Squeak to model a processing architecture, then using it to generate custom low-level processing code, then using it to coordinate the distribution of data and processing across a farm of machines, where the low-level processing occurs in custom-generated C accessed from a Squeak supervisor.
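The supervisor/worker split described above can be sketched in miniature. This is a hedged illustration, not Squeak or CCodeGenerator output: Python's multiprocessing pool stands in for the farm of machines, and a plain function (`crunch`, a name invented here) stands in for the custom-generated low-level kernel.

```python
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for the custom-generated C kernel: sum of squares over a chunk.
    return sum(x * x for x in chunk)

def supervise(data, n_workers=4, chunk_size=250):
    # The "Squeak supervisor" role: split the data, farm chunks out to
    # workers, then combine the partial results.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(n_workers) as pool:
        return sum(pool.map(crunch, chunks))

if __name__ == "__main__":
    # Gives the same answer as a serial sum of squares over the data.
    print(supervise(list(range(1000))))
```

The point of the shape, not the code, is that the coordination layer stays in the high-level language while only the inner loop is dropped down to generated C.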
I _don't_ think this is the place to start, but I think it's a strategy that can be used down the line to provide support for large-scale problems.
Just a thought, before the baby gets chucked out with the bathwater. Has anyone tried Squeak on a Beowulf cluster? On a Cray? On a mainframe? How does it scale?
David Buck's article in Smalltalk Chronicles seeks to lay to rest the idea that Smalltalk can't do serious number-crunching: "Have you heard that Smalltalk is inappropriate for real-time applications? Have you heard that it can't handle math-intensive applications? If so, you should look at ElastoLab."
Mind you, if we're talking of stuff as intensive as the Human Genome, we're talking of very serious number crunching. But one person I know who's working in part of this field is using an application developed in Delphi running on a P4!
Cheers
John