On Wed, Sep 3, 2008 at 7:56 AM, Dan Ingalls Dan@squeakland.org wrote:
David Griswold david.griswold.256@gmail.com wrote...
- Given who the developers are, and with Google behind it, it will be
the fastest JavaScript VM for a long time to come.
There's no doubt that V8 puts us on a new plateau, that the V8 team is good, and that the further attention being put on multiprocessing and security is critical. However, VMs are not black magic, and you might be interested to read some comparisons with TraceMonkey...
http://weblogs.mozillazine.org/roadmap/archives/2008/09/tracemonkey_update.html
One thing is clear: JavaScript *is* the assembly language of the Internet, at least for a few years now.
- Dan
Yes, it's becoming clear that V8 doesn't do anything that radically new or magical, and that it was specifically designed for JavaScript, not as any kind of universal dynamic language VM. I'm especially disappointed at Eliot's observation that it doesn't have a bytecode intermediate form, although they may do mixed-mode execution by first interpreting the AST.
I'm not sure that the TraceMonkey benchmarks are very definitive, but at the least it seems like V8 isn't blowing everything else away.
But you are right; at least now that there are multiple fast JavaScript implementations, a lot more stuff will target it. -Dave
On Fri, Sep 5, 2008 at 12:17 PM, David Griswold david.griswold.256@gmail.com wrote:
But you are right; at least now that there are multiple fast JavaScript implementations, a lot more stuff will target it.
One interesting (if odd) just-released language that targets JavaScript is Objective-J: http://cappuccino.org/ . It's a near clone of Objective-C (only without the C), that compiles to JavaScript on the fly in the browser. For example:
@import <Foundation/CPString.j>

@implementation CPString (Reversing)

- (CPString)reverse
{
    var reversedString = "",
        index = [self length];

    while (index--)
        reversedString += [self characterAtIndex: index];

    return reversedString;
}

@end
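One way to picture what such code becomes is a translation where every bracketed send turns into an explicit dispatch call. The helper and wrapper below are toy stand-ins invented for illustration, not the actual Objective-J/Cappuccino runtime:

```javascript
// A toy dispatch helper, not the real Objective-J runtime: it looks the
// selector up on the receiver and applies it, mimicking [receiver sel: arg].
function msgSend(receiver, selector, ...args) {
  return receiver[selector](...args);
}

// A tiny CPString-like wrapper (hypothetical), exposing length and
// characterAtIndex as callable methods rather than bare properties.
const TinyString = (s) => ({
  length: () => s.length,
  characterAtIndex: (i) => s.charAt(i),
});

// Hand-translation of the reverse method above in the same style:
function reverse(self) {
  let reversedString = "", index = msgSend(self, "length");
  while (index--) reversedString += msgSend(self, "characterAtIndex", index);
  return reversedString;
}

console.log(reverse(TinyString("hello"))); // prints "olleh"
```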
I suppose that's as close to Smalltalk running on V8 as we're likely to see for a while...
Avi
Avi Bryant wrote:
I suppose that's as close to Smalltalk running on V8 as we're likely to see for a while...
How about this?
http://www.squeaksource.com/ST2JS.html
It would be amazingly cool if someone could translate tinyBenchmarks through it (or some other benchmark) and see how that comes out ;-)
Cheers, - Andreas
Am 05.09.2008 um 22:25 schrieb Andreas Raab:
Avi Bryant wrote:
I suppose that's as close to Smalltalk running on V8 as we're likely to see for a while...
How about this?
http://www.squeaksource.com/ST2JS.html
It would be amazingly cool if someone could translate tinyBenchmarks through it (or some other benchmark) and see how that comes out ;-)
Cheers,
- Andreas
You are certainly aware of
http://www.cs.ucla.edu/~awarth/ometa/ometa-js
- Bert -
On Fri, Sep 5, 2008 at 4:04 PM, Avi Bryant avi@dabbledb.com wrote:
One interesting (if odd) just-released language that targets JavaScript is Objective-J: http://cappuccino.org/ . It's a near clone of Objective-C (only without the C), that compiles to JavaScript on the fly in the browser.
What is the point of Objective-J? I looked into it a while back and didn't get it. The only advantage I could imagine was being able to take some Objective-C code and readily port it to Objective-J. And perhaps the familiarity of the syntax to people that already know Objective-C is worth something. But, in most respects, Objective-C is inferior to Javascript as far as I can tell (for example Objective-C lacks closures).
- Stephen
On 5-Sep-08, at 1:38 PM, Stephen Pair wrote:
What is the point of Objective-J? I looked into it a while back and didn't get it. The only advantage I could imagine was being able to take some Objective-C code and readily port it to Objective-J. And perhaps the familiarity of the syntax to people that already know Objective-C is worth something. But, in most respects, Objective-C is inferior to Javascript as far as I can tell (for example Objective-C lacks closures).
I've never used Objective-J, so I don't know for sure, but one thing that I find attractive is that it introduces message sends. It's easy to forget, but object.doSomething() is *not* a message, it's a property access and function call. This causes real problems - see for example, all the advice against modifying the Object prototype, because it renders the for...in construct useless. If Objective-J solves that problem, it's valuable.
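For anyone who hasn't run into it, the for...in problem described above can be reproduced in a few lines of plain JavaScript:

```javascript
// for...in walks inherited (enumerable) properties too, so extending
// Object.prototype leaks into every enumeration in the program.
Object.prototype.doSomething = function () { return 42; };

const point = { x: 1, y: 2 };
const keys = [];
for (const k in point) keys.push(k);

console.log(keys); // ["x", "y", "doSomething"] - the "method" shows up as data
```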
Colin
On Sep 5, 2008, at 9:49 AM, Colin Putney wrote:
On 5-Sep-08, at 1:38 PM, Stephen Pair wrote:
What is the point of Objective-J? I looked into it a while back and didn't get it. The only advantage I could imagine was being able to take some Objective-C code and readily port it to Objective- J. And perhaps the familiarity of the syntax to people that already know Objective-C is worth something. But, in most respects, Objective-C is inferior to Javascript as far as I can tell (for example Objective-C lacks closures).
I've never used Objective-J, so I don't know for sure, but one thing that I find attractive is that it introduces message sends. It's easy to forget, but object.doSomething() is *not* a message, it's a property access and function call. This causes real problems - see for example, all the advice against modifying the Object prototype, because it renders the for...in construct useless. If Objective-J solves that problem, it's valuable.
I'm not sure about that, but Objective-J does seem to be full JavaScript with an Objective-C-style message-sending and class model layered above bare JavaScript objects, so closures do seem to work, even if the target audience of Objective-C programmers will probably fall back on their usual idioms most of the time.
To me what is great about having fast JavaScript available is that:

1. JavaScript *comes with* every browser, like Basic used to come with every PC, and
2. JavaScript, although not what I would call elegant(*), is quite flexible and powerful.
Together these enable all sorts of experiments with application architectures, with languages, and so on, all without needing installation, and thus free of all the problems of plugins, firewalls, etc., and therefore instantly sharable with researchers and users alike.
The Lively Kernel was born out of this point of view, and Alex's OMeta is a wonderful tool for exploring the language side of things. And there is a lot to discover right under our noses, I believe. For instance, some of the JavaScript subsets (AdSafe, Caja, Cajita, etc.) do get into the simple, powerful realm, and could enjoy even greater speed. Also there are numerous experiments waiting to be done with capabilities, islands, and other approaches to true modularity. Until now there has been a substantial startup barrier, but now any new world of experimental semantics can be just a click away, and fully integrated with all the other material on the web.
- Dan
On Fri, Sep 5, 2008 at 12:49 PM, Colin Putney cputney@wiresong.ca wrote:
On 5-Sep-08, at 1:38 PM, Stephen Pair wrote:
What is the point of Objective-J? I looked into it a while back and
didn't get it. The only advantage I could imagine was being able to take some Objective-C code and readily port it to Objective-J. And perhaps the familiarity of the syntax to people that already know Objective-C is worth something. But, in most respects, Objective-C is inferior to Javascript as far as I can tell (for example Objective-C lacks closures).
I've never used Objective-J, so I don't know for sure, but one thing that I find attractive is that it introduces message sends. It's easy to forget, but object.doSomething() is *not* a message, it's a property access and function call.
Could you say a bit more about why this is, if possible in the context of Self message sends - e.g. would you say that in Self
object doSomething
is not a message because it is a slot access?
This causes real problems - see for example, all the advice against modifying the Object prototype, because it renders the for...in construct useless.
for...in works as advertised, enumerating every property associated with an object. hasOwnProperty is available in cases where it's necessary to distinguish between instance and inherited properties. The problems with for...in stem in large part from the large body of code in use written by people who either didn't know about or didn't understand prototype-based OO, and thus *assumed* there were only instance properties. If one doesn't need libraries that make the above-mentioned assumption, modifying Object.prototype is no more of an issue than modifying Object in Squeak. Depending on one's view about Namespaces, that could be a significant issue, but I don't see how it has anything to do with for...in.
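The point about hasOwnProperty can be sketched in a few lines: the same prototype extension that trips naive enumeration is harmless once inherited properties are filtered out:

```javascript
// Deliberately "pollute" Object.prototype with an enumerable property.
Object.prototype.reverseDo = function () {};

const point = { x: 1, y: 2 };
const own = [], all = [];
for (const k in point) {
  all.push(k);                            // sees inherited properties too
  if (point.hasOwnProperty(k)) own.push(k); // keeps only instance properties
}

console.log(all); // ["x", "y", "reverseDo"]
console.log(own); // ["x", "y"] - the inherited property is filtered out
```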
Laurence
If Objective-J solves that problem, it's valuable.
Colin
It would be very interesting to compare Smee's performance to this new generation. I'm also thinking that the hidden class strategy could be applied to Smee to make it incredibly fast. I'm not sure how this helps, but it is very interesting...
On Sep 5, 2008, at 3:38 PM, "Stephen Pair" stephen@pairhome.net wrote:
On Fri, Sep 5, 2008 at 4:04 PM, Avi Bryant avi@dabbledb.com wrote:
One interesting (if odd) just-released language that targets JavaScript is Objective-J: http://cappuccino.org/ . It's a near clone of Objective-C (only without the C), that compiles to JavaScript on the fly in the browser.
What is the point of Objective-J? I looked into it a while back and didn't get it. The only advantage I could imagine was being able to take some Objective-C code and readily port it to Objective-J. And perhaps the familiarity of the syntax to people that already know Objective-C is worth something. But, in most respects, Objective-C is inferior to Javascript as far as I can tell (for example Objective-C lacks closures).
- Stephen
On Fri, Sep 5, 2008 at 3:17 PM, David Griswold david.griswold.256@gmail.com wrote:
Yes, it's becoming clear that V8 doesn't do anything that radically new or magical, and that it was specifically designed for JavaScript, not as any kind of universal dynamic language VM. I'm especially disappointed at Eliot's observation that it doesn't have a bytecode intermediate form, although they may do mixed-mode execution by first interpreting the AST.
Why is that a bad thing? I actually thought that was one of the most interesting aspects. Bytecodes can provide you a concise portable format, but you could also do that by compressing or otherwise condensing source code (which I guess one way of condensing is to map to bytecode). I'm not saying bytecodes wouldn't be desirable, but there's something appealing (to me) about a direct translation from source to machine code and I'm curious what other advantages bytecodes might have.
Also, in a really pure OO VM and language, what would the bytecode set reduce to? Three instructions? push, pop and send?
- Stephen
On Fri, Sep 5, 2008 at 1:30 PM, Stephen Pair stephen@pairhome.net wrote:
On Fri, Sep 5, 2008 at 3:17 PM, David Griswold <david.griswold.256@gmail.com> wrote:
Yes, it's becoming clear that V8 doesn't do anything that radically new or magical, and that it was specifically designed for JavaScript, not as any kind of universal dynamic language VM. I'm especially disappointed at Eliot's observation that it doesn't have a bytecode intermediate form, although they may do mixed-mode execution by first interpreting the AST.
Why is that a bad thing? I actually thought that was one of the most interesting aspects. Bytecodes can provide you a concise portable format, but you could also do that by compressing or otherwise condensing source code (which I guess one way of condensing is to map to bytecode). I'm not saying bytecodes wouldn't be desirable, but there's something appealing (to me) about a direct translation from source to machine code and I'm curious what other advantages bytecodes might have.
Also, in a really pure OO VM and language, what would the bytecode set reduce to? Three instructions? push, pop and send?
push arg/temp
pop-store temp
push literal
push self
pop
dup
return top
block return top
send
send super/outer et al
push inst var (still need this even though it is only used in accessors)
pop-store inst var (ditto)

plus some form of closure support, e.g.

create block
push new array
push non-local temp
pop-store non-local temp
which is not much fewer than there are now. Don't confuse encoding with semantics. My Squeak compiler (heavily derivative of the current Squeak compiler) currently has 34 opcodes distributed over 253 bytecodes. Of these, 7 are for optimizations other than inlining blocks, so the set reduces to 27. That's essentially twice as many as the list above, which you get by dropping direct access to literals.
The number of opcodes for a pure OO language is around 15, slightly more than 3 :)
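For concreteness, a toy stack machine along these lines can be sketched in JavaScript (the opcode names and encoding are invented here, not taken from any real VM); everything beyond push and return is just a send:

```javascript
// A toy stack machine built around little more than push and send.
// "Methods" are plain JS functions looked up on the receiver by selector.
function run(code, consts, receiver) {
  const stack = [];
  for (const [op, arg, nargs] of code) {
    switch (op) {
      case "pushSelf":  stack.push(receiver); break;
      case "pushConst": stack.push(consts[arg]); break;
      case "send": {
        // Pop the arguments and receiver, dispatch, push the result.
        const args = stack.splice(stack.length - nargs, nargs);
        const rcvr = stack.pop();
        stack.push(rcvr[arg](...args));
        break;
      }
      case "returnTop": return stack.pop();
    }
  }
}

// A tiny number-object so that even "+" is a message send:
const num = (v) => ({ v, plus(o) { return num(this.v + o.v); } });

// Bytecode for "self plus: 4" with self = 3:
const code = [
  ["pushSelf"],
  ["pushConst", 0],
  ["send", "plus", 1],
  ["returnTop"],
];
const result = run(code, [num(4)], num(3));
console.log(result.v); // 7
```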
- Stephen
which is not much fewer than there are now. Don't confuse encoding with semantics. My Squeak compiler (heavily derivative of the current Squeak compiler) currently has 34 opcodes distributed over 253 bytecodes. Of these, 7 are for optimizations other than inlining blocks, so reduce to 27.
GNU Smalltalk has 22 opcodes that are very close to Eliot's list (I marked the changes with notes on the right):
SEND (n, super, num_args);      << Eliot's "send" and "send super"
PUSH_TEMPORARY_VARIABLE (n);
PUSH_OUTER_TEMP (n, scopes);
PUSH_LIT_VARIABLE (n);          << could be push constant + #value
PUSH_RECEIVER_VARIABLE (n);
STORE_TEMPORARY_VARIABLE (n);
STORE_OUTER_TEMP (n, scopes);
STORE_LIT_VARIABLE (n);         << could be #value:
STORE_RECEIVER_VARIABLE (n);
JUMP (ofs);                     << could be done with sends
POP_JUMP_TRUE (ofs);            << likewise
POP_JUMP_FALSE (ofs);           << likewise
PUSH_INTEGER (n);               << could be push constant
PUSH_SELF;
PUSH_SPECIAL (n);               << could be push constant
PUSH_LIT_CONSTANT (n);
POP_INTO_NEW_STACKTOP (n);      << Eliot's make array
POP_STACK_TOP;
MAKE_DIRTY_BLOCK;
RETURN_METHOD_STACK_TOP;
RETURN_CONTEXT_STACK_TOP;
DUP_STACK_TOP;
(24 counting the "line number" mark and a special "exit interpreter" bytecode used by only one method in the entire system) distributed over 58 bytecodes. There are 6 unused bytecodes, and 192 more bytecodes are used for "composite" operations like
PUSH_LIT_CONSTANT (arg);
MAKE_DIRTY_BLOCK ();
or
DUP_STACK_TOP ();
PUSH_TEMPORARY_VARIABLE (arg);
PUSH_INTEGER (1);
PLUS_SPECIAL ();
My 2 cents,
Paolo
On Fri, Sep 5, 2008 at 1:30 PM, Stephen Pair stephen@pairhome.net wrote:
On Fri, Sep 5, 2008 at 3:17 PM, David Griswold <david.griswold.256@gmail.com> wrote:
Yes, it's becoming clear that V8 doesn't do anything that radically new or magical, and that it was specifically designed for JavaScript, not as any kind of universal dynamic language VM. I'm especially disappointed at Eliot's observation that it doesn't have a bytecode intermediate form, although they may do mixed-mode execution by first interpreting the AST.
Why is that a bad thing? I actually thought that was one of the most interesting aspects. Bytecodes can provide you a concise portable format, but you could also do that by compressing or otherwise condensing source code (which I guess one way of condensing is to map to bytecode). I'm not saying bytecodes wouldn't be desirable, but there's something appealing (to me) about a direct translation from source to machine code and I'm curious what other advantages bytecodes might have.
Bytecodes can have two completely independent purposes: an external format for transporting code, and an internal format for executing code. I am talking about the second purpose. For dynamic languages with tagging, compiled code is much more verbose than in languages like C. If you have to compile a large body of such code, it can use a huge amount of space, especially if you don't have a good optimizer.
That is why VisualWorks keeps only a cache of the current working set of compiled code, not all of the code that has been compiled. That is also why Strongtalk uses mixed-mode execution, i.e. only compiles hotspots, and interprets the rest of the code. For an example of what happens when you try to compile everything that runs, see Self, which sucked down at least 64MB to run the image back in the day (I don't know if that has changed).
Also, compiling code that isn't in a hotspot is a waste of time, since you spend much longer compiling the code than you would have spent interpreting it, and can cause big compile pauses if the compiler does much optimization.
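The counter-based heuristic behind mixed-mode execution can be sketched in a few lines of JavaScript (the names and threshold are invented for illustration; real systems like Strongtalk and HotSpot are far more sophisticated):

```javascript
// Sketch of the mixed-mode idea: run each method through a cheap path until
// its invocation counter crosses a threshold, then switch to a version that
// is expensive to produce but fast to run.
const THRESHOLD = 2;

function makeHotspotMethod(interpret, compile) {
  let count = 0, compiled = null;
  return (...args) => {
    if (compiled) return compiled(...args);  // hot: use the compiled version
    if (++count >= THRESHOLD) compiled = compile(); // pay compile cost once
    return interpret(...args);               // cold: stay on the slow path
  };
}

// "Interpreter" and "compiler" for x => x * x, faked for illustration:
const square = makeHotspotMethod(
  (x) => x * x,          // slow, generic path
  () => (x) => x * x     // produce a specialized fast path
);

square(2); // interpreted
square(3); // interpreted; counter trips, compiled version installed
console.log(square(4)); // 16, via the "compiled" path
```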
Also, compiling code that isn't in a hotspot is a waste of time, since you spend much longer compiling the code than you would have spent interpreting it, and can cause big compile pauses if the compiler does much optimization.
You can use a simple 1-pass compilation (without even register allocation) that takes just a few ms for any reasonable amount of code. Don't forget that V8 runs (mainly) in a browser--if the browser is sufficiently snappy, the user won't notice a compilation pause of a few milliseconds.
Paolo
On Sun, Sep 7, 2008 at 4:22 AM, Paolo Bonzini bonzini@gnu.org wrote:
Also, compiling code that isn't in a hotspot is a waste of time, since you spend much longer compiling the code than you would have spent interpreting it, and can cause big compile pauses if the compiler does much optimization.
You can use a simple 1-pass compilation (without even register allocation) that takes just a few ms for any reasonable amount of code. Don't forget that V8 runs (mainly) in a browser--if the browser is sufficiently snappy, the user won't notice a compilation pause of a few milliseconds.
Such a compiler is fast, but it produces verbose code. I was talking about a large body of code, the sort that might begin to be used if V8 is used on the desktop for things like running an entire Smalltalk image, for example. In those sorts of cases, there is a difficult trade-off: if you use a fast compiler like the one you are talking about, it produces verbose code that consumes a lot of space, which was the problem for Self. On the other hand if you use a good compiler that produces more compact code, then you get large compilation pauses.
Using a code cache like in Visualworks or mixed-mode hotspot compilation like we did in Strongtalk and the JVM is how to get around that trade-off. The question is, what does V8 do to solve this problem? That is not yet clear. If they don't have some compact way of representing and executing uncommonly-used code, then it will have problems with large bodies of code.
On Sun, Sep 7, 2008 at 4:56 AM, David Griswold david.griswold.256@gmail.com wrote:
On Sun, Sep 7, 2008 at 4:22 AM, Paolo Bonzini bonzini@gnu.org wrote:
Also, compiling code that isn't in a hotspot is a waste of time, since you spend much longer compiling the code than you would have spent interpreting it, and can cause big compile pauses if the compiler does much optimization.
You can use a simple 1-pass compilation (without even register allocation) that takes just a few ms for any reasonable amount of code. Don't forget that V8 runs (mainly) in a browser--if the browser is sufficiently snappy, the user won't notice a compilation pause of a few milliseconds.
Such a compiler is fast, but it produces verbose code. I was talking about a large body of code, the sort that might begin to be used if V8 is used on the desktop for things like running an entire Smalltalk image, for example. In those sorts of cases, there is a difficult trade-off: if you use a fast compiler like the one you are talking about, it produces verbose code that consumes a lot of space, which was the problem for Self. On the other hand if you use a good compiler that produces more compact code, then you get large compilation pauses.
Using a code cache like in Visualworks or mixed-mode hotspot compilation like we did in Strongtalk and the JVM is how to get around that trade-off. The question is, what does V8 do to solve this problem? That is not yet clear. If they don't have some compact way of representing and executing uncommonly-used code, then it will have problems with large bodies of code.
David is exactly right. I think v8 does not yet have to face this problem because it is compiling javascript programs in web pages where the largest programs are a few hundred k bytes of source (e.g. the lively kernel at around 300k).
That doesn't invalidate v8 for its intended use. As they find it doesn't scale, they'll have to evolve the implementation. But right now v8 is the fastest js engine on the planet and it works just fine :)
You can use a simple 1-pass compilation (without even register allocation) that takes just a few ms for any reasonable amount of code. Such a compiler is fast, but it produces verbose code.
David is exactly right. I think v8 does not yet have to face this problem because it is compiling javascript programs in web pages where the largest programs are a few hundred k bytes of source (e.g. the lively kernel at around 300k).
Yes, that's what I meant too. They can afford a fast nonoptimizing compiler, at least now.
Paolo
On Sun, Sep 7, 2008 at 7:14 AM, Eliot Miranda eliot.miranda@gmail.com wrote:
[...] David is exactly right. I think v8 does not yet have to face this problem because it is compiling javascript programs in web pages where the largest programs are a few hundred k bytes of source (e.g. the lively kernel at around 300k). [...]
It would be really interesting to see how V8's memory usage and startup time increase when running the lively kernel.
squeak-dev@lists.squeakfoundation.org