On Thu, Dec 12, 2013 at 1:08 PM, phil@highoctane.be <phil@highoctane.be> wrote:
http://m.pocketnow.com/2013/11/13/dalvik-vs-art
"The benefits? Some sources are reporting a 50% increase in speed. Others say it’s closer to 100%. Many claim they’ve seen their battery life increase by 25% or more!"
Got to love those 100% speed increases. Apps really fly when they take no time at all...
I would interpret twice as fast, and that's only 50% rather than 100% reduction of execution time :)
On Thu, Dec 12, 2013 at 1:23 PM, Nicolas Cellier <nicolas.cellier.aka.nice@gmail.com> wrote:
I would interpret twice as fast, and that's only 50% rather than 100% reduction of execution time :)
relative performance = (new time - old time) / old time, so 2x = -50%, 3x ≈ -67%, etc.
-100% means new time = 0, and performance is infinite.
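To make the arithmetic concrete, here is a minimal sketch of that measure (plain Python, purely for illustration; the function name is invented):

```python
def relative_performance(old_time, new_time):
    # (new - old) / old: negative means the new version takes less time
    return (new_time - old_time) / old_time

# Twice as fast: execution time halves -> -50%
print(relative_performance(100.0, 50.0))                  # -0.5
# Three times as fast: time drops to a third -> about -67%
print(round(relative_performance(100.0, 100.0 / 3), 3))   # -0.667
# New time of zero would be -100%: infinite speed
print(relative_performance(100.0, 0.0))                   # -1.0
```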
But he said increase in speed, not performance.
On Thu, Dec 12, 2013 at 2:53 PM, Nicolas Cellier <nicolas.cellier.aka.nice@gmail.com> wrote:
But he said increase in speed, not performance.
Same argument. What does it mean to complete a task 100% faster? What does it mean to reach your destination 100% faster?
-- best, Eliot
On 12 December 2013 23:57, Eliot Miranda eliot.miranda@gmail.com wrote:
Same argument. What does it mean to complete a task 100% faster? What does it mean to reach your destination 100% faster?
not much apart from the author's lack of literacy :)
Google wants to beat the speed of iOS. iOS uses native code and Objective-C's highly optimized dispatch loop, so with a less powerful piece of hardware, Apple devices trounce Android ones when it comes to smoothness of animations, battery life, etc.
As Android uses the Dalvik VM, the only way to get as speedy as iOS is to go as close to native as possible.
We are talking 60 fps over a lot of pixels these days. My Nexus 7 with 4.4.2 still feels laggy compared to my iPad 2, and that is due to the whole VM/Java thing: the iPad 2 is 2-core, while the Nexus 7 is 4-core with an NVIDIA graphics chip.
AOT compilation works very well for Java code (well, Android Dalvik bytecode here). http://en.wikipedia.org/wiki/AOT_compiler
AOT is getting a lot of traction these days; LLVM also has this.
So why care whether it is 50% or 100%? The point is that it is getting faster in a way that affects user perception, which is what matters.
For example, Pharo feels slow compared to Dolphin, Smalltalk/X, etc. Maybe the VMs are of equivalent speed, but the user experience is not.
Phil
If you want to know if it's the VM, I'd say just try Cuis.
On Fri, Dec 13, 2013 at 01:43:03AM +0100, Nicolas Cellier wrote:
If you want to know if it's the VM, I'd say just try Cuis.
+1
Not to say that VM performance isn't important, it absolutely is. So is memory, and so is a fast CPU. But nothing beats clean design and implementation, and Cuis is proof of that.
Dave
2013/12/13 phil@highoctane.be phil@highoctane.be
Google wants to beat the speed of iOS. iOS uses native code and Objective-C highly optimized dispatch loop. So, with a less powerful piece of hardware, Apple devices trounce Android ones when it comes to smoothness of animations, battery etc.
As Android uses the Dalvik VM, the only way to get as speedy as iOS is to go as close to native as possible.
We are talking 60 fps for a lot of pixels these days. My Nexus 7 with 4.4.2 still feels laggy compared to my iPad2. That is due to the whole VM/Java thing. iPad2 is 2-core, Nexus 7 is 4-core and with a NVidia graphics chip.
AOT is working very well when it comes to Java code (well, Android Dalvik bytecode here). http://en.wikipedia.org/wiki/AOT_compiler
AOT is getting a lot of traction these days. LLVM also has this.
So, why care about 50% or 100%, the point is that it is getting faster in a way that affects the user perception. Which is what matters.
For example, Pharo feels slow compared to Dolphin, Smalltalk/X etc. Maybe the VM is of equivalent speed. Not the user experience.
Phil
On Fri, Dec 13, 2013 at 12:13 AM, Igor Stasenko siguctua@gmail.comwrote:
On 12 December 2013 23:57, Eliot Miranda eliot.miranda@gmail.comwrote:
On Thu, Dec 12, 2013 at 2:53 PM, Nicolas Cellier < nicolas.cellier.aka.nice@gmail.com> wrote:
But he said increase in speed, not performance.
Same argument. What does it mean to complete a task 100% faster? What does it mean to reach your destination 100% faster?
not much apart from the author's lack of literacy :)
2013/12/12 Eliot Miranda eliot.miranda@gmail.com
On Thu, Dec 12, 2013 at 1:23 PM, Nicolas Cellier < nicolas.cellier.aka.nice@gmail.com> wrote:
> > I would interpret twice as fast, and that's only 50% rather than 100% > reduction of execution time :) >
relative performance = new time - old time / old time. 2x = -50% 3x = -75% etc
-100% means new time = 0, and performance is infinite.
> > > 2013/12/12 Eliot Miranda eliot.miranda@gmail.com > >> >> >> >> >> On Thu, Dec 12, 2013 at 1:08 PM, phil@highoctane.be < >> phil@highoctane.be> wrote: >> >>> >>> >>> http://m.pocketnow.com/2013/11/13/dalvik-vs-art >>> >> >> "The benefits? Some sources are reporting a 50% increase in speed. >> Others say it?s closer to 100%. Many claim they?ve seen their battery life >> increase by 25% or more!" >> >> Got to love those 100% speed increases. Apps really fly when they >> take no time at all... >> >> -- >> Eliot >> >> > >
-- best, Eliot
-- best, Eliot
-- Best regards, Igor Stasenko.
That's completely germane, but he didn't say 100% faster; it was me who said twice as fast. Speed is measurable, say, as a number of operations per second (ops): if my score is 10 ops, then a 100% increase in speed => 20 ops. I love percentages, for you can say one thing and be sure another will be understood ;)
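A tiny sketch (illustrative Python, function names invented) of the two readings being argued over: measured as a rate (ops per second), a 100% increase is simply a doubling, while the same doubling expressed as a change in per-operation time reads as -50%:

```python
def speed_increase(old_ops, new_ops):
    # percent change in speed, measured in operations per second
    return (new_ops - old_ops) / old_ops

def time_change(old_ops, new_ops):
    # the same change expressed as a relative change in time per operation
    old_t, new_t = 1.0 / old_ops, 1.0 / new_ops
    return (new_t - old_t) / old_t

# 10 ops/s -> 20 ops/s: a +100% increase in speed ...
print(speed_increase(10, 20))           # 1.0
# ... but only a 50% reduction in execution time
print(round(time_change(10, 20), 6))    # -0.5
```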
On Thu, Dec 12, 2013 at 5:16 PM, Nicolas Cellier <nicolas.cellier.aka.nice@gmail.com> wrote:
That's completely germane, but he didn't say 100% faster; it was me who said twice as fast.
I posted a quote: "Others say it’s closer to 100%." As Igor said, it's illiterate nonsense. I posted a jokey reply.
Speed is measurable, say, as a number of operations per second (ops): if my score is 10 ops, then a 100% increase in speed => 20 ops. I love percentages, for you can say one thing and be sure another will be understood ;)
That's why one must use a well-defined, unambiguous measure of performance: (new - old) / old. Hence "100% faster" is at best ambiguous and at worst meaningless.
-- best, Eliot
On 12-12-2013, at 1:20 PM, Eliot Miranda eliot.miranda@gmail.com wrote:
Got to love those 100% speed increases. Apps really fly when they take no time at all…
The downside is that infinitely fast computers crash instantly.
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim Useful random insult:- Several nuts over fruitcake minimum.
On 12-12-2013, at 1:08 PM, phil@highoctane.be wrote:
What goes around, comes around I guess. This AOT stuff was proposed many years ago in an ACM journal; distribute code as something like source, process it on each machine at install, load, or early run time. The argument was that machines were by then fast enough that somewhat pre-digested source could be locally polished & ready to run without bothering the user.
I’ve argued for a related idea in our VM on a few occasions, i.e. keeping the translated code along with each method and saving it in the image, with obvious requirements for flushing the cached code if the image starts up on a different architecture. Another idea was to have a HonkinBigServer somewhere on the net that would deliver the digested versions of methods to a system, a bit like Spoon, allowing even dinky machines to benefit. Assuming they had tolerable network access, of course.
JIT compiling has the advantage that only what you actually run gets processed, but you lose the processed code on occasion (cache overflow, image startup, etc.). Pre-processing means dealing with all the code, costing processing time and space, but usually saving run time. I suspect a hybrid would provide a configurable mix of benefits that might be best.
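The flushing requirement mentioned above can be sketched roughly like this (a hypothetical Python model, not the actual VM's code; all names are invented): cache translated code per method, tagged with the architecture it was generated for, and discard stale entries at startup:

```python
class MethodCodeCache:
    """Toy model of keeping translated code alongside methods in the image.
    Entries are tagged with the target architecture; anything generated
    for a different architecture is flushed when the image starts up."""

    def __init__(self, arch):
        self.arch = arch
        self.entries = {}  # method name -> (arch, translated code)

    def store(self, method, code):
        self.entries[method] = (self.arch, code)

    def lookup(self, method):
        entry = self.entries.get(method)
        if entry and entry[0] == self.arch:
            return entry[1]
        return None  # miss: the method must be retranslated

    def startup(self, current_arch):
        # image restarted, possibly on a different architecture:
        # keep only entries that match the current architecture
        self.arch = current_arch
        self.entries = {m: e for m, e in self.entries.items()
                        if e[0] == current_arch}

cache = MethodCodeCache("x86")
cache.store("OrderedCollection>>do:", "<x86 code>")
cache.startup("ARM")
print(cache.lookup("OrderedCollection>>do:"))  # None: flushed, retranslate
```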
Of course, the first thing that happens when a system gets faster or bigger is that some fool comes along and uses O(n^3) algorithms to replace some nice fast O(n) code.
tim -- tim Rowledge; tim@rowledge.org; http://www.rowledge.org/tim loggerheads - lumberjack sniffing addicts
On 12/13/2013 04:31 AM, tim Rowledge wrote:
I’ve argued for a related idea in our vm on a few occasions, ie keeping the translated code along with each method and saving it in the image, with obvious requirements for flushing the cached stuff if the image starts up on a different architecture.
Just like NativeBoost does today :)
And I think Exupery does/did something similar (not sure Bryce is still working on it - the archive seems silent since 2010).
regards, Göran
I am not a VM guy. But, is the Smalltalk in a C World article compiling Smalltalk to machine code to run without the VM? http://www.cl.cam.ac.uk/~dc552/papers/SmalltalkInACWorld.pdf
He talks about the VM being a relic of the past. Is that true?
All the best, Aik-Siong Koh
-- View this message in context: http://forum.world.st/Dalvik-vs-ART-Android-virtual-machines-and-the-battle-... Sent from the Squeak VM mailing list archive at Nabble.com.
On Fri, Dec 13, 2013 at 1:07 PM, askoh askoh@askoh.com wrote:
Is the Java VM a relic of the past? Given portable devices is compiled code a relic of the past? Is a safe development environment with fast compile times a thing of the past? Their own conclusions imply that the answer is not yet:
"Our current approach lacks some of the advantages of Smalltalk. The most obvious of these is debugging. Our current implementation emits very sparse DWARF debugging information and so is fairly limited in terms of debugging support even in comparison to C, and therefore a long way behind the state of the art for Smalltalk circa 1980. This is currently the focus of ongoing work. Once this is done, then implementing things like thisContext making use of debug metadata become possible. In our current implementation, run-time introspection is only available for objects and variables bound to blocks, not for activation records.
Closely related is the rest of the IDE. In traditional Smalltalk implementations, the IDE is closely integrated with the execution environment. GNU Smalltalk is the major exception, and provides a model close to ours. Building a good IDE and debugger is beyond the scope of the LanguageKit project, but building these tools on top of LanguageKit is a goal of Étoilé."
All the best, Aik-Siong Koh
Good to know. So, can we debug using the VM and then deliver the Smalltalk compiled?
On Fri, Dec 13, 2013 at 1:45 PM, askoh askoh@askoh.com wrote:
Good to know. So, can we debug using the VM and then deliver the Smalltalk compiled?
In the right context, yes. Deploying on a cloud, maybe; deploying on a phone, probably not.
Hello,
2013/12/13 Eliot Miranda eliot.miranda@gmail.com
Is the Java VM a relic of the past? Given portable devices is compiled code a relic of the past? Is a safe development environment with fast compile times a thing of the past? Their own conclusions imply that the answer is not yet:
What do you mean by "not yet"? Do you think that in 10 or 20 years VMs will be obsolete?
There's one detail I am not sure about. By JIT in this article (http://m.pocketnow.com/2013/11/13/dalvik-vs-art), does he mean the bytecode-to-native-code generator only, or the native code generator plus inline cache management plus the adaptive recompiler?
I'm wondering: even if they store their code as native code instead of bytecode, do they have some kind of native code generator for adaptive recompilation to reach such performance?
And how do they manage their inline caches? As all methods are native, some monomorphic inline caches could be promoted to a PIC due to one very rare case, and then, as they always keep the same n-code, this send site would be slower forever. Does this mean they would need to empty inline caches sometimes?
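For readers unfamiliar with the mechanism being described, here is a toy sketch (illustrative Python, not any real VM's code; all names are invented) of a send site going monomorphic to polymorphic, and why a PIC entered because of one rare receiver type sticks around until the cache is explicitly emptied:

```python
class SendSite:
    """Toy inline cache: monomorphic until a second receiver class shows up,
    then a polymorphic inline cache (PIC) that keeps every class it has
    ever seen until it is explicitly flushed."""

    def __init__(self):
        self.classes = []  # receiver classes seen at this send site

    def record(self, receiver):
        cls = type(receiver).__name__
        if cls not in self.classes:
            self.classes.append(cls)

    def state(self):
        if not self.classes:
            return "unlinked"
        return "monomorphic" if len(self.classes) == 1 else "PIC"

    def flush(self):
        # with fixed stored native code, flushing is the only way back
        # to the (faster) monomorphic case
        self.classes = []

site = SendSite()
for _ in range(1000):
    site.record(42)      # the overwhelmingly common receiver type
site.record("rare")      # one rare receiver promotes the site
print(site.state())      # PIC -- and it stays a PIC until flushed
site.flush()
site.record(42)
print(site.state())      # monomorphic again
```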
Isn't the Objective-C runtime a kind of VM? It might not be byte code based, but it does lots of other things that we need VMs for generally, like dynamic dispatch and such.
I realize the machinery is different, but if it walks like a duck…
Seems like the linked paper is another example of the "use the most popular VM" strategy, rather than an argument that "VMs are dead" to me. Feel free to fire back if I'm completely and dead wrong, but it just doesn't seem that different from Smalltalk on the JVM, etc…
Casey
Hi Clément,
On Sat, Dec 14, 2013 at 12:04 AM, Clément Bera <bera.clement@gmail.com> wrote:
What do you mean by not yet ? Do you think that in a 10 or 20 years VMs will be obsolete ?
Personally, no. In fact I think the trend is towards ever more virtualization, ever more language-to-language translation, ever more modelling of computation (for example, reversible debuggers). The cloud actually implies that computation is better managed virtually, because hardware improves, crashes, moves, etc., and if there are long-running computations, tying them down to one piece of hardware seems limiting. Azul Systems already builds Java-specific servers that allow processors to evolve their ISA over time.
There's 1 detail I am not sure. By JIT in this article (http://m.pocketnow.com/2013/11/13/dalvik-vs-art), does he mean the bytecode to native code generator only or the native code generator + inline cache management + adaptive recompiler.
I don't know, and I don't think the author knows enough to differentiate. It's not a well-written article. He claims AOT is a new concept; it's not: Microsoft's .NET has done this for a while. He claims "This code is mostly uncompiled. That means it’s slower than compiled code would be, but your device gets the “insulation” advantages that VMs provide", which is comparing apples to oranges (one can produce safe native code; a VM is an example of it, and there's no fundamental reason why jitted code is slower than native code; in some cases it can be faster). So I don't think it's worth worrying about. I'd rather read a well-written paper on KitKat than spend any more effort on this article.
-- best, Eliot
2013/12/17 Eliot Miranda eliot.miranda@gmail.com
Personally no. In fact I think the trend is towards ever more virtualization, ever more language-to-language translation, ever more modelling of computation (for example reversible debuggers). The cloud actually implies that computation is better managed virtually because hardware improves, crashes, moves, etc, and if there are long-running computations tying them down to one piece of hardware seems limited. Azul systems already build java-specific server that allows for processors to evolve their ISA over time.
I don't think the concept of "no VM" is so clear or straightforward. There are some projects (like Bee Smalltalk and Mist) claiming that their low-level implementation produces a straight executable attached to the language application rather than a VM. But that does not mean having fewer abstraction levels or no virtualization. It means that progress on compilation toolchains, and moving as much as possible into language-side loadable libraries, provides a lot of advantages, and that perhaps you don't need a separate concept of a VM outside your language.
vm-dev@lists.squeakfoundation.org