There has been a lot of discussion about Magma performance recently, so I would like to address the question of performance directly.
The just-announced 1.2 alpha addresses two major performance bottlenecks: the finalization process and the slowness of #become:. There are many improvements since 1.1; please use the 1.2 releases for all new development. Note: 1.1 repositories require an upgrade before they can be read with 1.2.
I don't know of any tool or framework for Squeak or Pharo that comes close to offering the number and quality of performance-measuring and tuning tools that Magma does.
MagmaBenchmarker is the place to start. It's available in the "Magma Tools" package. The numbers reported by the benchmarker are best-case, so it's a good litmus test of whether to consider Magma for the job.
There are also a number of statistics Magma captures while running. Statistics are captured by each client session, and the server tallies its own statistics too. I added this for Hilaire a couple of years ago while improving Magma's performance over slow networks (ISDN) by compressing the data sent over sockets. The statistics-gathering enhancement is designed so that Magma users can simply print the report and paste it here on this list, revealing a lot about how their application uses Magma.
Of course, MessageTally spies are always very useful too.
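For anyone who hasn't used it, spying on a Magma operation is a one-liner with #spyOn:. A minimal sketch, where mySession, myRoot, and newOrders are stand-ins for your own session and domain objects:

```smalltalk
"Profile a block of Magma activity; the commit shown is a placeholder
 for whatever operation you want to measure.  When the block finishes,
 MessageTally opens a tree of where the time was spent."
MessageTally spyOn:
	[ mySession commit:
		[ myRoot at: #orders put: newOrders ] ]
```

The resulting tally makes it easy to see whether time is going to serialization, socket I/O, or your own domain code.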
All of these diagnostics are useful in determining which performance-tuning techniques should be employed for improvement.
I would like to encourage use of these tools and, when all else fails, to post a question here. If you can't include a change-set that replicates the problem, then background information along with a MessageTally spy will be helpful.
- Chris
On Thu, Nov 11, 2010 at 6:56 PM, Chris Muller asqueaker@gmail.com wrote:
The new 1.2 alpha just announced addresses two major performance bottlenecks: the finalization process and the slowness of #become:.
Did you manage to improve the #become: slowness? If so, I am interested in knowing how.
Thanks
mariano
Magma mailing list Magma@lists.squeakfoundation.org http://lists.squeakfoundation.org/mailman/listinfo/magma
On 12 November 2010 06:14, Mariano Martinez Peck marianopeck@gmail.com wrote:
On Thu, Nov 11, 2010 at 6:56 PM, Chris Muller asqueaker@gmail.com wrote:
The new 1.2 alpha just announced addresses two major performance bottlenecks: the finalization process and the slowness of #become:.
Did you manage to improve the #become: slowness? If so, I am interested in knowing how.
No, you can't make #become: itself faster, because it's implemented on the VM side. However, you can avoid using it too often, and apply it to a large set of objects at once rather than to individual ones, by using the bulk-become primitive.
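In Squeak the bulk-become primitive is reachable from the image as #elementsExchangeIdentityWith: (and its one-way cousin #elementsForwardIdentityTo:), which swaps the identities of two parallel arrays of objects in a single primitive call. A sketch, where the proxy and real-object variables are illustrative stand-ins:

```smalltalk
"oldProxy1/2 and realObject1/2 are hypothetical stand-ins for the
 objects whose identities you want to exchange."
| olds news |
olds := Array with: oldProxy1 with: oldProxy2.
news := Array with: realObject1 with: realObject2.
"One primitive call swaps every pair at once, so the expensive
 pointer-rewriting pass happens once instead of N times."
olds elementsExchangeIdentityWith: news
```

Batching matters because each become triggers a scan that rewrites references; doing the whole batch in one call amortizes that scan across all the pairs.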
Hi all!
On 11/12/2010 05:40 AM, Igor Stasenko wrote:
On 12 November 2010 06:14, Mariano Martinez Peck marianopeck@gmail.com wrote:
Did you manage to improve the #become: slowness? If so, I am interested in knowing how.
No, you can't make #become: itself faster, because it's implemented on the VM side. However, you can avoid using it too often, and apply it to a large set of objects at once rather than to individual ones, by using the bulk-become primitive.
Or try RoarVM - since it uses an object table, #become: should be near-instantaneous. Of course, I have no idea whether it is practical to use RoarVM at this point; just an observation.
regards, Göran
2010/11/12 Göran Krampe goran@krampe.se:
Or try RoarVM - since it uses an object table, #become: should be near-instantaneous. Of course, I have no idea whether it is practical to use RoarVM at this point; just an observation.
Yes, except that an object table slows everything else down by itself :)
[OT] I wonder whether, for a VM, it is generally better to use an object table compared to direct pointers, and whether the advantages can outweigh the speed loss from the additional level of indirection. [/OT]
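The trade-off can be sketched in a few lines: with an object table every reference is an index into the table, so a become only has to swap two table slots, while a direct-pointer VM must rewrite every field in the heap that points at either object. A purely hypothetical sketch of the table-based case (this is not actual VM code):

```smalltalk
"Hypothetical object-table sketch: all references hold table indices,
 so exchanging the identities of two objects is just two slot writes."
becomeIndex: i with: j
	| tmp |
	tmp := table at: i.
	table at: i put: (table at: j).
	table at: j put: tmp
	"A direct-pointer VM must instead scan the whole heap and
	 rewrite every field referring to either object - hence the
	 O(heap size) cost of #become: on the standard VM."
```

The flip side, as noted above, is that every ordinary field access now pays the extra table lookup.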