Branch: refs/heads/Cog
Home: https://github.com/OpenSmalltalk/opensmalltalk-vm
Commit: 73f30ab892b238a732ce3e0e9a26388356e50737
https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/73f30ab892b238a732…
Author: David T. Lewis <lewis(a)mail.msen.com>
Date: 2021-08-24 (Tue, 24 Aug 2021)
Changed paths:
M src/plugins/VectorEnginePlugin/VectorEnginePlugin.c
Log Message:
-----------
Update VectorEnginePlugin
commit 86377f4de7ebd7682f049bef2b4557ac1b91b569
Author: Juan Vuletich <juan(a)jvuletich.org>
Date: Sun Aug 22 16:50:25 2021 -0300
Fix for an unlikely invalid access in VectorEnginePlugin: always pass contour as argument.
Hi Tim,
> On Aug 23, 2021, at 11:26 AM, OpenSmalltalk-Bot ***(a)***.***> wrote:
> > On 2021-08-22, at 11:55 PM, Marcel Taeumel ***(a)***.***> wrote:
> > I can confirm that compilation performance (e.g. Morph compileAll) is back to normal in cog.spur :-)
> On Raspberry Pi 4 v8 it compiles cleanly but appears to be ~10% slower on the tinybenchmarks. I know they're not exactly high-value tests, but the difference is noticeable.
I have reason not to believe this. In all the recent changes, no change has been made to the code generated for tinyBenchmarks. And I’m confident that the other changes made (save the now fixed regression) yield significant speed ups. Is there any other reason the timings could have changed?
_,,,^..^,,,_ (phone)
> tim
> --
> tim Rowledge; ***(a)***.***; http://www.rowledge.org/tim
Branch: refs/heads/Cog
Home: https://github.com/OpenSmalltalk/opensmalltalk-vm
Commit: 12b0d50876312a74c51be3a81232b55a4b3efb7f
https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/12b0d50876312a74c5…
Author: Eliot Miranda <eliot.miranda(a)gmail.com>
Date: 2021-08-21 (Sat, 21 Aug 2021)
Changed paths:
M spur64src/vm/cogit.h
M spur64src/vm/cogitARMv8.c
M spur64src/vm/cogitX64SysV.c
M spur64src/vm/cogitX64WIN64.c
M spur64src/vm/cointerp.c
M spur64src/vm/cointerp.h
M spur64src/vm/cointerpmt.c
M spur64src/vm/cointerpmt.h
M spur64src/vm/gcc3x-cointerp.c
M spur64src/vm/gcc3x-cointerpmt.c
M spurlowcode64src/vm/cogit.h
M spurlowcode64src/vm/cogitARMv8.c
M spurlowcode64src/vm/cogitX64SysV.c
M spurlowcode64src/vm/cogitX64WIN64.c
M spurlowcode64src/vm/cointerp.c
M spurlowcode64src/vm/cointerp.h
M spurlowcode64src/vm/gcc3x-cointerp.c
M spurlowcodesrc/vm/cogit.h
M spurlowcodesrc/vm/cogitARMv5.c
M spurlowcodesrc/vm/cogitIA32.c
M spurlowcodesrc/vm/cointerp.c
M spurlowcodesrc/vm/cointerp.h
M spurlowcodesrc/vm/gcc3x-cointerp.c
M spursista64src/vm/cogit.h
M spursista64src/vm/cogitARMv8.c
M spursista64src/vm/cogitX64SysV.c
M spursista64src/vm/cogitX64WIN64.c
M spursista64src/vm/cointerp.c
M spursista64src/vm/cointerp.h
M spursista64src/vm/gcc3x-cointerp.c
M spursistasrc/vm/cogit.h
M spursistasrc/vm/cogitARMv5.c
M spursistasrc/vm/cogitIA32.c
M spursistasrc/vm/cointerp.c
M spursistasrc/vm/cointerp.h
M spursistasrc/vm/gcc3x-cointerp.c
M spursrc/vm/cogit.h
M spursrc/vm/cogitARMv5.c
M spursrc/vm/cogitIA32.c
M spursrc/vm/cointerp.c
M spursrc/vm/cointerp.h
M spursrc/vm/cointerpmt.c
M spursrc/vm/cointerpmt.h
M spursrc/vm/gcc3x-cointerp.c
M spursrc/vm/gcc3x-cointerpmt.c
M src/vm/cogit.h
M src/vm/cogitARMv5.c
M src/vm/cogitIA32.c
M src/vm/cointerp.c
M src/vm/cointerp.h
M src/vm/gcc3x-cointerp.c
Log Message:
-----------
CogVM source as per VMMaker.oscog-eem.3047
Fix the compilation performance regression introduced by
VMMaker.oscog-eem.2994 (primitiveFlushCacheByMethod).
Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.3046.mcz
==================== Summary ====================
Name: VMMaker.oscog-eem.3046
Author: eem
Time: 21 August 2021, 6:06:25.913827 pm
UUID: 981441cf-e580-43c9-b601-3d7278b756f5
Ancestors: VMMaker.oscog-eem.3045
Fix the compilation performance regression introduced by VMMaker.oscog-eem.2994.
=============== Diff against VMMaker.oscog-eem.3045 ===============
Item was changed:
----- Method: CoInterpreterPrimitives>>primitiveFlushCacheByMethod (in category 'system control primitives') -----
primitiveFlushCacheByMethod
"The receiver is a compiledMethod. Clear all entries in the method lookup cache that
refer to this method, presumably because it has been redefined, overridden or removed.
+ Override to flush appropriate machine code caches also."
+ super primitiveFlushCacheByMethod.
+ cogit unlinkSendsTo: self stackTop andFreeIf: false!
- Override to flush the appropriate machine code state."
- self primitiveVoidVMStateForMethod!
A brief, ignorable experience report...
One idea for speeding up lookups is to make Selector a subclass of Symbol,
and to give each Selector several hash value fields when a Symbol becomes a
Selector. When a MethodDictionary is rehashed, a small set of hashes is
tried and the one with the fewest collisions is used for that particular
MethodDictionary.
Because of Spur, we can have several direct subclasses of MethodDictionary
differing only in which hash value is accessed. Just change the ClassID,
and the hash access method selects the proper hash value of the symbols.
As long as the hash accessor stays polymorphic, the PIC mechanics will
work for us.
So the question becomes: "Are there string/symbol hash functions where
this approach makes sense?" That is, is there enough difference between
hash functions that we can get close(r) to perfect hashes and get better
results by choosing one over another?
As I wait for Lulu to deliver Andres' book, I found some string hash
functions
[note:
https://azrael.digipen.edu/~mmead/www/Courses/CS280/HashFunctions-1.html
]
and took a first look at this, counting collisions for
	(hash \\ dictSize) + 1
or, more efficiently [note:
https://craftinginterpreters.com/optimization.html],
	(hash bitAnd: (dictSize - 1)) + 1
Early results:
vvv=========vvv
"Count hash collisions for all selectors in each of 1527 methodDicts"
"For each Class methodDict, which hash has the fewest collisions?"
HashCandidates globalCollisionsCounts. "Lower is Better"
#hash collisions = 5466
#identityHash collisions = 6534
#hash0 collisions = 5331
#hash1 collisions = 5301
#hash2 collisions = 5419
#hash3 collisions = 5442
#hash4 collisions = 5436
#hash5 collisions = 14630
#hash6 collisions = 5442
#hash7 collisions = 5436
#hash8 collisions = 18708
#hash9 collisions = 25344
#hash10 collisions = 23518
#(5466 6534 5331 5301 5419 5442 5436 14630 5442 5436 18708 25344 23518)
"Compare hashes for each MethodDict; lowest collisions wins"
HashCandidates globalHashWinCounts. "Higher is Better"
#hash win count = 664
#identityHash win count = 610
#hash0 win count = 672
#hash1 win count = 680
#hash2 win count = 662
#hash3 win count = 679
#hash4 win count = 666
#hash5 win count = 429
#hash6 win count = 679
#hash7 win count = 666
#hash8 win count = 350
#hash9 win count = 247
#hash10 win count = 263
#(664 610 672 680 662 679 666 429 679 666 350 247 263)
^^^=========^^^
One quick observation is that string hash is better than identityHash,
and hash1 (K&R) is better than the current string hash. Also, the hashes
used so far do not show much advantage over each other in the better
cases.
---
An unrelated idea is to "copy down" just the megamorphic methods to
subclasses, to speed up megamorphic lookups.
Neither of these ideas requires a JIT change or a VM recompile to
implement, just IDE changes.
Just playing around..
-KenD