Bytecode question: the send instruction

John M McIntosh johnmci at smalltalkconsulting.com
Tue May 13 18:22:19 UTC 2003


Well, back on March 8th, 2001 I looked into this (at least the cache
part), so I'll quote a part of my discussion with Dan:

"Now in mapPointersInObjectsFromto we call
mapInterpreterOops
which calls flushMethodCache.

Now I was thinking (watching snow fall is boring after a while): couldn't
we be smart and delete entries in the cache by comparing them against the
range of memory being moved? The trade-off is how expensive the check is,
and of course being sure it's correct.

So I moved flushMethodCache out of its location in mapInterpreterOops and
into mapPointersInObjectsFromto, and created a new routine (a sketch of
the idea appears after this quote) which checks whether the cached oops
fall within the range of interest; if they do, I clear that cache entry.

I ran this for a little while and found I had 1356 flush calls, which
resulted in only 856 cache entries being invalidated. That seems much
better than flushing all 512 entries in the method cache on each
incremental GC event. I also noted that in lookupInMethodCacheSelclass
the number of calls was about 50 million, with cache hits of 20M on
probe1, 16M on probe2, 11M on probe3, and 1.8M not found.

BTW, I looked at the atCache, but, mmm, since 'objects die young', I found
that over 70% of the at cache would be flushed at any given incremental GC
event (i.e. 70% of the time the cached objects were in new space)."
----------------
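For readers who want the idea in concrete form, here is a minimal C-style
sketch of the range-based flush described in the quote. The actual routine
lives in the interpreter source (written in Slang) and is not reproduced
here; the names flushMethodCacheFromTo and MethodCacheEntry, and the
three-field entry layout, are assumptions for illustration only.

    /* Hypothetical sketch of a range-based method cache flush.
       Entry layout and names are illustrative, not the VM's actual code. */
    #include <stddef.h>

    typedef unsigned long oop;

    typedef struct {
        oop selector;   /* cached selector oop        */
        oop klass;      /* cached receiver class oop  */
        oop method;     /* cached compiled method oop */
    } MethodCacheEntry;

    #define METHOD_CACHE_SIZE 512
    static MethodCacheEntry methodCache[METHOD_CACHE_SIZE];

    /* Invalidate only those entries whose cached oops lie inside
       [rangeStart, rangeEnd), e.g. new space during an incremental GC,
       instead of wiping all 512 entries. */
    static void flushMethodCacheFromTo(oop rangeStart, oop rangeEnd)
    {
        size_t i;
        for (i = 0; i < METHOD_CACHE_SIZE; i++) {
            MethodCacheEntry *e = &methodCache[i];
            int inRange =
                (e->selector >= rangeStart && e->selector < rangeEnd) ||
                (e->klass    >= rangeStart && e->klass    < rangeEnd) ||
                (e->method   >= rangeStart && e->method   < rangeEnd);
            if (inRange) {
                e->selector = 0;   /* mark the entry as empty */
                e->klass    = 0;
                e->method   = 0;
            }
        }
    }

The trade-off mentioned in the quote is visible here: each incremental GC
now scans and compares all 512 entries, in exchange for keeping the
entries that are still valid.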

The change made back then was to flush the cache by location, because
most flushes are the result of new-space (incremental) GC invocations,
and the number of methods in new space is very low. You will also note
that the 512-entry method cache had a 96.4% cache hit ratio.
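The probe1/probe2/probe3 counts quoted above refer to
lookupInMethodCacheSelclass trying up to three cache slots before giving
up. The sketch below shows one plausible shape of such a lookup; the hash
function, the reprobe scheme, and the names lookupInCache and CacheEntry
are assumptions for illustration, not the VM's actual code.

    /* Hypothetical sketch of a selector/class cache lookup with three
       probes. The hash and reprobe scheme are illustrative assumptions. */
    #include <stddef.h>

    typedef unsigned long oop;

    typedef struct {
        oop selector;
        oop klass;
        oop method;
    } CacheEntry;

    #define CACHE_SIZE 512            /* must be a power of two */
    static CacheEntry cache[CACHE_SIZE];

    /* Return the cached method for (selector, klass), or 0 on a miss.
       Probe up to three slots derived from the same hash, as the
       probe1/probe2/probe3 hit counts above suggest. */
    static oop lookupInCache(oop selector, oop klass)
    {
        unsigned long hash = selector ^ klass;
        int probe;
        for (probe = 0; probe < 3; probe++) {
            /* shift the hash a little for each reprobe (assumed scheme) */
            size_t index = (size_t)((hash >> probe) & (CACHE_SIZE - 1));
            CacheEntry *e = &cache[index];
            if (e->selector == selector && e->klass == klass)
                return e->method;     /* hit on this probe */
        }
        return 0;                     /* miss: fall back to the full
                                         dictionary lookup */
    }

Plugging in the quoted numbers: roughly 20M + 16M + 11M = 47M of the
roughly 50 million lookups hit one of the three probes, with about 1.8M
falling through to the full dictionary lookup, which is where the
roughly 96% hit ratio comes from.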

Since you can run the Squeak VM in Squeak, I'd think you could easily
build an instrumented VM that collects bytecode statistics to compare
to the numbers in the Green Book.
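As a rough illustration of what such instrumentation might look like, the
sketch below tallies executed bytecodes and reports how many were sends.
The recordBytecode hook, the isSendBytecode test, and the >= 176 threshold
are placeholders; a real instrumented VM would hook the interpreter's own
dispatch loop and use its actual bytecode encoding.

    /* Hypothetical sketch of bytecode-frequency instrumentation. */
    #include <stdio.h>

    static unsigned long bytecodeCounts[256];  /* per-bytecode histogram */
    static unsigned long sendCount;
    static unsigned long totalCount;

    /* Placeholder: which bytecodes encode message sends depends on the
       VM's bytecode set and is not spelled out here. */
    static int isSendBytecode(unsigned char b)
    {
        return b >= 176;   /* assumption for illustration only */
    }

    /* Call this from the interpreter's dispatch for every bytecode. */
    static void recordBytecode(unsigned char b)
    {
        bytecodeCounts[b]++;
        totalCount++;
        if (isSendBytecode(b))
            sendCount++;
    }

    /* Call this at shutdown to report the collected statistics. */
    static void reportStatistics(void)
    {
        printf("bytecodes executed: %lu, sends: %lu (%.1f%%)\n",
               totalCount, sendCount,
               totalCount ? 100.0 * sendCount / totalCount : 0.0);
    }

Figures gathered this way would be directly comparable to the Green
Book's claim that approximately every third instruction is a send.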

Also note that on Sun Dec 9, 2001 Scott A Crosby
<crosby at qwes.math.cmu.edu> wrote
"New method cache, 30% faster macrobenchmarks and ineffeciencies."

One should look in the archives for the details.

Mmm, now if I recall correctly, Tim, Ian and I looked (yes, in September
2002) into just the caching changes and couldn't generate numbers that
justified the complexity (or the extra memory) of the change required.

On Tuesday, May 13, 2003, at 07:01  AM, Alexandre Bergel wrote:

> Hello!
>
> While reading the Green Book, I have some questions related to the send
> bytecode (page 210):
>  - "Approximately every third instruction is a message send and sends
> requiring dictionary lookups occur every 6.667 bytecodes. Of the sends
> needing dictionary searches, 36.64% invoked primitives, and the rest
> resulted in the execution of a Smalltalk method which, along with
> process switches, accounted for a context switch every 6.50 bytecodes."
>  - "... 78.92% are arithmetic and logical operations..."
>
> This article was written in 1982; I would like to know if these
> proportions still hold nowadays.
>
> Somewhat related to this, what is the effect of the method cache? On
> average, what percentage of methods are found in the cache?
>
> I heard once that only 1/10th of message sends are really polymorphic,
> and the rest are monomorphic.
>
> Cheers,
> Alexandre
>
> --  
> _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
> Alexandre Bergel  http://www.iam.unibe.ch/~bergel
> ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
>
>
>
--
===========================================================================
John M. McIntosh <johnmci at smalltalkconsulting.com> 1-800-477-2659
Corporate Smalltalk Consulting Ltd.  http://www.smalltalkconsulting.com
===========================================================================


