Hi,
I recently published a Chronology-Core version that uses the high-resolution
clock.
On my 2.7GHz core i5 MBP (2015) I get this:
Time highResClockTicksPerMillisecond
2699824
=> OK, consistent with 2.7 GHz (2,699,824 ticks/ms ≈ 2.7e9 ticks/s)
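The sanity check above is just unit arithmetic; here it is spelled out in Python (plain arithmetic, no Pharo involved):

```python
ticks_per_ms = 2699824          # Time highResClockTicksPerMillisecond
ticks_per_s = ticks_per_ms * 1000
print(ticks_per_s / 1e9)        # ~2.7, i.e. about 2.7 GHz as expected
```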
Time highResClock - Time highResClock * 1000000 // Time highResClockTicksPerMillisecond.
-578 -563
=> Huh, invoking the primitive takes that long (500 to 600 nanoseconds;
negative because the first reading is taken before the second)?
But I can correct for it. Let's try:
[10 factorial] bench.
'14,000,000 per second. 71.2 microseconds per run.'
=> this is the reference result
(1 to: 10) collect: [:i |
    | ticks |
    ticks := Time highResClock.
    10 factorial.
    Time highResClock - ticks
        + (Time highResClock - Time highResClock) "correction"
        * 1000000 // Time highResClockTicksPerMillisecond "get nanoseconds"].
#(1309 247 88 84 74 69 71 71 71 69)
=> OK, the first runs are a bit long, but then we get ~70 ns per run, matching the reference
#(1977 191 143 148 142 120 122 122 117 120)
=> Oops??? The second run gives different results???
#(2239 180 143 143 142 116 117 117 117 114)
=> and the third is about the same as the second...
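For what it's worth, the same experiment can be reproduced outside Pharo. This is an analogous sketch in Python, with `time.perf_counter_ns` standing in for `Time highResClock` and the same back-to-back-readings correction term (the function names are mine, not from Chronology-Core):

```python
import math
import time

def clock_overhead_ns():
    """Estimate the cost of one clock read by taking two
    back-to-back readings, as in the Smalltalk correction term."""
    t1 = time.perf_counter_ns()
    t2 = time.perf_counter_ns()
    return t2 - t1

def timed_runs(work, n=10):
    """Time `work` n times, subtracting the clock-read overhead
    from each measurement."""
    results = []
    for _ in range(n):
        start = time.perf_counter_ns()
        work()
        elapsed = time.perf_counter_ns() - start
        results.append(elapsed - clock_overhead_ns())
    return results

print(timed_runs(lambda: math.factorial(10)))
```

On my machine the same pattern shows up: the first iterations of each fresh run are noticeably longer than the steady-state ones, which is what makes me suspect warm-up effects rather than the clock itself.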
Any idea how to explain that?