[Vm-dev] Time primitive precision versus accuracy (was: [Pharo-dev] strange time / delay problems on pharo-contribution-linux64-4.ci.inria.fr)

David T. Lewis lewis at mail.msen.com
Thu Mar 6 00:59:17 UTC 2014


On Tue, Mar 04, 2014 at 03:13:26PM -0800, Eliot Miranda wrote:
>  
> On Tue, Mar 4, 2014 at 2:37 PM, Sven Van Caekenberghe <sven at stfx.eu> wrote:
> >
> > There is a big difference in how DateAndTime class>>#now works in 2.0 vs
> > 3.0. The former uses #millisecondClockValue while the latter uses
> > #microsecondClockValue. Furthermore 2.0 does all kind of crazy stuff with a
> > loop and Delays to try to improve the accuracy, we threw all that out.
> >
> > Yeah, I guess the clock is just being updated slowly, maybe under heavy
> > load. The question is where that happens. I think in the virtualised OS.
> >
> 
> That makes sense.  In the VM the microsecond clock is updated on every
> heartbeat and the heartbeat should be running at about 500Hz.  Can anyone
> with hardware confirm that in 3.0 the time on linux does indeed increase at
> about 500Hz?
> 
> Note that even in 2.0 the time is being derived from the same basis.  In
> Cog, the effective time basis is the 64-bit microsecond clock maintained by
> the heartbeat and even the secondClock and millisecondClock are derived
> from this.  So it's still confusing why there should be such a difference
> between 2.0 and 3.0.
> 

The separate thread for timing may be good for profiling, but it is not such
a good idea for the time primitives. When the image asks for "time now" it
means now, not whatever time it was when the other thread last took a sample.
By reporting the sampled time, the primitives deliver millisecond accuracy (or
worse) expressed with microsecond precision.

If you collect primUTCMicrosecondClock in a loop with the primitive as
implemented in Cog, then plot the result, you get a staircase with long
runs of the same time value, and sudden jumps every one and a half milliseconds
or so. The equivalent plot for the primitive as implemented in the interpreter
VM is a smoothly increasing slope.

To illustrate, if you run the example below with Cog to collect as many
"time now" data points as possible within a one second time period, you find
a large number of data points at microsecond precision, but only a small number
of distinct time values within that period.

  "Cog VM"
  oc := OrderedCollection new.
  now := Time primUTCMicrosecondClock.
  [(oc add: Time primUTCMicrosecondClock - now) < 1000000] whileTrue.
  oc size. ==> 2621442
  oc asSet size. ==> 333

In contrast, an interpreter VM with no separate timer thread collects fewer samples
(because it is slower) but the sample values are distinct and increase monotonically
with no stair stepping effect.

  "Interpreter VM"
  oc := OrderedCollection new.
  now := Time primUTCMicrosecondClock.
  [(oc add: Time primUTCMicrosecondClock - now) < 1000000] whileTrue.
  oc size. ==> 246579
  oc asSet size. ==> 246579

Dave
