[Vm-dev] Time primitive precision versus accuracy (was: [Pharo-dev] strange time / delay problems on pharo-contribution-linux64-4.ci.inria.fr)

David T. Lewis lewis at mail.msen.com
Thu Mar 6 14:01:58 UTC 2014


On Thu, Mar 06, 2014 at 01:39:24PM +0100, Max Leske wrote:
>  
> 
> On 06.03.2014, at 13:00, vm-dev-request at lists.squeakfoundation.org wrote:
> 
> > Date: Wed, 5 Mar 2014 19:59:17 -0500
> > From: "David T. Lewis" <lewis at mail.msen.com>
> > Subject: [Vm-dev] Time primitive precision versus accuracy (was:
> > 	[Pharo-dev]	strange time / delay problems on
> > 	pharo-contribution-linux64-4.ci.inria.fr
> > To: Squeak Virtual Machine Development Discussion
> > 	<vm-dev at lists.squeakfoundation.org>
> > Message-ID: <20140306005917.GA6980 at shell.msen.com>
> > Content-Type: text/plain; charset=us-ascii
> > 
> > On Tue, Mar 04, 2014 at 03:13:26PM -0800, Eliot Miranda wrote:
> >> 
> >> On Tue, Mar 4, 2014 at 2:37 PM, Sven Van Caekenberghe <sven at stfx.eu> wrote:
> >>> 
> >>> There is a big difference in how DateAndTime class>>#now works in 2.0 vs
> >>> 3.0. The former uses #millisecondClockValue while the latter uses
> >>> #microsecondClockValue. Furthermore, 2.0 does all kinds of crazy stuff with a
> >>> loop and Delays to try to improve the accuracy; we threw all of that out.
> >>> 
> >>> Yeah, I guess the clock is just being updated slowly, maybe under heavy
> >>> load. The question is where that happens; I think it is in the virtualised OS.
> >>> 
> >> 
> >> That makes sense.  In the VM, the microsecond clock is updated on every
> >> heartbeat, and the heartbeat should be running at about 500Hz.  Can anyone
> >> with hardware confirm that in 3.0 the time on Linux does indeed increase at
> >> about 500Hz?
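> >> 
> >> Something along these lines (a rough workspace sketch, using the same
> >> primUTCMicrosecondClock primitive as in the examples further down) should
> >> show the update interval directly:
> >> 
> >>  "Busy-wait for the microsecond clock to change, 100 times over, and average
> >>   the interval between updates; about 2000 microseconds would correspond to
> >>   a 500Hz heartbeat."
> >>  | t0 t1 intervals |
> >>  intervals := OrderedCollection new.
> >>  t0 := Time primUTCMicrosecondClock.
> >>  [intervals size < 100] whileTrue: [
> >>      [(t1 := Time primUTCMicrosecondClock) = t0] whileTrue.
> >>      intervals add: t1 - t0.
> >>      t0 := t1].
> >>  (intervals inject: 0 into: [:sum :each | sum + each]) / intervals size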
> >> 
> >> Note that even in 2.0 the time is being derived from the same basis.  In
> >> Cog, the effective time basis is the 64-bit microsecond clock maintained by
> >> the heartbeat, and even the secondClock and millisecondClock are derived
> >> from this.  So it's still confusing why there should be such a difference
> >> between 2.0 and 3.0.
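> >> 
> >> (Roughly, and ignoring wrap-around and local-time offsets, the relationship
> >> is just a change of scale; a sketch at image level:)
> >> 
> >>  | usecs |
> >>  usecs := Time primUTCMicrosecondClock.
> >>  usecs // 1000.      "millisecond resolution is this value scaled down"
> >>  usecs // 1000000.   "second resolution scales it down further"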
> >> 
> > 
> > The separate thread for timing may be good for profiling, but it is not such
> > a good idea for the time primitives. When the image asks for "time now", it
> > means now, not whatever time it was when the other thread last took a sample.
> > By reporting the sampled time, we get millisecond accuracy (or worse) reported
> > with microsecond precision.
> > 
> > If you collect primUTCMicrosecondClock values in a loop with the primitive as
> > implemented in Cog and then plot the result, you get a staircase: long runs
> > of the same time value, with sudden jumps every one and a half milliseconds
> > or so. The equivalent plot for the primitive as implemented in the interpreter
> > VM is a smoothly increasing slope.
> > 
> > To illustrate, if you run the example below with Cog to collect as many
> > "time now" data points as possible within a one-second period, you find
> > a large number of data points at microsecond precision, but only a small
> > number of distinct time values within that period.
> > 
> >  "Cog VM"
> >  oc := OrderedCollection new.
> >  now := Time primUTCMicrosecondClock.
> >  [(oc add: Time primUTCMicrosecondClock - now) < 1000000] whileTrue.
> >  oc size. ==> 2621442
> >  oc asSet size. ==> 333
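> > 
> > As a rough follow-up (reusing the oc collected in the Cog snippet above), the
> > average size of the staircase steps can be estimated from the gaps between the
> > distinct values:
> > 
> >  "Cog VM, continuing from the snippet above"
> >  distinct := oc asSet asSortedCollection asArray.
> >  gaps := (2 to: distinct size) collect: [:i | (distinct at: i) - (distinct at: i - 1)].
> >  (gaps inject: 0 into: [:sum :each | sum + each]) / gaps size. "average step, in microseconds"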
> > 
> > In contrast, an interpreter VM with no separate timer thread collects fewer samples
> > (because it is slower), but the sample values are distinct and increase monotonically
> > with no stair-stepping effect.
> > 
> >  "Interpreter VM"
> >  oc := OrderedCollection new.
> >  now := Time primUTCMicrosecondClock.
> >  [(oc add: Time primUTCMicrosecondClock - now) < 1000000] whileTrue.
> >  oc size. ==> 246579
> >  oc asSet size. ==> 246579
> 
> Wow, I didn't know that.
> 
> But can that account for a delay not being executed (or being terminated prematurely)?
> In the example I posted, the 10-second delay takes no time at all (only second precision,
> unfortunately, because of the division by 1000).
> 

No, this is probably not related to the problem that you reported with delay
processing in Pharo 3.0. The time values reported by primUTCMicrosecondClock
are perfectly valid. I am just discussing how the accuracy of those values
might be improved, particularly in light of the implied precision.

The image needs to behave reasonably with the time values reported by the
primitive, regardless of their accuracy or precision.
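
For example (a purely hypothetical illustration, not existing image code), anything
that times a fast operation has to tolerate a zero difference between two reads,
because both reads can fall within the same heartbeat sample:

 "Hypothetical workspace sketch"
 | t0 t1 elapsed |
 t0 := Time primUTCMicrosecondClock.
 3 + 4.
 t1 := Time primUTCMicrosecondClock.
 elapsed := (t1 - t0) max: 1. "clamp to avoid a zero duration in rate calculations"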

Dave


