[Vm-dev] How does Time>millisecondClockValue get a resolution of 1 millisecond?

Eliot Miranda eliot.miranda at gmail.com
Wed Aug 8 21:50:44 UTC 2012

Hi Louis,

On Wed, Aug 8, 2012 at 1:12 PM, Louis LaBrunda <Lou at keystone-software.com> wrote:

> Hi VM Guys,
> How does Squeak's Time>millisecondClockValue get a resolution of 1
> millisecond?  It is primitive: 135.  I thought it was based upon an OS
> function that kept a millisecond clock from when the OS was booted like in
> my case for Windows GetTickCount.  The resolution of the GetTickCount
> function is limited to the resolution of the system timer, which is
> typically in the range of 10 milliseconds to 16 milliseconds.
> On my machine with VA Smalltalk the resolution seems to be about 15
> milliseconds.  Yet, in Squeak it is 1 millisecond.  So it would seem the
> Squeak VM is using something else.

Well, things are different across platforms, and different between the Cog
and the Interpreter VMs.  But on Windows millisecond time is derived from
timeGetTime, which answers milliseconds since Windows booted.  The
difference between Cog and the Interpreter is that the Interpreter calls
timeGetTime directly to answer milliseconds, whereas Cog updates the time
in a background thread every one or two milliseconds and answers the saved
value.  So you may see the effective resolution in Cog be only 2
milliseconds, not 1.


> Lou
> -----------------------------------------------------------
> Louis LaBrunda
> Keystone Software Corp.
> SkypeMe callto://PhotonDemon
> mailto:Lou at Keystone-Software.com http://www.Keystone-Software.com
