[squeak-dev] Time>>eventMillisecondClock and event timestamps

Eliot Miranda eliot.miranda at gmail.com
Wed Sep 16 23:37:06 UTC 2020


Hi Tim,


> On Sep 16, 2020, at 10:36 AM, tim Rowledge <tim at rowledge.org> wrote:
> 
> 
> 
>> On 2020-09-15, at 2:07 PM, Eliot Miranda <eliot.miranda at gmail.com> wrote:
>> 
>> 
>> 
>> So why don't we
>> - modify the VM to stamp events with the utc second clock, 
> 
> The one possible problem I can foresee is if any OS event stuff we try to talk to expects timestamps consistent with its own view; imagine getting some event that needs to be replied to with everything the same except for one field updated etc. If we've stuck our own timestamp in then it could bollix things up. No idea if such events actually exist outside the realm of ancient, faintly remembered, RISC OS cases.

Well, having the vm insert the timestamp into the event means that the timestamp can be accurate.

What we have now with the proposed changes is that the image timestamps events *immediately after the image retrieves the event via primGetNextEvent:*.  This could be significantly later than when the event was delivered to the vm, and quite different from the timestamp, if any, that the GUI applied to the event.

My proposal merely determines the representation of the timestamp in the image.  If, for example, the OS timestamps an event with the time since system startup, then my proposal would require that we obtain the time of system startup and stamp the event delivered up to Smalltalk with the event timestamp plus the startup time, all expressed as microseconds from the epoch.  Mapping back to the GUI’s timestamp is therefore trivial.
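
To make the arithmetic concrete, here is a minimal sketch in Smalltalk, assuming the OS stamps events in milliseconds since system startup; osStartupUtcMicroseconds and osEventMilliseconds are hypothetical placeholders, not existing selectors:

	"Sketch only: map an OS event timestamp (milliseconds since system
	 startup) into utc microseconds since the epoch."
	| eventUtcMicroseconds |
	eventUtcMicroseconds := osStartupUtcMicroseconds + (osEventMilliseconds * 1000)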

So using utc microseconds is not the issue.  The issue is whether we derive the timestamp from the event itself, or we timestamp the event ourselves.  If the GUI timestamps events then overriding that timestamp in the image is incorrect.  If the GUI does not timestamp events then having the vm supply a timestamp at the earliest opportunity gives us more accurate timestamps.

So in fact your concern makes the case for the vm applying the timestamp:
- whether or not the GUI timestamps events is something platform-dependent that the vm must know about, and is in the best position to know about
- representing the event timestamp as utc microseconds does nothing other than apply a scale and an offset to the event’s timestamp.  The event’s actual timestamp is easily reconstructed given the scale and the offset used to map it to microseconds from the epoch; see the sketch below.
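
Again just a sketch, with the same hypothetical names as above; recovering the GUI’s original timestamp is the inverse of the mapping:

	"Sketch only: recover the GUI's original millisecond timestamp from
	 the utc microsecond stamp delivered to the image."
	| originalOsMilliseconds |
	originalOsMilliseconds := (eventUtcMicroseconds - osStartupUtcMicroseconds) // 1000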


> *IF* this seems to have any plausibility, then we could stick with using the OS timestamp for the incoming events, but derive our prim 135 tick value from
> 
> a) catching the first event, reading its timestamp, deriving an offset from our uSec tick, saving that and then doing the math anytime someone wants a prim 135 value, or
> 
> b) finding out which OS time api is being used by the OS to stamp events; it really should be documented. Of course, we have plenty of experience of OS doc being... imprecise.
> 
> tim
> --
> tim Rowledge; tim at rowledge.org; http://www.rowledge.org/tim
> Strange OpCodes: PIC: Permute Instruction Codes
> 
> 
> 
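
FWIW, a sketch of your (a), with hypothetical names (firstEventOsMilliseconds stands for the timestamp read from the first event, eventTickOffset for the cached offset), assuming Time utcMicrosecondClock answers microseconds since the epoch:

	"Sketch only: cache an offset between our microsecond clock (taken as
	 milliseconds) and the OS event timestamp scale when the first event
	 arrives..."
	eventTickOffset ifNil:
		[eventTickOffset := (Time utcMicrosecondClock // 1000) - firstEventOsMilliseconds].
	"...then answer a prim 135 style value consistent with the OS event
	 timestamps whenever one is wanted."
	^(Time utcMicrosecondClock // 1000) - eventTickOffset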

