John,
First, thanks for taking the time; your feedback is informative as always.
On Jun 6, 2012, at 10:07 PM, John McIntosh wrote:
I've resisted responding, but I'm way less busy tonight.
Ok, in general this applies to pre-Cog, as I can't quite comment on how Cog works.
After most things are killed, there is really only the Morphic polling loop running. It attempts to wake up about 50 times a second to service Morphic steps, collect VM events, and turn them into Morphic events. That of course takes CPU.
Now, when the Morphic poll loop goes to sleep by scheduling a variable-length delay to meet the 50-times-a-second rate, what happens is what Igor alluded to: some other process has to run, otherwise the VM will terminate with no processes to run.
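(To make the scheduling concrete, here is a minimal sketch in C of the variable-length-delay arithmetic described above. The names and the 20 ms cycle constant are hypothetical illustrations, not the actual Squeak sources; the real logic lives in the image's Delay/Morphic code.)

```c
/* Hypothetical sketch: given when the current poll cycle started and the
   time now, compute how long to sleep so cycles begin every CYCLE_MS
   milliseconds (1000 ms / 50 cycles = 20 ms). */
#define CYCLE_MS 20

static long sleep_for_next_cycle(long cycle_start_ms, long now_ms)
{
    long elapsed = now_ms - cycle_start_ms;   /* work done this cycle */
    long remaining = CYCLE_MS - elapsed;      /* time left in the cycle */
    return remaining > 0 ? remaining : 0;     /* overran: don't sleep at all */
}
```

The point is that the delay length varies each cycle: the longer the step/event work took, the shorter the sleep, and if the work overran the 20 ms budget there is no sleep at all.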
This process is the lowest-level idle process. What it does is get passed a FAKE millisecond time. Note that in the previous century I attempted to pass how long to sleep, since the Delay logic knows that... but we found out after it was pushed to the Squeak change stream that there was a deadlock... Oops.
Later I realized we know in the VM when the next wakeup time is.
So in the VM, when it calls the primitive to do the idle-time sleep, we can grab the next known wakeup time and sleep until then, or until some interrupt (a file system, socket, or UI input event) wakes us up early.
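(A hedged sketch of what such an idle sleep can look like on unix. The function name and the signal-pipe mechanism are assumptions for illustration; the actual implementation is in the VM's platform support code. The idea is a bounded sleep that an I/O or UI event can cut short:)

```c
#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>

/* Hypothetical: sleep until next_wakeup_ms (the next scheduled Delay),
   or until interrupt_fd becomes readable (e.g. a pipe that event sources
   write a byte to). Returns 1 if woken early by activity, 0 otherwise. */
static int idle_sleep_until(long now_ms, long next_wakeup_ms, int interrupt_fd)
{
    long remaining = next_wakeup_ms - now_ms;
    if (remaining <= 0)
        return 0;                          /* wakeup already due: don't sleep */

    struct timeval tv;
    tv.tv_sec  = remaining / 1000;
    tv.tv_usec = (remaining % 1000) * 1000;

    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(interrupt_fd, &fds);

    /* select() returns > 0 if interrupt_fd became readable before timeout */
    return select(interrupt_fd + 1, &fds, NULL, NULL, &tv) > 0;
}
```

This matches the description above: the sleep is capped at the next known Delay wakeup, yet a pending file, socket, or UI event ends it immediately.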
That sure seems to explain what's going on: my 50 ms step time results in a significant fraction of the expected updates not occurring, and it sounds like they essentially get missed/ignored unless the VM is awake due to other duties. I played around with various step times from 50-200 ms and the behavior matches your explanation (i.e. the Cog VM isn't terribly different in this regard?). At 200 ms the idle slowdown nearly disappears.
Then on unix we usually run the async file-handle logic to deal with pending semaphores for sockets etc. That all sounds fairly rational, until you instrument it and discover things like being asked to sleep while a delay a millisecond or two in the past still needs to be serviced... You'll need to look at the platform support code to understand what happens, 50 times a second.
I think for Cog there is also a high-priority millisecond tick process running to update the millisecond/microsecond clock? That spins the CPU dial too.
So I guess that leaves me with one more question for John:
Are there any gotchas that come to mind re: increasing the frequency of the pre-Cog VM timer, aside from system overhead?
And a question related to Cog:
Depending on John's answer, and assuming the Cog VM is similar re: timing architecture, how difficult would it be to allow the image to dynamically set the timer frequency?
Finally, one specific to the Event-based Cog for Android:
Would it be possible to dynamically enable/disable timer-based operation? (I haven't yet gotten the code in question running on Android, but I'm guessing that the experience won't be fun for non-interactive code.)
I'm not asking anyone to commit to anything, rather just trying to get a handle on the feasibility / order of magnitude effort related to the above questions. I've already talked myself into a project that involves some VM/plugin hacking, and just want to understand how much deeper I'd be digging the hole if I decide I really want this :-)
Thanks, Phil