[squeak-dev] Re: Delay time question

Robert F. Scheer rfscheer at speakeasy.net
Thu Feb 28 09:27:35 UTC 2008


On Wed, 2008-02-27 at 23:38 -0800, Andreas Raab wrote:
> Robert F. Scheer wrote:
> > It's clear that the robot program architecture must change drastically
> > as a result.  Currently, there are perhaps 10 concurrent processes
> > looping 50 times per second servicing various serial lines carrying
> > sensor information or actuating motors and so forth.  Although real time
> > processing isn't required, it is mandatory that all processes and the
> > main decision loop complete every 20ms, no matter what.
> 
> I am not sure why your architecture has to change dramatically as a 
> result. If your main problem is to make sure that a loop is done
> every 20msecs and you need to adjust for statistical delay variations, 
> you can simply compute the actual time the wait took and do something like:
> 
>   nextDesiredTick := Time millisecondClockValue + 20.
>   [true] whileTrue:[
>     self doControl.
>     "note: for very, very long doControl the waitTime could be negative.
>      you have to decide whether it's better to skip or to run an extra
>      doControl in this case."
>     waitTime := nextDesiredTick - Time millisecondClockValue.
>     (Delay forMilliseconds: waitTime) wait.
>     "Next desired tick is twenty msecs from last one"
>     nextDesiredTick := nextDesiredTick + 20.
>   ].
> 
> Not only will this adjust to statistical variations in delay but it will 
> also take into account the amount of time spent in doControl, including 
> eventual time spent in other processes due to preemption. In other 
> words, the above is as close to 20msecs as it possibly gets.
> 

Yes, I see this could keep the average loop time very close to the
desired value over many cycles, but my main problem is in the serial i/o
loops.
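
Before I get to that, one aside on the negative waitTime case your
comment mentions: I assume the "skip" variant would be something like
this sketch, clamping the wait at zero and resynchronizing instead of
running extra doControls back to back (my reading only, untested):

  waitTime := nextDesiredTick - Time millisecondClockValue.
  waitTime > 0
    ifTrue: [(Delay forMilliseconds: waitTime) wait]
    ifFalse: ["doControl overran one or more ticks; resynchronize
      to now instead of trying to catch up"
      nextDesiredTick := Time millisecondClockValue].
  nextDesiredTick := nextDesiredTick + 20.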

Picture 5 serial lines.  Each has a packet of a few to a few dozen
bytes arriving every 20ms, although the lines are not synchronized with
one another.  The data arrives on schedule to within a few microseconds
every cycle, and it must be captured.

I was trying to use delays of around 15ms to prevent the Squeak i/o
processes from continuously tying up compute cycles trying to read the
serial lines when no data was expected.  Read a packet, delay 15ms, then
go into a tighter loop, trying every 1ms, until the next packet had been
read.
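
Roughly, each i/o process looked like this (readPacket here is a
stand-in for my actual packet-assembly code; it answers true once a
complete packet has been read):

  [true] whileTrue: [
    "coarse wait: no data is expected during most of the 20ms period"
    (Delay forMilliseconds: 15) wait.
    "fine polling: retry every 1ms until the packet arrives"
    [self readPacket] whileFalse: [
      (Delay forMilliseconds: 1) wait]].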

Now, I know this is a bad scheme.  Even a 1ms programmed delay could
become longer than 20ms sometimes.  So let's get rid of all programmed
delays inside the serial i/o loops.  

Well, don't we now have a situation where the CPU is tied up 100% and
the latency to switch between processes is very poor?
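
The best I can come up with without programmed delays is a bare polling
loop that yields, something like this (readPacket is the same stand-in
as above):

  [true] whileTrue: [
    self readPacket ifFalse: [
      "nothing ready yet; hand the CPU to other runnable processes
       at the same priority instead of spinning flat out"
      Processor yield]].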

I probably don't understand the primitive serial methods well enough.
They seem not to block while awaiting input, so reading requires
continuous polling.  Is there a blocking method?
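
For what it's worth, this is how I currently read the non-blocking
behavior (assuming I have SerialPort right; the port number and buffer
size are just examples):

  port := SerialPort new openPort: 0.
  buffer := ByteArray new: 64.
  count := port readInto: buffer startingAt: 1.
  count = 0 ifTrue: [
    "no bytes were waiting; the read returns immediately and the
     only option seems to be polling again later"].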

> Also, keep in mind that no amount of C coding will increase the accuracy 
> of your OS' default timer. But if you happen to have a high-quality 
> timer it would be quite trivial to link this timer to the TimerSemaphore 
> and use it instead of whatever the VM gives you.
> 
> Cheers,
>    - Andreas