[Vm-dev] Unix heartbeat thread vs itimer

Eliot Miranda eliot.miranda at gmail.com
Mon Mar 20 18:50:32 UTC 2017


Hi Ben,

On Mon, Mar 20, 2017 at 9:43 AM, Ben Coman <btc at openinworld.com> wrote:

>
>
> On Sat, Jan 7, 2017 at 3:23 AM, Eliot Miranda <eliot.miranda at gmail.com>
> wrote:
>
>> First of all, for the heartbeat thread to work reliably it must run at
>> higher priority than the thread running Smalltalk code.
>>
>
> After reading pages 11-14 of [1] , page 11 of [2] , and then [3] I was
> wondering if the dynamic priorities of the Linux 2.6 scheduler would cause
> a sleeping heartbeat thread to *effectively* remain a higher priority than
> the Smalltalk execution thread.
>
> [1] http://www.isical.ac.in/~mandar/os/scheduling-linux.pdf
> [2] https://www.cs.columbia.edu/~smb/classes/s06-4118/l13.pdf
> [3] http://stackoverflow.com/questions/25577403/how-are-dynamic-priorities-of-threads-computed-in-linux-2-6-x
>
>
>> This is because its job is to cause Smalltalk code to break out at
>> regular intervals to check for events.  If the Smalltalk code is
>> compute-intensive then it will prevent the heartbeat thread from running
>> unless the heartbeat thread is running at a higher priority, and so it
>> will be impossible to receive input keys, etc. (Note that if event
>> collection was in a separate thread it would suffer the same issue;
>> compute intensive code would block the event collection thread unless it
>> was running at higher priority).
>>
>
> So I contrived the experiment below.  I'm not sure it properly represents
> what is happening with the threaded heartbeat, but it still may make
> interesting discussion.
>
>
> $vi dynamicPriorityHeartbeat.c
>
> #include <stdio.h>
> #include <sys/time.h>
> #include <stdlib.h>
> #include <time.h>
> #include <unistd.h>   /* usleep() */
> #include <pthread.h>
>
> struct timespec programStartTime;
> volatile int heartbeat;   /* written by the heartbeat thread, read by the main thread */
> int heartbeatCount;
>
>
> double elapsed(struct timespec start, struct timespec end)
> {
>         struct timespec temp;
>         double elapsed;
>         temp.tv_sec = end.tv_sec-start.tv_sec;
>         temp.tv_nsec = end.tv_nsec-start.tv_nsec;
>         if (temp.tv_nsec < 0)
>         {       temp.tv_nsec += 1000 * 1000 * 1000;
>                 temp.tv_sec -= 1;
>         }
>         elapsed = temp.tv_nsec / 1000 / 1000 ;
>         elapsed = temp.tv_sec + elapsed / 1000;
>         return elapsed;
> }
>
>
> void *heart(void *arg)
> {
>     int i;
>     for(i=0; i<=heartbeatCount; i++)
>     {
>         printf("Heartbeat %02d ", i);
>         heartbeat=1;
>         usleep(500000);
>     }
>     heartbeat=0;
>     exit(0);
> }
>
>
> void runSmalltalk()
> {
>     struct timespec heartbeatTime;
>     double intenseComputation;
>
>     intenseComputation=0;
>     while(1)
>     {
>         if(!heartbeat)
>         {
>             intenseComputation += 1;
>         }
>         else
>         {
>             heartbeat=0;
>             clock_gettime(CLOCK_REALTIME, &heartbeatTime);
>             printf("woke at time %f ", elapsed(programStartTime, heartbeatTime));
>             printf("intenseComputation=%f\n", intenseComputation);
>         }
>     }
> }
>
>
> int main(int argc, char *argv[])
> {
>     clock_gettime( CLOCK_REALTIME, &programStartTime);
>     if(argc<2)
>     {       printf("Usage: %s heartbeatCount\n", argv[0]);
>             exit(1);
>     }
>     heartbeatCount = atoi(argv[1]);
>     heartbeat=0;
>
>     pthread_t heartbeatThread;
>     if(pthread_create(&heartbeatThread, NULL, heart, NULL))
>     {
>         fprintf(stderr, "Error creating thread\n");
>         return 1;
>     }
>
>     runSmalltalk();
> }
> /////////////////////////////////////////
>
>
> $ gcc dynamicPriorityHeartbeat.c -lpthread && ./a.out 50
>
> Heartbeat 00 woke at time 0.000000 intenseComputation=19006.000000
> Heartbeat 01 woke at time 0.500000 intenseComputation=124465517.000000
> Heartbeat 02 woke at time 1.000000 intenseComputation=248758350.000000
> Heartbeat 03 woke at time 1.500000 intenseComputation=373112100.000000
> Heartbeat 04 woke at time 2.000000 intenseComputation=495566843.000000
> Heartbeat 05 woke at time 2.500000 intenseComputation=619276640.000000
> ....
> Heartbeat 45 woke at time 22.503000 intenseComputation=5583834266.000000
> Heartbeat 46 woke at time 23.003000 intenseComputation=5708079390.000000
> Heartbeat 47 woke at time 23.503000 intenseComputation=5831910783.000000
> Heartbeat 48 woke at time 24.003000 intenseComputation=5955439495.000000
> Heartbeat 49 woke at time 24.503000 intenseComputation=6078752118.000000
> Heartbeat 50 woke at time 25.003000 intenseComputation=6202527610.000000
>
>
> Those times are quite regular: a drift of 3 milliseconds over 25 seconds
> (0.012%).
> It seems the intense computation in the main thread does not block timely
> events in the heartbeat thread.
>
> Perhaps the heartbeat thread needed its static priority managed manually
> with Linux 2.4, but maybe that is not required in 2.6?
>

I wonder what happens if and when the main thread starts idling?  Then
there's the concern that while it *may* just happen to work in particular
schedulers it won't work if there's a scheduler change.  It seems to me
that the safe thing is to use priorities and be sure.  But you should
easily be able to test the scheme above by modifying the VM to create a
heartbeat thread at the same priority as the main thread and then give it
some effectively infinite computation that won't cause an interrupt by
itself and see if you can interrupt it.  e.g.

SmallInteger minVal to: SmallInteger maxVal do:
    [:i| | sum |
     sum := 0.
     0 to: SmallInteger maxVal // 2 do:
        [:j|
         sum := j even ifTrue: [sum + j] ifFalse: [sum - j]]]

This loop shouldn't do any allocations so shouldn't cause a scavenging GC,
which would cause an interrupt check.  So if you can break into this one in
your modified VM then I'd say it was worth experimenting further with your
scheme.


> cheers -ben
>

_,,,^..^,,,_
best, Eliot