[Vm-dev] [Pharo-dev] Problem with OSSubprocess / signals / heartbeat ?

Petr Fischer petr.fischer at me.com
Sun Apr 2 17:13:04 UTC 2017


> On Sun, Apr 2, 2017 at 3:11 AM, Petr Fischer <petr.fischer at me.com> wrote:
> 
> > > On Thu, 30 Mar 2017, Eliot Miranda wrote:
> > >
> > > > Once the active process is in a tight loop the delay is effectively
> > > > disabled, because the tight loop shuts out the heartbeat thread and
> > > > hence the system never notices that the delay has expired.
> > >
> > > I think that won't happen, because the process scheduler (O(1), CFS,
> > > BFS) on Linux is not cooperative. So, the kernel will periodically
> > > preempt the main thread and run the heartbeat thread no matter what
> > > their priorities are. The higher priority only provides lower jitter
> > > on the heartbeat thread.
> > >
> > > Levente
> >
> > Is there some test case or code that I can run in Pharo to evaluate
> > whether the kernel scheduler is working correctly (with the heartbeat
> > thread at normal priority)?
> > I need to test it under FreeBSD.
> >
> > Thanks! pf
> >
> 
> Just for starters, what result do you get for my multi-priority fibonacci
> stress test...
> http://forum.world.st/Unix-heartbeat-thread-vs-itimer-tp4928943p4938456.html
> cheers -ben
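
For context: the test takes a nice value and an iteration count on the command
line and prints pid, nice level and wall-clock time. The real program is at the
URL above; the sketch below is only my reconstruction of its shape (the
iterative fibonacci workload, the setpriority() call and the exact output
formatting are assumptions):

--------------------
/* usage: ./a.out <nice-value> <iterations> */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/resource.h>

/* CPU-bound workload that never blocks */
static unsigned long fib_iter(unsigned long n)
{
    unsigned long a = 0, b = 1;
    for (unsigned long i = 0; i < n; i++) {
        unsigned long t = a + b;
        a = b;
        b = t;
    }
    return b;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <nice> <iterations>\n", argv[0]);
        return 1;
    }
    int niceval = atoi(argv[1]);
    unsigned long n = strtoul(argv[2], NULL, 10);

    /* drop this process to the requested nice level */
    if (setpriority(PRIO_PROCESS, 0, niceval) != 0)
        perror("setpriority");

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile unsigned long r = fib_iter(n);
    (void)r;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long sec  = t1.tv_sec - t0.tv_sec;
    long nsec = t1.tv_nsec - t0.tv_nsec;
    if (nsec < 0) { sec--; nsec += 1000000000L; }

    /* one line per run: "<pid> @ <nice>   ==> execution time <s>:<ns>" */
    printf("%d @ %d\t ==> execution time %ld:%ld\n",
           (int)getpid(), niceval, sec, nsec);
    return 0;
}
--------------------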

Output from your test C program (my laptop was roughly 30% loaded with other processes [sha256 summing + disk I/O] during the test runs):

--------------------
Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz
No. of Cores:	4

N=1000000000 ; for NPROC in 1 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
29971 @ 19	 ==> execution time 7:13689552
29973 @ 1	 ==> execution time 7:329530217
29975 @ 0	 ==> execution time 7:372957715

N=1000000000 ; for NPROC in 1 2 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
30354 @ 0	 ==> execution time 8:503484639
30360 @ 0	 ==> execution time 8:825420373
30350 @ 19	 ==> execution time 10:419283219
30358 @ 1	 ==> execution time 12:449180718
30352 @ 1	 ==> execution time 12:607924228
30356 @ 19	 ==> execution time 14:783309741

N=1000000000 ; for NPROC in 1 2 3 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31041 @ 0	 ==> execution time 8:469494752
31045 @ 1	 ==> execution time 12:300027654
31049 @ 19	 ==> execution time 15:669175038
31047 @ 0	 ==> execution time 16:625971781
31053 @ 0	 ==> execution time 17:291503470
31039 @ 1	 ==> execution time 17:532778854
31051 @ 1	 ==> execution time 18:99766937
31037 @ 19	 ==> execution time 20:579350446
31043 @ 19	 ==> execution time 21:23873011

N=1000000000 ; for NPROC in 1 2 3 4 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31294 @ 0	 ==> execution time 10:22735629
31288 @ 0	 ==> execution time 17:412909550
31304 @ 1	 ==> execution time 17:938611595
31292 @ 1	 ==> execution time 17:950983991
31300 @ 0	 ==> execution time 18:652930065
31298 @ 1	 ==> execution time 21:613235002
31306 @ 0	 ==> execution time 22:996656086
31286 @ 1	 ==> execution time 23:865226663
31302 @ 19	 ==> execution time 25:605616888
31296 @ 19	 ==> execution time 25:825197854
31290 @ 19	 ==> execution time 26:915729372
31284 @ 19	 ==> execution time 26:995877168

N=1000000000 ; for NPROC in 1 2 3 4 5 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31636 @ 0	 ==> execution time 18:804222157
31640 @ 1	 ==> execution time 19:501420760
31658 @ 1	 ==> execution time 20:657524560
31634 @ 1	 ==> execution time 20:814068666
31648 @ 0	 ==> execution time 24:936846447
31660 @ 0	 ==> execution time 25:193719778
31654 @ 0	 ==> execution time 25:510585576
31642 @ 0	 ==> execution time 25:530504029
31652 @ 1	 ==> execution time 25:975794455
31646 @ 1	 ==> execution time 26:165684663
31632 @ 19	 ==> execution time 28:938437090
31650 @ 19	 ==> execution time 28:954028063
31638 @ 19	 ==> execution time 33:65970826
31656 @ 19	 ==> execution time 33:646365534
31644 @ 19	 ==> execution time 33:838210078
--------------------
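
And to partly answer my own question above: below is a minimal standalone check
of whether the kernel keeps servicing a normal-priority heartbeat-style thread
while the main thread spins. This is not the VM's heartbeat code, just a sketch;
the 2 ms period and the loop sizes are arbitrary. If preemption works as Levente
describes, the tick count keeps growing every round; if the ticker were starved,
it would stall:

--------------------
/* build: cc -O2 -pthread tick.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static volatile unsigned long ticks = 0;

/* normal-priority ticker, roughly one tick every 2 ms */
static void *heartbeat(void *arg)
{
    (void)arg;
    struct timespec period = { 0, 2 * 1000 * 1000 };
    for (;;) {
        nanosleep(&period, NULL);
        ticks++;
    }
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, heartbeat, NULL);

    /* busy main thread; report the tick count after each round */
    unsigned long sink = 0;
    for (int round = 1; round <= 10; round++) {
        for (volatile unsigned long i = 0; i < 500000000UL; i++)
            sink += i;
        printf("round %2d: ticks so far = %lu\n", round, ticks);
    }
    (void)sink;
    return 0;
}
--------------------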

pf

