OK, following up a bit on this. I'll summarize some of our findings; some of them may be obvious to some people on this list.
I saw that there are actually two different kinds of VMs for *nix [1]: - threaded heartbeat - itimer + signal heartbeat
In particular, I'd like to quote the following paragraph for the lazy:
A distinction on linux is between VMs with an itimer heartbeat or a threaded heartbeat. VMs with an itimer heartbeat use setitimer to deliver a SIGALRM signal at regular intervals to interrupt the VM to check for events. These signals can be troublesome, interrupting foreign code that cannot cope with such signals. VMs with a threaded heartbeat use a high-priority thread that loops, blocking on nanosleep and then interrupting the VM, performing the same function as the itimer heartbeat but without using signals. These VMs are to be preferred but support for multiple thread priorities in user-level processes has only been available on linux in kernels later than 2.6.12.
So, I downloaded the threaded-heartbeat Squeak VM from Bintray [2]. This VM requires deploying some configuration files in /etc/security [3].
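For reference, the configuration in question grants users permission to create a realtime-priority thread via an rtprio limit. A sketch of what such a file looks like (the file name and limit value here follow the release notes [3] and may differ per release):

```
# /etc/security/limits.d/squeak.conf -- allow any user to raise a thread
# to realtime priority 2 (name and value per the VM release notes)
*       hard    rtprio  2
*       soft    rtprio  2
```

A logout/login (or reboot) is typically needed before pam_limits picks the new limits up.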
Under this configuration OSSubprocess worked like a charm (or at least I have not hit the issue again so far).
Now, this threaded-heartbeat VM is the one recommended in the README file, and we have seen that OSSubprocess triggers exactly the issue stated there. The main problem remains for the moment, since Pharo's default download includes not this VM but the itimer one. I talked with Esteban about it; he was aware of these two VM flavours, and the reason we are using the itimer one is that the threaded VM needs those permission files deployed in /etc, which makes installation a bit less automatic.
Cheers, Guille
[1] https://github.com/OpenSmalltalk/opensmalltalk-vm/blob/e17db79411cfec767e04f... [2] https://bintray.com/opensmalltalk/vm/download_file?file_path=cog_linux32x86_... [3] https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux
-------- Original Message --------
I asked Santa for an OSSubprocess that won't hang :)
I really appreciate seeing such not-so-small polar elves helping Santa :)
Stef
Thanks,
Guille
-------- Original Message --------
On 19 Dec 2016, at 14:41, Mariano Martinez Peck marianopeck@gmail.com wrote:
Hi guys,
Guille Polito kept one of these images in case someone can give us a hand. He also proposed the great idea of using `strace` to see what was going on. He (together with Pablo Tesone) suspected that the heartbeat could be interrupting the `clone()` function, which (I think) is called internally by `posix_spawn()`, which is the one used by OSSubprocess.
When these images are "hung", they found an infinite loop like this:
Okay, how many child processes do you have at that point? How many processes does the system have?
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 593498}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7578768) = ? ERESTARTNOINTR (To be restarted)
[pid 17477] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 17477] gettimeofday({1482152630, 600126}, NULL) = 0
[pid 17477] sigreturn() (mask []) = 120
[pid 17477] clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xf7578768) = ? ERESTARTNOINTR (To be restarted)
so above.. 7ms between the two gettimeofday calls? Nothing else? Set a breakpoint on clone/fork in gdb and look at the c-stack at this point? Could you strace with timestamps to see how much time is spent? Is the process suspicious in other ways?
So yes.. sounds like clone doesn't complete.. the question is why? Is it out of resources? Is something in the VM blocking longer than the heartbeat? Is the heartbeat more frequent than expected?
As you can see, there is a SIGALRM involved. It also looks like the `gettimeofday` is used by the heartbeat? Could it be that somehow the heartbeat is interrupting the `clone()`?
Guille also showed me the `strace` output with a regular / working image:
[pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
[pid 18647] gettimeofday({1482152829, 481014}, NULL) = 0
[pid 18647] sigreturn() (mask []) = -1 EINTR (Interrupted system call)
[pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 1917}}) = 0
[pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
[pid 18647] select(4, [3], [], [3], {0, 1000}) = 0 (Timeout)
[pid 18647] getitimer(ITIMER_REAL, {it_interval={0, 2000}, it_value={0, 797}}) = 0
[pid 18647] recvmsg(3, 0xff7b0734, 0) = -1 EAGAIN (Resource temporarily unavailable)
[pid 18647] select(4, [3], [], [3], {0, 1000}) = ? ERESTARTNOHAND (To be restarted if no handler)
[pid 18647] --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
Does anyone have any hint here?
Get timestamps in there. How long does it take to fail/end in this situation?
holger
On 21 Dec 2016, at 22:29, Guille Polito guillermopolito@gmail.com wrote:
Hi!
Now, this threaded-heartbeat VM is the one recommended in the README file, and we have seen that OSSubprocess triggers exactly the issue stated there. The main problem remains for the moment, since Pharo's default download includes not this VM but the itimer one. I talked with Esteban about it; he was aware of these two VM flavours, and the reason we are using the itimer one is that the threaded VM needs those permission files deployed in /etc, which makes installation a bit less automatic.
It is not only a matter of configuration. For example, more and more infrastructure is being migrated into Docker containers (I run the CI with Docker containers), and even as root inside the container you might not be allowed to use realtime priority. And you might not be able to influence how your container is being executed.
holger
PS: Have you seen the fix David Lewis made to the UnixProcess plugin, maybe it already helps with your bug as well?
Also on FreeBSD, a VM with a heartbeat thread can't be run in jails (secured chroot sandboxes), even as root; and on the main FreeBSD system the VM can't be run as a normal user, only as root :( I did not find a solution for allowing realtime-priority threads for the VM process on FreeBSD. I tested the Linux VM (pharo, oscog) with the Linux compatibility layer on FreeBSD + a CentOS 7 userland (I am not able to compile a native VM under FreeBSD).
pf
On 24 Feb 2017, at 13:56, Petr Fischer petr.fischer@me.com wrote:
What are the link failures for you? backtrace and iconv_open and later ft2build.h/freetype?
With
make LIBS+="-lexecinfo -lutil -lpthread -lm -liconv" LDFLAGS+=-L/usr/local/lib
and some modifications to FT2Plugin/Makefile I get the rest to compile too.
On Fri, Feb 24, 2017 at 11:57 AM, Holger Freyther holger@freyther.de wrote:
If these two commits work out, the threaded VM may become a viable default option... https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/32f321583c69ca27e61... https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/5418a415e9297f601f6...
cheers -ben
On 29 Mar 2017, at 02:02, Ben Coman btc@openInWorld.com wrote:
If these two commits work out, the threaded VM may become a viable default option... https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/32f321583c69ca27e61... https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/32f321583c69ca27e61ffaff6decc2a3e4b6ca5e
I think this change restores a cleanup introduced by Eliot (nuke all those #defines and replace them with exeName).
Esteban
If these two commits work out, the threaded VM may become a viable default option... https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/32f321583c69ca27e61... https://github.com/OpenSmalltalk/opensmalltalk-vm/commit/5418a415e9297f601f6...
Are these commits immediately available in the pharo-vm GitHub repository, or do I need to compile directly from opensmalltalk-vm?
Is it really safe to run the VM with a normal-priority heartbeat?
Thanks! pf
On Fri, Mar 31, 2017 at 7:17 AM, Petr Fischer petr.fischer@me.com wrote:
Are these commits immediately available in the pharo-vm GitHub repository, or do I need to compile directly from opensmalltalk-vm?
I'm not clear on the process Esteban uses to keep pharo-vm up to date with opensmalltalk-vm, but I believe it's in. It's easy to test: just download the latest VM and run without the limits conf.
Is it really safe to run the VM with a normal-priority heartbeat?
Of course user testing is required, but considering how Linux operates with dynamic thread priorities, I believe it is safe.
cheers -ben
On 31 Mar 2017, at 03:43, Ben Coman btc@openInWorld.com wrote:
I'm not clear on the process Esteban uses to keep pharo-vm up to date with opensmalltalk-vm, but I believe it's in. It's easy to test: just download the latest VM and run without the limits conf.
there is a mirror that runs every hour… but I publish only non-failing builds. Anyway, this will change again soon™… to also commit the generated sources (right now I commit the packages and let the Travis CI generate the sources).
Esteban
On Thu, Mar 30, 2017 at 4:17 PM, Petr Fischer petr.fischer@me.com wrote:
Are these commits immediately available in the pharo-vm GitHub repository, or do I need to compile directly from opensmalltalk-vm?
Is it really safe to run the VM with a normal-priority heartbeat?
I believe not. Do something like spawn a process waiting on a delay that will interrupt the active process after some time (say 1 second). Then have the active process enter a tight loop (e.g. a repeat loop that sends class to an object). It is very important that the loop /not/ cause a GC or a deep call chain, so that it doesn't incidentally poll for events and hence update the clock. Once the active process is in a tight loop, the delay is effectively disabled, because the tight loop shuts out the heartbeat thread and hence the system never notices that the delay has expired.
On Thu, 30 Mar 2017, Eliot Miranda wrote:
Once the active process is in a tight loop the delay is effectively
disabled because the tight loop effectively shuts out the heartbeat thread and hence the system never notices that the delay has expired.
I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative: the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only gives lower jitter on the heartbeat thread.
Levente
Hi Levente,
On Mar 31, 2017, at 6:42 AM, Levente Uzonyi leves@caesar.elte.hu wrote:
I think that won't happen, because the process scheduler (O(1), CFS, BFS) on Linux is not cooperative: the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only gives lower jitter on the heartbeat thread.
It did happen. In 2009 Andreas and I demonstrated it on Linux. Yes, it may be the case that certain Linux kernel versions fix the issue, but the only reliable way is to have the heartbeat thread run at a higher priority than the Smalltalk thread.
Hi Eliot,
On Fri, 31 Mar 2017, Eliot Miranda wrote:
It did happen. In 2009 Andreas and I demonstrated it on Linux. Yes, it may be the case that certain Linux kernel versions fix the issue, but the only reliable way is to have the heartbeat thread run at a higher priority than the Smalltalk thread.
Back in 2009 neither of those schedulers had been released. Of course, one can compile a kernel with a cooperative scheduler, but the average user is unlikely to ever use such a kernel.
Levente
Is there some test case or code that I can run in Pharo to evaluate whether the kernel scheduler is working correctly (with the heartbeat thread at normal priority)? I need to test it under FreeBSD.
Thanks! pf
On Sun, Apr 2, 2017 at 3:11 AM, Petr Fischer petr.fischer@me.com wrote:
Just for starters, what result do you get for my multi-priority Fibonacci stress test? http://forum.world.st/Unix-heartbeat-thread-vs-itimer-tp4928943p4938456.html
cheers -ben
On Sun, Apr 2, 2017 at 8:10 AM, Ben Coman btc@openinworld.com wrote:
I got curious to read up on the FreeBSD scheduler.
FreeBSD has the same constraint as Linux such that "Only the super-user may lower priorities." https://www.freebsd.org/cgi/man.cgi?query=setpriority&sektion=2
From https://classes.cs.uoregon.edu/13F/cis607distcomp/PPT/FreeBSDscheduler(McKay...
Each CPU has (a KSeq) three arrays of run queues indexed by priority:
* The Current queue receives interactive, real-time and interrupt threads
* The Next queue receives everything else except idle threads
* When the Current queue is empty, the two queues swap.
* The third queue holds idle threads, and is only used when there are no other runnable threads
ULE: A Modern Scheduler For FreeBSD
A thread is assigned to a queue until it sleeps, or for the duration of a slice. The base priority, slice size, and interactivity score are recalculated each time a slice expires. The thread is assigned to the Current queue if it is interactive or to the Next queue otherwise. Inserting interactive tasks onto the Current queue and giving them a higher priority results in a very low latency response.
In ULE the interactivity of a thread is determined using its voluntary sleep time and run time. The voluntary sleep time is recorded by counting the number of ticks that have passed between a sleep() and wakeup() or while sleeping on a condition variable. The run time is simply the number of ticks while the thread is running. The scheduler uses the interactivity score to determine whether or not a thread should be assigned to the Current queue when it becomes runnable.
On x86, FreeBSD has a default HZ of 100, and a minimum slice value of 10ms and maximum slice value of 140ms. Interactive tasks receive the minimum slice value. This allows us to more quickly discover that an interactive task is no longer interactive.
From http://ptgmedia.pearsoncmg.com/images/9780321968975/samplepages/978032196897... The Design and Implementation of the FreeBSD Operating System
The scheduling policy initially assigns a high execution priority to each thread and allows that thread to execute for a fixed time slice. Threads that execute for the duration of their slice have their priority lowered, whereas threads that give up the CPU (usually because they do I/O) are allowed to remain at their priority. Threads that are inactive have their priority raised.
Some tasks, such as the compilation of a large application, may be done in many small steps in which each component is compiled in a separate process. No individual step runs long enough to have its priority degraded, so the compilation as a whole impacts the interactive programs. To detect and avoid this problem, the scheduling priority of a child process is propagated back to its parent. When a new child process is started, it begins running with its parent’s current priority. As the program that coordinates the compilation (typically make) starts many compilation steps, its priority is dropped because of the CPU-intensive behavior of its children. Later compilation steps started by make begin running and stay at a lower priority, which allows higher-priority interactive programs to run in preference to them as desired.
Resuming a thread ... If any threads are placed on the run queue and one of them has a scheduling priority higher than that of the currently executing thread, it will request that the CPU be rescheduled as soon as possible. Real-time and interrupt threads do preempt lower-priority threads. The kernel can be configured to preempt timeshare threads executing in the kernel with other higher-priority timeshare threads. This option is not used by default, as the increase in context switches adds overhead and does not help make timeshare threads' response time more predictable.
From https://github.com/freebsd/freebsd/blame/master/sys/kern/sched_ule.c
and substituting defined constants...

PRIO_MIN              -20
PRIO_MAX               20
SCHED_INTERACT_THRESH  30
SCHED_INTERACT_HALF    50  = (SCHED_INTERACT_MAX / 2)
SCHED_INTERACT_MAX    100
PRI_MIN_TIMESHARE     120
PRI_MAX_TIMESHARE     223  = (PRI_MIN_IDLE - 1)
PRI_MIN_IDLE          224

SCHED_PRI_NRESV        40  = (PRIO_MAX - PRIO_MIN)
PRI_TIMESHARE_RANGE   104  = (PRI_MAX_TIMESHARE - PRI_MIN_TIMESHARE + 1)
PRI_INTERACT_RANGE     32  = ((PRI_TIMESHARE_RANGE - SCHED_PRI_NRESV) / 2)

PRI_MIN_INTERACT      120  = (PRI_MIN_TIMESHARE)
PRI_MAX_INTERACT      153  = (120 + PRI_INTERACT_RANGE - 1)

PRI_MIN_BATCH         152  = (PRI_MIN_TIMESHARE + PRI_INTERACT_RANGE)
PRI_MAX_BATCH         223  = (PRI_MAX_TIMESHARE)
SCHED_PRI_NHALF        20  = (SCHED_PRI_NRESV / 2)
SCHED_PRI_MIN         172  = (PRI_MIN_BATCH + SCHED_PRI_NHALF)
SCHED_PRI_MAX         203  = (PRI_MAX_BATCH - SCHED_PRI_NHALF)
SCHED_PRI_RANGE        30  = (SCHED_PRI_MAX - SCHED_PRI_MIN + 1)

sched_interact 30 = (SCHED_INTERACT_THRESH)

sched_interact_score()
    if (sleep/run) > 1, interact_score = 50 / (sleep/run)
    if (sleep/run) = 1, interact_score = 50
    if (sleep/run) < 1, interact_score = 50 * (2 - (sleep/run))

sched_priority()
    * If the score is interactive we place the thread in the realtime
    * queue with a priority that is less than kernel and interrupt
    * priorities. These threads are not subject to nice restrictions.
    *
    * Scores greater than this are placed on the normal timeshare queue
    * where the priority is partially decided by the most recent cpu
    * utilization and the rest is decided by nice value.
    *
    * The nice value of the process has a linear effect on the calculated
    * score. Negative nice values make it easier for a thread to be
    * considered interactive. Default nice is 0.
    score = sched_interact_score() + nice;
    if (score < 30)
        priority = 120 + score * 34 / 30   // = 120 + (153 - 120 + 1) / 30 * score
    else
        priority = 201 + nice              // = 172 + 30 - 1 + nice

sched_add(struct thread *td, int flags)
    * Select the target thread queue and add a thread to it.
    * Request preemption or IPI a remote processor if required.
    * Recalculate the priority before we select the target cpu or run-queue.
    if (PRI_BASE(td->td_pri_class) == PRI_TIMESHARE)
        sched_priority(td);
    ...
So it seems that, as long as sleep/run > 2, the FreeBSD heartbeat thread will get an interactive priority bump.
cheers -ben
Just for starters, what result do you get for my multi-priority fibonacci stress test... http://forum.world.st/Unix-heartbeat-thread-vs-itimer-tp4928943p4938456.html cheers -ben
Output from your test C program (my laptop was about 30% loaded with other processes [sha256 summing + disk IO] during the test runs):
--------------------
Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz
No. of Cores: 4

N=1000000000 ; for NPROC in 1 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
29971 @ 19 ==> execution time 7:13689552
29973 @ 1 ==> execution time 7:329530217
29975 @ 0 ==> execution time 7:372957715

N=1000000000 ; for NPROC in 1 2 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
30354 @ 0 ==> execution time 8:503484639
30360 @ 0 ==> execution time 8:825420373
30350 @ 19 ==> execution time 10:419283219
30358 @ 1 ==> execution time 12:449180718
30352 @ 1 ==> execution time 12:607924228
30356 @ 19 ==> execution time 14:783309741

N=1000000000 ; for NPROC in 1 2 3 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31041 @ 0 ==> execution time 8:469494752
31045 @ 1 ==> execution time 12:300027654
31049 @ 19 ==> execution time 15:669175038
31047 @ 0 ==> execution time 16:625971781
31053 @ 0 ==> execution time 17:291503470
31039 @ 1 ==> execution time 17:532778854
31051 @ 1 ==> execution time 18:99766937
31037 @ 19 ==> execution time 20:579350446
31043 @ 19 ==> execution time 21:23873011

N=1000000000 ; for NPROC in 1 2 3 4 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31294 @ 0 ==> execution time 10:22735629
31288 @ 0 ==> execution time 17:412909550
31304 @ 1 ==> execution time 17:938611595
31292 @ 1 ==> execution time 17:950983991
31300 @ 0 ==> execution time 18:652930065
31298 @ 1 ==> execution time 21:613235002
31306 @ 0 ==> execution time 22:996656086
31286 @ 1 ==> execution time 23:865226663
31302 @ 19 ==> execution time 25:605616888
31296 @ 19 ==> execution time 25:825197854
31290 @ 19 ==> execution time 26:915729372
31284 @ 19 ==> execution time 26:995877168

N=1000000000 ; for NPROC in 1 2 3 4 5 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
31636 @ 0 ==> execution time 18:804222157
31640 @ 1 ==> execution time 19:501420760
31658 @ 1 ==> execution time 20:657524560
31634 @ 1 ==> execution time 20:814068666
31648 @ 0 ==> execution time 24:936846447
31660 @ 0 ==> execution time 25:193719778
31654 @ 0 ==> execution time 25:510585576
31642 @ 0 ==> execution time 25:530504029
31652 @ 1 ==> execution time 25:975794455
31646 @ 1 ==> execution time 26:165684663
31632 @ 19 ==> execution time 28:938437090
31650 @ 19 ==> execution time 28:954028063
31638 @ 19 ==> execution time 33:65970826
31656 @ 19 ==> execution time 33:646365534
31644 @ 19 ==> execution time 33:838210078
--------------------
pf
On 2. Apr 2017, at 19:13, Petr Fischer petr.fischer@me.com wrote:
On Sun, Apr 2, 2017 at 3:11 AM, Petr Fischer petr.fischer@me.com wrote:
I can't find Ben's original mail, so let me reply here.
Output from your test C program (my laptop was cca 30% loaded with other processes [sha256 summing + disk IO] during test runs):
On FreeBSD, you want to use cpuset to bind the pharo-vm to a specific CPU core; on Linux, you can do that with taskset. This way the heartbeat thread and Smalltalk execution will always compete for the same core.
holger
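For concreteness, the pinning described above looks roughly like this (the `./pharo Pharo.image` launch line is an assumption about how the VM is started; substitute your actual command):

```shell
# Linux: restrict the VM process (heartbeat thread included) to core 0.
#   taskset -c 0 ./pharo Pharo.image
# FreeBSD: the cpuset equivalent.
#   cpuset -l 0 ./pharo Pharo.image
# Demonstration with a stand-in command so the pinning itself can be checked:
taskset -c 0 sh -c 'echo "pinned to core 0"'
```

With both threads confined to one core, any scheduling unfairness between the interpreter and the heartbeat shows up immediately instead of being masked by spare cores.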
On Sun, Apr 2, 2017 at 3:11 AM, Petr Fischer petr.fischer@me.com wrote:
On Thu, 30 Mar 2017, Eliot Miranda wrote:
Once the active process is in a tight loop the delay is effectively disabled, because the tight loop effectively shuts out the heartbeat thread, and hence the system never notices that the delay has expired.

I think that won't happen, because the process scheduler (O(1), CFS, BFS) on linux is not cooperative. So, the kernel will periodically preempt the main thread and run the heartbeat thread no matter what their priorities are. The higher priority only provides lower jitter on the heartbeat thread.

Levente
Is there some test case or code that I can run in Pharo to evaluate whether the kernel scheduler is working correctly (with the heartbeat thread at normal priority)? I need to test it under FreeBSD.
Thanks! pf
Just for starters, what result do you get for my multi-priority fibonacci stress test... http://forum.world.st/Unix-heartbeat-thread-vs-itimer-tp4928943p4938456.html cheers -ben
And results from inside a FreeBSD jail (something like a chrooted, secured sandbox, or a Solaris zone); the test laptop was also loaded with other processes, as in the test before:
----------------------------
Intel(R) Core(TM) i5-4210U CPU @ 1.70GHz
No. of Cores: 4
INSIDE JAIL

N=1000000000 ; for NPROC in 1 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
34160 @ 19 ==> execution time 6:658293335
34164 @ 0 ==> execution time 7:344659449
34162 @ 1 ==> execution time 7:349528844

N=1000000000 ; for NPROC in 1 2 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
34494 @ 1 ==> execution time 9:103074882
34488 @ 1 ==> execution time 9:377005820
34485 @ 19 ==> execution time 10:169357290
34490 @ 0 ==> execution time 12:431283848
34496 @ 0 ==> execution time 12:435207151
34492 @ 19 ==> execution time 15:141234946

N=1000000000 ; for NPROC in 1 2 3 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
34687 @ 1 ==> execution time 8:609114439
34683 @ 0 ==> execution time 10:558500185
34695 @ 0 ==> execution time 10:595350437
34693 @ 1 ==> execution time 14:831554545
34689 @ 0 ==> execution time 17:83945157
34681 @ 1 ==> execution time 18:87479507
34691 @ 19 ==> execution time 19:826694016
34685 @ 19 ==> execution time 20:95071917
34679 @ 19 ==> execution time 20:703022751

N=1000000000 ; for NPROC in 1 2 3 4 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
34891 @ 0 ==> execution time 9:999782187
34905 @ 19 ==> execution time 17:881143544
34909 @ 0 ==> execution time 17:959943746
34903 @ 0 ==> execution time 18:322021733
34889 @ 1 ==> execution time 19:808572223
34895 @ 1 ==> execution time 20:66256169
34907 @ 1 ==> execution time 20:779405952
34897 @ 0 ==> execution time 23:65989500
34901 @ 1 ==> execution time 23:930198240
34899 @ 19 ==> execution time 26:775605581
34887 @ 19 ==> execution time 26:859268875
34893 @ 19 ==> execution time 27:160633495

N=1000000000 ; for NPROC in 1 2 3 4 5 ; do (./a.out 19 $N &) && (./a.out 1 $N &) && (./a.out 0 $N &) ; done
35136 @ 0 ==> execution time 9:721558687
35156 @ 19 ==> execution time 18:551098074
35134 @ 1 ==> execution time 18:642816376
35152 @ 1 ==> execution time 18:784739098
35146 @ 1 ==> execution time 19:845043925
35140 @ 1 ==> execution time 19:936571188
35160 @ 0 ==> execution time 23:351587301
35142 @ 0 ==> execution time 27:74930475
35132 @ 19 ==> execution time 27:627105815
35150 @ 19 ==> execution time 31:576758248
35148 @ 0 ==> execution time 32:178976338
35144 @ 19 ==> execution time 32:313815914
35154 @ 0 ==> execution time 32:713757378
35158 @ 1 ==> execution time 33:128044152
35138 @ 19 ==> execution time 33:697034678
----------------------------
vm-dev@lists.squeakfoundation.org