Backtrace:
#0  delay_tsc (loops=2127949) at arch/x86/lib/delay.c:69
#1  0xffffffff8127fcef in __delay (loops=<optimized out>) at arch/x86/lib/delay.c:112
#2  0xffffffff8127fd2c in __const_udelay (xloops=<optimized out>) at arch/x86/lib/delay.c:126
#3  0xffffffff814a2e93 in panic (fmt=<optimized out>) at kernel/panic.c:152
#4  0xffffffff81005541 in oops_end (flags=70, regs=<optimized out>, signr=11) at arch/x86/kernel/dumpstack.c:243
#5  0xffffffff810056a8 in die (str=0xffffffff815ef64a "invalid opcode", regs=0xffff88003f003d28, err=0)
    at arch/x86/kernel/dumpstack.c:305
#6  0xffffffff81002044 in do_trap (trapnr=6, signr=4, str=0xffffffff815ef64a "invalid opcode",
    regs=0xffff88003f003d28, error_code=<optimized out>, info=0xffff88003f003c88) at arch/x86/kernel/traps.c:177
#7  0xffffffff81002355 in do_invalid_op (regs=0xffff88003f003d28, error_code=0) at arch/x86/kernel/traps.c:218
#8  <signal handler called>
#9  0xffffffff8126382d in sched_state_will_schedule (tsk=0xffff880039d40fa0) at litmus/preempt.c:33
#10 0xffffffff810268d4 in set_tsk_need_resched (tsk=0xffff880039d40fa0) at include/linux/sched.h:2461
#11 resched_task (p=<optimized out>) at kernel/sched.c:1189
#12 0xffffffff8102cdb7 in sched_rt_rq_enqueue (rt_rq=<optimized out>) at kernel/sched_rt.c:326
#13 do_sched_rt_period_timer (overrun=1, rt_b=<optimized out>) at kernel/sched_rt.c:598
#14 sched_rt_period_timer (timer=0xffffffff81838ba8) at kernel/sched.c:177
#15 0xffffffff81061973 in __run_hrtimer (timer=0xffffffff81838ba8, now=0xffff88003f003f50) at kernel/hrtimer.c:1310
#16 0xffffffff81062a8b in hrtimer_interrupt (dev=<optimized out>) at kernel/hrtimer.c:1398
#17 0xffffffff814ad996 in local_apic_timer_interrupt () at arch/x86/kernel/apic/apic.c:834
#18 smp_apic_timer_interrupt (regs=<optimized out>) at arch/x86/kernel/apic/apic.c:861
#19 <signal handler called>
#20 0x00cf9b000000ffff in ?? ()

I have also confirmed that this only happens when sched-trace is running (not overhead tracing, just scheduling). Every commit I checked, going back to the 3.0 merge where my sched-trace still ran, had this same issue.
<div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Nov 29, 2012 at 12:42 PM, Björn Brandenburg <span dir="ltr"><<a href="mailto:bbb@mpi-sws.org" target="_blank">bbb@mpi-sws.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><br>
On Nov 29, 2012, at 5:52 PM, Jonathan Herman <<a href="mailto:hermanjl@cs.unc.edu">hermanjl@cs.unc.edu</a>> wrote:<br>
<br>
> What is the following line defending against:
>
> litmus/preempt.c:
> 31│ /* Litmus tasks should never be subject to a remote
> 32│  * set_tsk_need_resched(). */
> 33│ BUG_ON(is_realtime(tsk));
> 34│ //TRACE_TASK(tsk, "SUPERBAD"); /* I added this */
It defends against misuse of set_tsk_need_resched(). You can't safely use set_tsk_need_resched() for non-local tasks without acquiring the task's corresponding runqueue lock.
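For reference, a minimal sketch of the pattern being described, written against the 3.0-era scheduler API from memory (task_rq_lock()/task_rq_unlock() are static helpers inside kernel/sched.c, so the names and signatures here are assumptions for illustration, not a drop-in fix):

    /* Sketch: reschedule a possibly-remote task the "legal" way. */
    static void resched_task_safely(struct task_struct *p)
    {
            unsigned long flags;
            struct rq *rq;

            rq = task_rq_lock(p, &flags);   /* also keeps p from migrating   */
            resched_task(p);                /* asserts rq->lock is held, sets
                                             * TIF_NEED_RESCHED, and sends a
                                             * resched IPI if p is remote     */
            task_rq_unlock(rq, p, &flags);
    }

The lock both serializes the flag update and pins the task to a runqueue while the resched IPI is targeted; calling set_tsk_need_resched() directly on a remote task skips both guarantees.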
<div class="im"><br>
>
> I keep hitting this when I test with a full schedule under GSN-EDF. Oddly, when I debug using gdb and view t->comm it is "rtspin", but this is not the case in the trace log. If I remove the BUG_ON so that TRACE_TASK is hit, I get the following lines:
> 158079 P0 [sched_state_will_schedule@litmus/preempt.c:34]: (kworker/0:0/0:0) SUPERBAD
> 158080 P0 [sched_state_will_schedule@litmus/preempt.c:37]: (kworker/0:0/0:0) set_tsk_need_resched() ret:ffffffff810268d4
>
What is the symbolic name of ret:ffffffff810268d4? You might want to look at __builtin_return_address(1) instead. Do you have a backtrace?
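A sketch of one way to get the symbol names and a full trace at that spot (this assumes TRACE_TASK() forwards its format string to the kernel's vsnprintf(), so %pS resolves addresses to symbol+offset, and that the kernel is built with frame pointers, without which __builtin_return_address(1) may be unreliable):

    /* in sched_state_will_schedule(), instead of logging the raw address: */
    TRACE_TASK(tsk, "set_tsk_need_resched() from %pS <- %pS\n",
               __builtin_return_address(0),
               __builtin_return_address(1));
    dump_stack();   /* and/or dump the full backtrace to the kernel log */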
<div class="im"><br>
> Which, as far as I can tell, is only possible if tsk->comm == "kworker/0:0". But then why is there no pid? This is on the current staging of liblitmus and litmus-rt.
Is it reproducible? Can you bisect the recent patches to see where it crept in?
<div class="im"><br>
>
>
> Ideas? I'm assuming a race condition, as Glenn suggested, because usually if it's insane, it's a race.
Looks indeed quite strange. What happens just before this bug? A context switch? A migration? Nonsensical races / panics can also be an indicator of stack corruption.
<div class="HOEnZb"><div class="h5"><br>
Thanks,
Björn
_______________________________________________
litmus-dev mailing list
litmus-dev@lists.litmus-rt.org
https://lists.litmus-rt.org/listinfo/litmus-dev
--
Jonathan Herman
Department of Computer Science at UNC Chapel Hill