<html><head><meta http-equiv="Content-Type" content="text/html charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><br><div><div>On Sep 15, 2012, at 2:34 PM, Glenn Elliott <<a href="mailto:gelliott@cs.unc.edu">gelliott@cs.unc.edu</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><meta http-equiv="Content-Type" content="text/html charset=windows-1252"><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><br><div><div>On Sep 15, 2012, at 12:17 PM, Glenn Elliott <<a href="mailto:gelliott@cs.unc.edu">gelliott@cs.unc.edu</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><meta http-equiv="Content-Type" content="text/html charset=iso-8859-1"><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><br><div><div>On Sep 14, 2012, at 8:21 PM, Jonathan Herman <<a href="mailto:hermanjl@cs.unc.edu">hermanjl@cs.unc.edu</a>> wrote:</div><div><br></div><blockquote type="cite"><br>On Fri, Sep 14, 2012 at 7:57 PM, Glenn Elliott <<a href="mailto:gelliott@cs.unc.edu">gelliott@cs.unc.edu</a>> wrote:<br><blockquote type="cite">Well, I am stuck in a strange case where a task is linked but never<br>scheduled (G-EDF):<br><br>3903 P0 [check_for_preemptions@litmus/sched_gsn_edf.c:365]:<br>check_for_preemptions: attempting to link task 1795 to 0<br>3904 P0 [gsnedf_get_nearest_available_cpu@litmus/sched_gsn_edf.c:347]: Could<br>not find an available CPU close to P0<br>3905 P0 [__add_ready@litmus/rt_domain.c:312]: rt: adding aux_threads/1799<br>(0, 4611686018427387903, 4611686018427387903) [inh_task: (nil)/0 (0, 0 0)]<br>rel=59812157846 to ready queue at 60312058176<br>3907 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:218]: (aux_threads/1799:1)<br>linked = aux_threads/1795<br>3908 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:219]: (aux_threads/1799:1)<br>entry->linked 
= aux_threads/1799<br>3909 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:261]: (aux_threads/1795:2)<br>linked to 0.<br>3914 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1796:1) tick<br>3915 P2 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1797:1) tick<br>3916 P3 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1798:1) tick<br>3917 P0 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1799:1) tick<br>3918 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1796:1) tick<br>3919 P2 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1797:1) tick<br>3920 P3 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1798:1) tick<br>3921 P0 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1799:1) tick<br>3922 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1796:1) tick<br>3923 P2 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1797:1) tick<br>3924 P3 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1798:1) tick<br>3925 P0 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1799:1) tick<br>3926 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1796:1) tick<br>3927 P2 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1797:1) tick<br>3928 P3 [gsnedf_tick@litmus/sched_gsn_edf.c:472]: (aux_threads/1798:1) tick<br><br><br>I updated gsnedf_tick to print the currently scheduled real-time task. Task<br>1795 should be running on P0 instead of 1799.<br><br>Has anyone ever seen something like this before?<br><br>Thanks,<br>Glenn<br><br>p.s. Ignore the crazy period/relative_deadline for 1799. Those are just<br>place holders. 1799 is really just a worker thread that has a statically<br>low priority and should only run when the system is idle (or according to an<br>inherited priority).<br><br>p.p.s. I think there is a bug in litmus.c::litmus_fork(). is_realtime() is<br>returning false for forked children of a real-time task (expected), so the<br>fork-copied rt_param state is not reset. 
Thus, the child gets the same<br>rt_params as the parent. This is probably harmless if we're sure to re-init<br>any children that transition to real-time. However, for the sake of<br>completeness, I wonder if we should execute "reinit_litmus_state(p,0);" for<br>all children. I am doing some work where tasks are forced to become real-time<br>from within the kernel (worker threads), and this possible bug bit me.<br><br>_______________________________________________<br>litmus-dev mailing list<br><a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br><a href="https://lists.litmus-rt.org/listinfo/litmus-dev">https://lists.litmus-rt.org/listinfo/litmus-dev</a><br><br></blockquote><br>I would bet cashmoney that your task is not preemptable. Your</blockquote><blockquote type="cite">check_for_preempt calls preempt()->preempt_if_preemptable after<br>linking the task, which calls litmus_reschedule if the task is not<br>preemptable. litmus_reschedule ALWAYS hits line 98 of your preempt.c<br>and traces state.</blockquote><blockquote type="cite"><br>-- <br>Jonathan Herman<br>Department of Computer Science at UNC Chapel Hill<br></blockquote><br></div><div><br></div><div><div>I think something else could be going wrong. I believe the task is preemptible. litmus_reschedule() is indeed being called. Here's the code, with some trace statements added in. I've put the code path that is being followed in bold.</div><div><br></div><div><div> 66 void litmus_reschedule(int cpu)</div><div> 67 {</div><div> 68 int picked_transition_ok = 0;</div><div> 69 int scheduled_transition_ok = 0;</div><div> 70 </div><div> 71 /* The (remote) CPU could be in any state. */</div><div> 72 </div><div> 73 /* The critical states are TASK_PICKED and TASK_SCHEDULED, as the CPU</div><div> 74 * is not aware of the need to reschedule at this point. */</div><div> 75 </div><div> 76 /* is a context switch in progress? 
*/</div><div><b> 77 if (cpu_is_in_sched_state(cpu, TASK_PICKED)) {</b></div><div><b> 78 picked_transition_ok = sched_state_transition_on(</b></div><div><b> 79 cpu, TASK_PICKED, PICKED_WRONG_TASK);</b></div><div><b> 80 </b></div><div><b> 81 TRACE_CUR("cpu %d: picked_transition_ok = %d\n", cpu, picked_transition_ok);</b></div><div> 82 }</div><div> 83 else {</div><div> 84 TRACE_CUR("cpu %d: picked_transition_ok = 0 (static)\n", cpu);</div><div> 85 }</div><div> 86 </div><div> 87 if (!picked_transition_ok &&</div><div> 88 cpu_is_in_sched_state(cpu, TASK_SCHEDULED)) {</div><div> 89 /* We either raced with the end of the context switch, or the</div><div> 90 * CPU was in TASK_SCHEDULED anyway. */</div><div> 91 scheduled_transition_ok = sched_state_transition_on(</div><div> 92 cpu, TASK_SCHEDULED, SHOULD_SCHEDULE);</div><div> 93 TRACE_CUR("cpu %d: scheduled_transition_ok = %d\n", cpu, scheduled_transition_ok);</div><div> 94 }</div><div><b> 95 else {</b></div><div><b> 96 TRACE_CUR("cpu %d: scheduled_transition_ok = 0 (static)\n", cpu);</b></div><div><b> 97 }</b></div><div> 98 </div><div> 99 /* If the CPU was in state TASK_SCHEDULED, then we need to cause the</div><div>100 * scheduler to be invoked. 
*/</div><div>101 if (scheduled_transition_ok) {</div><div>102 if (smp_processor_id() == cpu) {</div><div>103 set_tsk_need_resched(current);</div><div>104 }</div><div>105 else {</div><div>106 smp_send_reschedule(cpu);</div><div>107 }</div><div>108 }</div><div>109 </div><div>110 TRACE_STATE("%s picked-ok:%d sched-ok:%d\n",</div><div>111 __FUNCTION__,</div><div>112 picked_transition_ok,</div><div>113 scheduled_transition_ok);</div><div>114 }</div><div><br></div></div><div>Trace output:</div><div><div>3956 P1 [link_task_to_cpu@litmus/sched_gsn_edf.c:218]: (aux_threads/1820:1) linked = aux_threads/1816</div><div>3957 P1 [link_task_to_cpu@litmus/sched_gsn_edf.c:219]: (aux_threads/1820:1) entry->linked = aux_threads/1820</div><div>3958 P1 [link_task_to_cpu@litmus/sched_gsn_edf.c:261]: (aux_threads/1816:2) linked to 1. </div><div>3962 P1 [preempt_if_preemptable@litmus/sched_plugin.c:36]: (aux_threads/1820:1) preempt_if_preemptable: aux_threads/1820</div><div>3963 P1 [preempt_if_preemptable@litmus/sched_plugin.c:45]: (aux_threads/1820:1) preempt local cpu.</div><div>3964 P1 [preempt_if_preemptable@litmus/sched_plugin.c:71]: (aux_threads/1820:1) calling litmus_reschedule()</div><div><b>3965 P1 [litmus_reschedule@litmus/preempt.c:81]: (aux_threads/1820:1) cpu 1: picked_transition_ok = 1</b></div><div><b>3966 P1 [litmus_reschedule@litmus/preempt.c:96]: (aux_threads/1820:1) cpu 1: scheduled_transition_ok = 0 (static)</b></div><div>3968 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1820:1) tick 67358153479</div><div>3972 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1820:1) tick 67359153359</div><div><br></div></div><div><br></div><div><br></div><div>So it looks like my linked task is not scheduled because the CPU state is TASK_PICKED and picked_transition_ok is set to 1. I've never worked with this part of Litmus before, so I don't know what is "normal." 
Is there supposed to be some sort of deferred safety-net that will schedule the linked task when picked_transition_ok==1 that is not being invoked?</div><div><br></div><div>-Glenn</div><div><br></div></div></div></blockquote><br></div><div><br></div><div>Following up, is there any way that a task could be scheduled while the CPU state remains TASK_PICKED? Thread 1785 calls job_completion() on P0. 1789 is linked as the next task to run on P0 and the state of P0 changes from WILL_SCHEDULE to TASK_PICKED. As I understand it, the state of P0 should soon switch to TASK_SCHEDULED when kernel/sched.c::schedule() calls sched_state_validate_switch(). However, I'm not sure that this is ever being called. As you see in the log below, gsnedf_tick() on P0 shows that 1789 is indeed scheduled, but the state of P0 never switched to TASK_SCHEDULED. Is this the correct behavior? I believe what may be happening is that P0 has an incorrect state; thus, when I try to link a new task to P0, the state transitions from TASK_PICKED to PICKED_WRONG_TASK. Note that 1789 is in a user-space busy-wait loop, so it will never yield the processor-- it must always be preempted.</div><div><br></div><div><div>91576 P0 [gsnedf_schedule@litmus/sched_gsn_edf.c:857]: (aux_threads/1785:1) blocks:0 out_of_time:0 signal_budget: 0 np:0 sleep:1 preempt:0 state:0 sig:0 </div><div><b>91577 P0 [job_completion@litmus/sched_gsn_edf.c:423]: (aux_threads/1785:1) job_completion(). // 1785 calls job_completion() and wants to give up P0.</b></div><div>91578 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:218]: (aux_threads/1785:2) linked = (nil)/0</div><div>91579 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:219]: (aux_threads/1785:2) entry->linked = aux_threads/1785</div><div>91580 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:263]: NULL linked to 0. 
</div><div>91583 P0 [__add_release@litmus/rt_domain.c:353]: (aux_threads/1785:2) add_release(), rel=62349575697</div><div>91584 P0 [check_for_preemptions@litmus/sched_gsn_edf.c:365]: check_for_preemptions: attempting to link task 1789 to 0</div><div>91585 P0 [gsnedf_get_nearest_available_cpu@litmus/sched_gsn_edf.c:347]: P0 is closest available CPU to P0 </div><div>91586 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:218]: (aux_threads/1785:2) linked = aux_threads/1789</div><div>91587 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:219]: (aux_threads/1785:2) entry->linked = (nil)/0</div><div><b>91588 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:261]: (aux_threads/1789:1) linked to 0. // 1789 linked to P0</b></div><div>91593 P0 [preempt_if_preemptable@litmus/sched_plugin.c:36]: (aux_threads/1785:2) preempt_if_preemptable: aux_threads/1785</div><div>91594 P0 [preempt_if_preemptable@litmus/sched_plugin.c:45]: (aux_threads/1785:2) preempt local cpu.</div><div>91595 P0 [preempt_if_preemptable@litmus/sched_plugin.c:72]: (aux_threads/1785:2) calling litmus_reschedule()</div><div>91596 P0 [litmus_reschedule@litmus/preempt.c:84]: (aux_threads/1785:2) cpu 0: picked_transition_ok = 0 (static)</div><div>91597 P0 [litmus_reschedule@litmus/preempt.c:96]: (aux_threads/1785:2) cpu 0: scheduled_transition_ok = 0 (static)</div><div>91598 P0 [litmus_reschedule@litmus/preempt.c:113]: SCHED_STATE litmus_reschedule picked-ok:0 sched-ok:0</div><div>91600 P0 [gsnedf_schedule@litmus/sched_gsn_edf.c:914]: (aux_threads/1789:1) scheduled_on = P0</div><div>91601 P0 [gsnedf_schedule@litmus/sched_gsn_edf.c:919]: (aux_threads/1785:2) scheduled_on = NO_CPU</div><div><b>91602 P0 [set_sched_state@include/litmus/preempt.h:75]: SCHED_STATE [P0] 0x4 (WILL_SCHEDULE) -> 0x8 (TASK_PICKED) // 1789 picked to run.</b></div><div><b>91603 P0 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1789:1) tick 61850029047 // 1789 is scheduled, but P0 has not transitioned to TASK_SCHEDULED</b></div><div>91604 P1 
[gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1786:1) tick 61850150117</div><div>91605 P2 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1787:1) tick 61850278487</div><div>91606 P3 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1788:1) tick 61850399157</div><div><b>91607 P0 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1789:1) tick 61851025147 // 1789 still scheduled. Still no transition.</b></div><div>91608 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1786:1) tick 61851149077</div><div>91609 P2 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1787:1) tick 61851278397</div><div><br></div></div><div>… lots and lots of ticks …</div><div><br></div><div><div><b>93599 P0 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1789:1) tick 62349024417 // 1789 still ticking on P0</b></div><div>93600 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1786:1) tick 62349148547</div><div>93601 P2 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1787:1) tick 62349278407</div><div>93602 P3 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1788:1) tick 62349398267</div><div><b>93603 P0 [gsnedf_release_jobs@litmus/sched_gsn_edf.c:401]: (aux_threads/1789:1) gsnedf_release_jobs // new higher prio job (1785) released</b></div><div>93606 P0 [check_for_preemptions@litmus/sched_gsn_edf.c:365]: check_for_preemptions: attempting to link task 1785 to 0</div><div>93607 P0 [gsnedf_get_nearest_available_cpu@litmus/sched_gsn_edf.c:347]: Could not find an available CPU close to P0</div><div>93608 P0 [__add_ready@litmus/rt_domain.c:312]: rt: adding aux_threads/1789 (0, 4611686018427387903, 4611686018427387903) [inh_task: (nil)/0 (0, 0 0)] rel=61849803767 to ready queue at 62349602827</div><div>93610 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:218]: (aux_threads/1789:1) linked = aux_threads/1785</div><div><b>93611 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:219]: (aux_threads/1789:1) entry->linked = 
aux_threads/1789</b></div><div><b>93612 P0 [link_task_to_cpu@litmus/sched_gsn_edf.c:261]: (aux_threads/1785:2) linked to 0.</b></div><div>93616 P0 [preempt_if_preemptable@litmus/sched_plugin.c:36]: (aux_threads/1789:1) preempt_if_preemptable: aux_threads/1789</div><div>93617 P0 [preempt_if_preemptable@litmus/sched_plugin.c:45]: (aux_threads/1789:1) preempt local cpu.</div><div>93618 P0 [preempt_if_preemptable@litmus/sched_plugin.c:72]: (aux_threads/1789:1) calling litmus_reschedule()</div><div><b>93619 P0 [sched_state_transition_on@include/litmus/preempt.h:99]: SCHED_STATE [P0] 0x8 (TASK_PICKED) -> 0x10 (PICKED_WRONG_TASK) // SCHED_STATE of P0 bad? Shouldn't it have been TASK_SCHEDULED?</b></div><div>93620 P0 [litmus_reschedule@litmus/preempt.c:81]: (aux_threads/1789:1) cpu 0: picked_transition_ok = 1</div><div>93621 P0 [litmus_reschedule@litmus/preempt.c:96]: (aux_threads/1789:1) cpu 0: scheduled_transition_ok = 0 (static)</div><div>93622 P0 [litmus_reschedule@litmus/preempt.c:113]: SCHED_STATE litmus_reschedule picked-ok:1 sched-ok:0</div><div>93623 P0 [__edf_higher_prio@litmus/edf_common.c:95]: (aux_threads/1789:1) aux_threads/1789 >> aux_threads/1788 --- 0</div><div><b>93624 P0 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1789:1) tick 62350024927 // 1789 still ticking away. >:(</b></div><div>93625 P1 [gsnedf_tick@litmus/sched_gsn_edf.c:474]: (aux_threads/1786:1) tick 62350149077</div></div><br></div></blockquote></div><br><div><br></div><div>All right. I believe I am getting closer to understanding what is going wrong here. I think I can generalize it enough that others may be able to understand it from a higher level. Perhaps they can shed additional light on the subject.</div><div><br></div><div>Things are breaking because I am exercising Litmus in a new way: A parent thread P forces a child thread C to become real-time. 
C does not make the request to become real-time itself.</div><div><br></div><div>Here is the sequence of events leading to the corrupted CPU state:</div><div>1) P forks C.</div><div>2) P forces C to become real-time.</div><div>3) P yields its CPU.</div><div>4) C is scheduled, but the CPU state becomes corrupted. The CPU state is TASK_PICKED when it should have transitioned to TASK_SCHEDULED.</div><div>5) P's next job is released, is linked, but never scheduled.</div><div><br></div><div>I believe the problem lies between steps 1 and 2. While the fork has completed with respect to P, C has not completed all post-fork() processing; specifically, it has not yet called, or is in the middle of executing, schedule_tail().</div><div><br></div><div>Here is schedule_tail() (with my experimental sched_state_validate_switch() check added):</div><div><br></div><div><div>asmlinkage void schedule_tail(struct task_struct *prev)</div><div> __releases(rq->lock)</div><div>{</div><div> struct rq *rq = this_rq();</div><div><br></div><div> finish_task_switch(rq, prev);</div><div><br></div><div> /* </div><div> * FIXME: do we need to worry about rq being invalidated by the</div><div> * task_switch?</div><div> */</div><div> post_schedule(rq);</div><div><br></div><div> if (sched_state_validate_switch()) {</div><div> TRACE_CUR("cpu %d: CRAP!\n", smp_processor_id());</div><div> } </div><div><br></div><div>#ifdef __ARCH_WANT_UNLOCKED_CTXSW</div><div> /* In this case, finish_task_switch does not reenable preemption */</div><div> preempt_enable();</div><div>#endif</div><div> if (current->set_child_tid)</div><div> put_user(task_pid_vnr(current), current->set_child_tid);</div><div>}</div></div><div><br></div><div><br></div><div>Observe the call sequence: finish_task_switch(), post_schedule(), preempt_enable().</div><div><br></div><div>We see a similar sequence in schedule(): finish_task_switch() (call made within context_switch()), post_schedule(), <b>sched_state_validate_switch()</b>, 
preempt_enable_no_resched().</div><div><br></div><div><b>sched_state_validate_switch()</b> is part of the Litmus patch and is what transitions the CPU state from TASK_PICKED to TASK_SCHEDULED. The function is called in schedule(), but not schedule_tail().</div><div><br></div><div>SO: <b>Should we call sched_state_validate_switch() in schedule_tail()?</b> What do we do if sched_state_validate_switch() fails? In schedule(), if sched_state_validate_switch() fails, we perform a retry. Can we silently ignore a failure from sched_state_validate_switch()? Also note that preempt_enable() is ifdef'ed. I don't think it is called on x86 because this code appears to execute with preemption enabled. I get "BUG: using smp_processor_id() in preemptible" errors when I call smp_processor_id() within schedule_tail(). Unfortunately, I believe sched_state_validate_switch() assumes preemption is disabled so it can rule out the possibility of migrations mid-execution. We may have to map the run queue whose lock is held in schedule_tail() to a CPU and then transition the state of that CPU remotely. Is that safe? Do we need a whole new approach to handle schedule_tail()?</div><div><br></div><div>Thanks,</div><div>Glenn</div><div><br></div></body></html>
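P.S. The "map the run queue to a CPU" idea could look roughly like the sketch below. To be clear, this is untested: litmus_schedule_tail_fixup() is a name I just made up, while sched_state_transition_on() is the existing transition helper from include/litmus/preempt.h and cpu_of()/set_tsk_need_resched() are the standard kernel helpers.

```c
/* UNTESTED sketch.  Would be called from schedule_tail() while
 * rq->lock is still held, so the CPU is derived from the run queue
 * rather than from smp_processor_id() (which BUGs here because this
 * path appears to run with preemption enabled). */
static void litmus_schedule_tail_fixup(struct rq *rq)
{
	int cpu = cpu_of(rq);

	/* Complete TASK_PICKED -> TASK_SCHEDULED for the freshly forked
	 * child, mirroring what sched_state_validate_switch() does at
	 * the end of schedule(). */
	if (!sched_state_transition_on(cpu, TASK_PICKED, TASK_SCHEDULED)) {
		/* A remote CPU already flagged PICKED_WRONG_TASK.  We
		 * cannot retry the way schedule() does, so mark the child
		 * so that schedule() is invoked again as soon as possible. */
		set_tsk_need_resched(current);
	}
}
```

Whether this is actually race-free against a concurrent litmus_reschedule() on another CPU is exactly the part I am unsure about.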