<div dir="ltr">Hi Glenn<div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Sep 27, 2013 at 8:23 PM, Glenn Elliott <span dir="ltr"><<a href="mailto:gelliott@cs.unc.edu" target="_blank">gelliott@cs.unc.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5"><br>
On Sep 27, 2013, at 7:17 PM, Andrea Bastoni <<a href="mailto:bastoni@sprg.uniroma2.it">bastoni@sprg.uniroma2.it</a>> wrote:<br>
<br>
> Hi Glenn,<br>
><br>
> On 09/27/2013 10:46 PM, Glenn Elliott wrote:<br>
>> I've been getting more acquainted with the latest litmus code and I see that<br>
>> GSN-EDF/C-EDF prefer to schedule a task on the local CPU if the CPU is<br>
>> available. This avoids an IPI. However, this may result in higher cache<br>
>> migration costs because this scheduling decision is made before checking CPU<br>
>> affinity.<br>
><br>
> Umm, what is the use case you have in mind? Evaluating a local condition is<br>
> relatively cheap (as you already have all the locks you need). Preempting a remote<br>
> CPU is more expensive (locks to be reacquired, interrupt to be sent and<br>
> received, etc.).<br>
><br>
>> Are IPI costs (significantly) higher than cache affinity loss?<br>
><br>
> What's your working set size, what's the load of the system, what's your<br>
> architecture?<br>
><br>
> Also, an IPI will cause an interrupt on the destination CPU, thus disturbing the<br>
> execution of the task already running there. This perturbation may be completely<br>
> pointless in the case the task is eventually not scheduled on the CPU being<br>
> preempted by the IPI.<br>
><br>
>> Are we making<br>
>> the assumption that affinity has already been lost due to cache polluters?<br>
><br>
> Are you assuming worst-case conditions?<br>
><br>
> Thanks,<br>
> - Andrea<br>
><br>
>> Do we have empirical data to support the current implementation? I'm not<br>
>> arguing in favor of one method or the other---I am just interested in the<br>
>> motivations behind the changes.<br>
>><br>
>> Thanks,<br>
>> Glenn<br>
><br>
> --<br>
> Andrea Bastoni, PhD <<a href="mailto:bastoni@sprg.uniroma2.it">bastoni@sprg.uniroma2.it</a>><br>
> Dept. of Computer Science, Systems, and Industrial Engineering<br>
> University of Rome "Tor Vergata", Via del Politecnico, 1 - 00133 Rome<br>
><br>
<br>
<br>
</div></div>Hi Andrea,<br>
<br>
Thank you for the deeper explanation. I certainly appreciate the simplicity of local scheduling. I will be investigating response times of cache-heavy tasks under G-EDF/C-EDF over the next few weeks. I'll report back if I see any noticeable trade-offs between local-first and affinity-first scheduling.<br>
<span class=""><font color="#888888"><br>
-Glenn<br>
</font></span><div class=""><div class="h5">
</div></div></blockquote></div><br>I don't have much experience with the behavior of LITMUS-RT, but I have
seen some cases of high latency in PREEMPT_RT for real-time tasks that the
scheduler chose to migrate to an idle processor before waking them up.<br><br>This issue happens on systems that use
mwait_idle as the cpu_idle routine; mwait_idle is the idle function used on
processors with X86_FEATURE_MWAIT.<br><br>In these cases, when the processor is in the
idle state (these processors are woken up through the power-management/idle
path, not by the scheduler IPI), it sometimes shows a high latency to wake up.<br><br>
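To make the mechanism a bit more concrete, below is a rough, hand-written
sketch (simplified and from memory, not the actual code in arch/x86 or
kernel/sched) of what an MWAIT-based idle loop and the corresponding wake-up
path look like. The point is that, while the idle CPU is polling on its
thread flags, the waker only needs to write TIF_NEED_RESCHED into the
monitored cacheline, so no reschedule IPI is sent; the wake-up time then
depends on how quickly the hardware leaves the idle state.<br>
<pre>
/*
 * Simplified sketches, for illustration only; not the real
 * mwait_idle()/resched_task() from the kernel tree.
 */
#include <linux/sched.h>   /* need_resched(), set_tsk_need_resched(), ... */
#include <linux/smp.h>     /* smp_send_reschedule() */
#include <asm/mwait.h>     /* __monitor(), __mwait() */

/* Idle side: arm MONITOR on this CPU's thread flags and sleep in MWAIT. */
static void mwait_idle_sketch(void)
{
        while (!need_resched()) {
                __monitor((void *)&current_thread_info()->flags, 0, 0);
                smp_mb();
                if (need_resched())
                        break;
                /*
                 * Sleep until the monitored cacheline is written.  How
                 * long it takes to come back from here depends on the
                 * idle state the hardware entered.
                 */
                __mwait(0, 0);
        }
}

/*
 * Waker side: if the idle task is polling its flags (TIF_POLLING_NRFLAG),
 * setting TIF_NEED_RESCHED is enough to bring the CPU out of MWAIT and no
 * reschedule IPI is needed; otherwise fall back to the IPI.
 */
static void wake_remote_cpu_sketch(struct task_struct *idle_task, int cpu)
{
        set_tsk_need_resched(idle_task);
        smp_mb();
        if (!test_tsk_thread_flag(idle_task, TIF_POLLING_NRFLAG))
                smp_send_reschedule(cpu);
}
</pre>
So an mwait-idle CPU is brought back through this monitored write (the
power-management path I mentioned) rather than through smp_send_reschedule(),
and the time spent leaving the idle state is where I believe the extra
latency comes from.<br><br>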
Most of the time, the latency between the sched_wakeup tracepoint and the
sched_switch (context switch) tracepoint is under 40 us, but sometimes this
latency reaches around 120 us, considering only the cases where the idle task
does not block between the wakeup and leaving the processor.<br><br>I know
this most probably happens because of pm_idle, but it happens very often. On
the system where I observed this, the latency is lower when all the
processors are busy and the FIFO scheduler chooses to run the task on the
same processor.<br><br>
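Just so we are talking about the same two policies Glenn asked about, this is
how I picture the trade-off. It is a purely hypothetical sketch I wrote for
this mail: the struct, the fields and the function names are invented and do
not come from the LITMUS-RT sources.<br>
<pre>
#include <stdbool.h>

/* Invented state, just enough to express the decision. */
struct cpu_state {
        int  id;
        bool idle;                /* no real-time job linked here          */
        int  linked_prio;         /* priority of the job currently linked  */
};

/*
 * "Local-first": if the CPU making the scheduling decision can take the
 * job itself, do that and avoid the IPI, even if the task's cache
 * footprint lives on another CPU.
 */
static int pick_cpu_local_first(const struct cpu_state *local,
                                const struct cpu_state *cheapest_remote,
                                int new_prio)
{
        if (local->idle || new_prio > local->linked_prio)
                return local->id;          /* no IPI, possibly cold caches */
        return cheapest_remote->id;        /* IPI + remote preemption      */
}

/*
 * "Affinity-first": first try the CPU where the task is still cache-hot,
 * falling back to the local CPU; this may trade an extra IPI for fewer
 * cache misses after the task starts running.
 */
static int pick_cpu_affinity_first(const struct cpu_state *local,
                                   const struct cpu_state *cache_hot,
                                   int new_prio)
{
        if (cache_hot->idle || new_prio > cache_hot->linked_prio)
                return cache_hot->id;      /* possible IPI, warm caches    */
        if (local->idle || new_prio > local->linked_prio)
                return local->id;          /* no IPI, cold caches          */
        return -1;                         /* nowhere to preempt right now */
}
</pre>
Which of the two wins in practice (the saved IPI or the saved cache misses)
is exactly the empirical question, and, as Andrea pointed out, it will depend
on the working set size, the load, and the architecture.<br><br>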
I have not spent much time trying to understand this, since it is not the
main objective of my work, but it shows a case where, if you know that the
current processor is ready for the context switch, as Andrea said, with the
desired locks held and so on, it is better to wake the task up on the same
processor.<br clear="all"><br>-- <br>Daniel Bristot de Oliveira
</div></div>