[LITMUS^RT] Modified pfp_schedule for migration

Sebastiano Catellani zebganzo at gmail.com
Mon Jan 13 20:10:39 CET 2014


First of all, thanks for the answer. I know that it isn't correct to make a
task migrate from one CPU to another in a partitioned system, but this is a
prototype for a more complex scheduler that allows a task to migrate only
when particular conditions are met.

I understand my mistake. I will look under the hood and try to find out how
to call the migrate_to function without causing a deadlock.
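
To be concrete, the direction I'm thinking about is to acquire the two
per-CPU locks in a fixed global order (e.g. lowest CPU index first), so that
two CPUs migrating tasks towards each other can never end up waiting on each
other. Below is only a minimal user-space sketch of that ordering rule, using
plain pthread mutexes; none of the names (cpu_lock, lock_pair, migrate) are
real LITMUS^RT identifiers.

/*
 * Minimal user-space illustration of lock ordering between two
 * "per-CPU" locks. Not LITMUS^RT code; just the general technique.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 2

static pthread_mutex_t cpu_lock[NR_CPUS] = {
	PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER
};

/* Always take both locks in ascending index order. */
static void lock_pair(int a, int b)
{
	if (a < b) {
		pthread_mutex_lock(&cpu_lock[a]);
		pthread_mutex_lock(&cpu_lock[b]);
	} else {
		pthread_mutex_lock(&cpu_lock[b]);
		pthread_mutex_lock(&cpu_lock[a]);
	}
}

static void unlock_pair(int a, int b)
{
	pthread_mutex_unlock(&cpu_lock[a]);
	pthread_mutex_unlock(&cpu_lock[b]);
}

/* Pretend to move a task from CPU 'from' to the other CPU. */
static void *migrate(void *arg)
{
	int from = *(int *)arg;
	int to = 1 - from;
	int i;

	for (i = 0; i < 100000; i++) {
		lock_pair(from, to);
		/* ... dequeue from 'from', enqueue on 'to' ... */
		unlock_pair(from, to);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[NR_CPUS];
	int ids[NR_CPUS] = {0, 1};
	int i;

	for (i = 0; i < NR_CPUS; i++)
		pthread_create(&t[i], NULL, migrate, &ids[i]);
	for (i = 0; i < NR_CPUS; i++)
		pthread_join(t[i], NULL);

	printf("no deadlock\n");
	return 0;
}

The alternative, if I understand Björn's point correctly, might be to not
nest the locks at all, i.e. only record the migration target inside
pfp_schedule() and call pfp_migrate_to() once the per-CPU state lock has
been released.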

Best regards,
Sebastiano



2014/1/12 Björn Brandenburg <bbb at mpi-sws.org>

>
> On 10 Jan 2014, at 19:29, Sebastiano Catellani <
> sebastiano.catellani at gmail.com> wrote:
>
> > I'm modifying the pfp_schedule function in the sched_pfp.c file. The
> > behavior that I want to obtain is to migrate a task from one CPU to
> > another every time it is preempted by a higher-priority task.
> >
> > With the following task set, I'd like p2 to migrate to the second CPU
> > when it is preempted by p1 and to migrate back to the first CPU when
> > preempted by p3.
> >
> > -p 1 -z 1 -q 3 5 20 //(p1)
> > -p 1 -z 1 -q 4 5 18 //(p2)
> > -p 2 -z 1 -q 3 3 20 //(p3)
>
>
> Dear Sebastiano,
>
> the behavior that you describe sounds like a global scheduler. P-FP is a
> partitioned scheduler and not a good starting point to develop a global
> scheduler. If indeed you want to develop a global-like scheduler, please
> have a look at GSN-EDF instead.
>
> > This is the error I get when I execute the run_exps.py script:
> >
> > =============================================
> > [ INFO: possible recursive locking detected ]
> > 3.10.5-litmus2013.1 #101 Not tainted
> > ---------------------------------------------
> > rtspin/1586 is trying to acquire lock:
> >  (&rt->ready_lock){......}, at: [<ffffffff81299940>] pfp_migrate_to+0x80/0x120
> >
> > but task is already holding lock:
> >  (&rt->ready_lock){......}, at: [<ffffffff8129ba35>] pfp_schedule+0x35/0xab0
> >
> > other info that might help us debug this:
> >  Possible unsafe locking scenario:
> >
> >        CPU0
> >        ----
> >   lock(&rt->ready_lock);
> >   lock(&rt->ready_lock);
> >
> >  *** DEADLOCK ***
>
> Locking in a kernel such as Linux (and hence LITMUS^RT) is not
> transparent, meaning you need to understand which locks a function that you
> call may acquire. You can't just call a function that acquires locks from
> any context. Rather, you need to look at the code to understand when it is
> safe to call it.
>
> In your specific case, pfp_migrate_to() acquires per-CPU state locks. This
> means you can't call it while holding a per-CPU state lock (such as within
> the schedule() callback) unless you somehow make sure that deadlock is
> impossible. Linux's lockdep debugging aid tells you exactly this.
>
> Best regards,
> Björn
>
>
> _______________________________________________
> litmus-dev mailing list
> litmus-dev at lists.litmus-rt.org
> https://lists.litmus-rt.org/listinfo/litmus-dev
>