[LITMUS^RT] Modified pfp_schedule for migration

Björn Brandenburg bbb at mpi-sws.org
Sun Jan 12 10:17:06 CET 2014


On 10 Jan 2014, at 19:29, Sebastiano Catellani <sebastiano.catellani at gmail.com> wrote:

> I'm modifying the pfp_schedule function in sched_pfp.c. The behavior I want to obtain is to migrate a task from one CPU to another every time it is preempted by a higher-priority task.
> 
> With the following task set, I'd like p2 to migrate to the second CPU when it is preempted by p1 and to migrate back to the first CPU when it is preempted by p3.
> 
> -p 1 -z 1 -q 3 5 20 //(p1)
> -p 1 -z 1 -q 4 5 18 //(p2)
> -p 2 -z 1 -q 3 3 20 //(p3)


Dear Sebastiano,

The behavior you describe sounds like that of a global scheduler. P-FP is a partitioned scheduler and not a good starting point for developing a global scheduler. If you indeed want to develop a global-like scheduler, please have a look at GSN-EDF instead.

> This is the error I get when I execute the run_exps.py script:
> 
> =============================================
> [ INFO: possible recursive locking detected ]
> 3.10.5-litmus2013.1 #101 Not tainted
> ---------------------------------------------
> rtspin/1586 is trying to acquire lock:
>  (&rt->ready_lock){......}, at: [<ffffffff81299940>] pfp_migrate_to+0x80/0x120
> 
> but task is already holding lock:
>  (&rt->ready_lock){......}, at: [<ffffffff8129ba35>] pfp_schedule+0x35/0xab0
> 
> other info that might help us debug this:
>  Possible unsafe locking scenario:
> 
>        CPU0
>        ----
>   lock(&rt->ready_lock);
>   lock(&rt->ready_lock);
> 
>  *** DEADLOCK ***

Locking in a kernel such as Linux (and hence in LITMUS^RT) is not transparent: you need to understand which locks a function you call may acquire. You can't just call a function that acquires locks from any context; rather, you need to look at the code to understand when it is safe to call it.

In your specific case, pfp_migrate_to() acquires per-CPU state locks. This means you can't call it while holding a per-CPU state lock (such as within the schedule() callback) unless you somehow make sure that deadlock is impossible. Linux's lockdep debugging aid tells you exactly this.
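
To make the lockdep report concrete, here is a minimal sketch of the pattern it is complaining about, with the per-CPU state heavily simplified (toy_schedule() and toy_migrate_to() are hypothetical stand-ins for illustration, not the actual plugin code):

/* Minimal sketch of the recursive-locking pattern lockdep reports
 * (simplified; not the actual LITMUS^RT sources). */
#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_RAW_SPINLOCK(ready_lock);	/* stands in for a CPU's rt->ready_lock */

/* Hypothetical migration helper: it (re-)acquires the per-CPU state lock. */
static void toy_migrate_to(struct task_struct *t, int target_cpu)
{
	raw_spin_lock(&ready_lock);
	/* ... requeue t in target_cpu's ready queue ... */
	raw_spin_unlock(&ready_lock);
}

/* Toy schedule() callback: it already holds the lock when it calls the helper. */
static struct task_struct *toy_schedule(struct task_struct *prev)
{
	raw_spin_lock(&ready_lock);	/* lock(&rt->ready_lock) ...              */
	/* prev was preempted -> try to migrate it away */
	toy_migrate_to(prev, 1);	/* ... lock(&rt->ready_lock) again: deadlock */
	raw_spin_unlock(&ready_lock);
	return NULL;
}

If you do want to trigger a migration from within the schedule callback, the decision and the requeue have to be separated in some way, for example by recording the intended target CPU while holding the lock and performing the actual requeue only after the per-CPU lock has been released, or from a context that does not already hold per-CPU state locks.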

Best regards,
Björn




