[LITMUS^RT] A question about feather-trace tool

Shuai Zhao zs673 at york.ac.uk
Sun Feb 19 21:32:17 CET 2017


Hi Björn,

I am a student at the University of York, currently working with Alan and Andy
on the nested behaviour of MrsP.

We now have a full implementation of nested MrsP under the LITMUS^RT P-FP
scheduler and are trying to evaluate the overheads of the implementation.

We use the Feather-Trace tool to record the overheads of scheduling (which
includes the P-FP schedule function), context switching (which includes the
finish_switch function), and the mrsp_lock and mrsp_unlock functions.
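
Conceptually, each traced region is simply bracketed by a pair of cycle-counter
timestamps and the overhead is their difference. The sketch below is only my
user-space illustration of that measurement model (the real TS_* macros record
events into the Feather-Trace buffers rather than printing, and I am assuming
rdtsc as the cycle source here):

#include <stdint.h>
#include <stdio.h>

/* Illustration only: read the x86 time-stamp counter. */
static inline uint64_t read_cycles(void)
{
	uint32_t lo, hi;
	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t) hi << 32) | lo;
}

int main(void)
{
	uint64_t start, end;

	start = read_cycles();   /* plays the role of TS_LOCK_START */
	/* ... region whose overhead is being measured ... */
	end = read_cycles();     /* plays the role of TS_LOCK_END */

	printf("overhead: %llu cycles\n", (unsigned long long) (end - start));
	return 0;
}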

During the evaluation, we fixed the CPU clock speed, bound interrupts to CPU 0,
and isolated the other CPUs for testing, in order to minimise interference from
the rest of the system.
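
Each benchmark task is also restricted to one of the isolated CPUs. The
following is just a generic sketch of pinning a task to a CPU (the CPU id 1 is
an example value; under P-FP the partition is of course assigned through the
real-time task parameters, so this is only for illustration):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Pin the calling task to a single (isolated) CPU. */
static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		exit(1);
	}
}

int main(void)
{
	pin_to_cpu(1);	/* example: run the test on isolated CPU 1 */
	/* ... run the measured workload here ... */
	return 0;
}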


However, the results seem weird. Using mrsp_lock as an example: we evaluated
the overhead of the following code using the timestamps TS_LOCK_START and
TS_LOCK_END.

TS_LOCK_START;

if (t->rt_param.task_params.helper == NULL) {
	/* Pick whichever of the per-CPU ceiling and the task's own
	 * priority is stronger (numerically smaller). */
	t->rt_param.task_params.priority =
		sem->prio_per_cpu[get_partition(t)] < get_priority(t) ?
		sem->prio_per_cpu[get_partition(t)] : get_priority(t);
	t->rt_param.task_params.migrated_time = -1;
}

/* Take the next FIFO ticket and append this task to the semaphore's queue. */
ticket = atomic_read(&sem->next_ticket);
t->rt_param.task_params.ticket = ticket;
atomic_inc(&sem->next_ticket);

add_task(t, &(sem->tasks_queue->next));
t->rt_param.task_params.requesting_lock = sem;

TS_LOCK_END;

The add_task() function used above is defined as follows:

void add_task(struct task_struct *task, struct list_head *head)
{
	/* Allocate a queue node for the requesting task and append it to
	 * the tail of the semaphore's task list. */
	struct task_list *taskPtr =
		(struct task_list *) kmalloc(sizeof(struct task_list), GFP_KERNEL);
	BUG_ON(taskPtr == NULL);

	taskPtr->task = task;
	INIT_LIST_HEAD(&taskPtr->next);
	list_add_tail(&taskPtr->next, head);
}
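
For reference, struct task_list in our patch is essentially just a node that
lets a task be linked into the semaphore's FIFO queue, roughly:

struct task_list {
	struct task_struct *task;	/* the task requesting the lock */
	struct list_head next;		/* link into sem->tasks_queue */
};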

We expected the overhead of the code above to be stable, as its time complexity
is O(1). However, the test results tell a different story, as shown below:

Overhead:       MRSP LOCK
Unit:           cycles
Samples:        149985
MAX:            20089
99.9th perc.:   2220.016
99th perc.:     1584
95th perc.:     1287
avg:            853.9508
med:            939
min:            319
std:            367.314
var:            134918.7

As we can see, the maximum overhead is 20089 cycles, which is far larger than
the median/average values and even the 99.9th percentile value.

I am confused by this result. Have you encountered this situation before? Is
there an explanation for a result like this, or is there a way to avoid it?
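
For completeness, the statistics in the table are computed from the exported
cycle samples in the usual way. The following is only an illustrative sketch
(not our actual post-processing script, and the demo values are placeholders)
of how the max, average and a percentile are obtained, so the columns above are
directly comparable:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_u64(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *) a;
	uint64_t y = *(const uint64_t *) b;
	return (x > y) - (x < y);
}

/* Report max, average and the p-th percentile of n cycle samples. */
static void report(uint64_t *samples, size_t n, double p)
{
	double sum = 0.0;
	size_t i, idx;

	qsort(samples, n, sizeof(*samples), cmp_u64);
	for (i = 0; i < n; i++)
		sum += (double) samples[i];

	idx = (size_t) (p / 100.0 * (double) (n - 1));
	printf("max %llu  avg %.2f  %.1fth perc. %llu\n",
	       (unsigned long long) samples[n - 1], sum / (double) n,
	       p, (unsigned long long) samples[idx]);
}

int main(void)
{
	uint64_t demo[] = { 10, 12, 11, 10, 500 };	/* placeholder samples */
	report(demo, sizeof(demo) / sizeof(demo[0]), 99.9);
	return 0;
}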

Thank you in advance.

Best wishes
Shuai