[LITMUS^RT] A question about feather-trace tool

Meng Xu xumengpanda at gmail.com
Sun Feb 19 22:27:26 CET 2017


On Sun, Feb 19, 2017 at 3:32 PM, Shuai Zhao <zs673 at york.ac.uk> wrote:
>
> Hi Björn


Hi,

Can I hijack the question? ;-)

>
>
> I am a student at the University of York, working with Alan and Andy on the nested behaviour of MrsP.
>
> We now have a full implementation of nested MrsP under the LITMUS^RT P-FP scheduler, and we are now trying to evaluate the implementation overheads.
>
> We use the Feather-Trace tool to measure the overheads of the scheduler (which includes the P-FP schedule function), the context switch (which includes the finish_switch function), and the mrsp_lock and mrsp_unlock functions.
>
> During the evaluation, we fixed the CPU clock speed, bound interrupts to CPU 0, and isolated the other CPUs for testing, to minimise interference from the system.


Did you disable the hardware prefetching mechanisms in the BIOS?
If not, you may want to disable them as well.
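
If your BIOS does not expose those switches, many Intel CPUs also let
you toggle the prefetchers from software via MSR 0x1A4
(MSR_MISC_FEATURE_CONTROL). Below is a minimal sketch using the Linux
msr driver; this MSR is Intel-specific and model-dependent, so check
your CPU's documentation before relying on it:

/* Sketch: disable the four hardware prefetchers on CPU 0 via the
 * Linux msr driver (requires 'modprobe msr' and root privileges).
 * On documented Intel cores, bits 0-3 of MSR 0x1A4 disable the
 * L2 HW, L2 adjacent-line, DCU streamer, and DCU IP prefetchers. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint64_t val = 0xf;
        int fd = open("/dev/cpu/0/msr", O_WRONLY);

        if (fd < 0) {
                perror("open /dev/cpu/0/msr");
                return 1;
        }
        /* for the msr device, the file offset selects the MSR number */
        if (pwrite(fd, &val, sizeof(val), 0x1a4) != sizeof(val)) {
                perror("pwrite");
                return 1;
        }
        close(fd);
        return 0;
}

Repeat this for each isolated CPU you measure on.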


>
> However, the result seems weird. Take mrsp_lock as an example: we measured the overhead of the following code between the timestamps "TS_LOCK_START" and "TS_LOCK_END".
>
> TS_LOCK_START;
>
> if (t->rt_param.task_params.helper == NULL) {
>         t->rt_param.task_params.priority =
>                 sem->prio_per_cpu[get_partition(t)] < get_priority(t) ?
>                 sem->prio_per_cpu[get_partition(t)] : get_priority(t);
>         t->rt_param.task_params.migrated_time = -1;
> }
>
> ticket = atomic_read(&sem->next_ticket);
> t->rt_param.task_params.ticket = ticket;
> atomic_inc(&sem->next_ticket);
>
> add_task(t, &(sem->tasks_queue->next));
> t->rt_param.task_params.requesting_lock = sem;
>
> TS_LOCK_END;
>
> The add_task() function is defined as follows:
>
> void add_task(struct task_struct *task, struct list_head *head) {
>         struct task_list *taskPtr = (struct task_list *)
>                 kmalloc(sizeof(struct task_list), GFP_KERNEL);


I suspect the variance comes from the kmalloc(): its latency varies a
lot. The fast path hits the per-CPU slab cache, but on a miss it falls
back to the page allocator (or even memory reclaim), which easily
accounts for multi-thousand-cycle outliers.

You should avoid kmalloc() in the critical section. One approach is to
pre-allocate the space and just reinitialize it each time before use.
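
For example, something along these lines. This is only a sketch: the
queue_node field is a hypothetical addition to rt_param.task_params,
not an existing LITMUS^RT structure.

/* Allocate the list node once, outside any critical section,
 * e.g. when the task is admitted (and free it when the task exits): */
int mrsp_admit_task(struct task_struct *t)
{
        struct task_list *node = kmalloc(sizeof(*node), GFP_KERNEL);

        if (!node)
                return -ENOMEM;
        t->rt_param.task_params.queue_node = node; /* hypothetical field */
        return 0;
}

/* add_task() then becomes allocation-free, so its cost should be
 * small and stable: */
void add_task(struct task_struct *task, struct list_head *head)
{
        struct task_list *node = task->rt_param.task_params.queue_node;

        node->task = task;
        INIT_LIST_HEAD(&node->next);
        list_add_tail(&node->next, head);
}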

You can also bracket add_task() with its own timestamps and measure
the time spent in that function alone, to validate this speculation.
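
For example (TS_ADD_TASK_START/END are hypothetical macros that you
would define alongside the existing timestamp pairs in litmus/trace.h):

TS_LOCK_START;
...
TS_ADD_TASK_START;  /* hypothetical: brackets only the suspected call */
add_task(t, &(sem->tasks_queue->next));
TS_ADD_TASK_END;
...
TS_LOCK_END;

If the add_task() samples show the same heavy tail, the allocation is
the culprit.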


>
>         BUG_ON(taskPtr == NULL);
>
>         taskPtr->task = task;
>         INIT_LIST_HEAD(&taskPtr->next);
>         list_add_tail(&taskPtr->next, head);
> }
>
> We expected the overhead of the code above to be stable, since its time complexity is O(1). However, the test results tell a different story, as shown below:
>
> Overhead   Unit    Samples  MAX    99.9th perc.  99th perc.  95th perc.  avg       med  min  std      var
> MRSP LOCK  cycles  149985   20089  2220.016      1584        1287        853.9508  939  319  367.314  134918.7


This is a bit difficult to read. Next time you send results, could you
please format them with one statistic per line? ;-)


>
>
> As we can see, the maximum overhead we observed is 20089 cycles, which is far larger than the median/average and even the 99.9th percentile value.
>
> I am confused by this result. Have you encountered this situation before? Is there an explanation for a result like this, or a way to avoid it?


You should be able to avoid this by moving the kmalloc() out of the
critical section, as described above.

Best,

Meng


-- 
-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


