<div dir="ltr">Hi <span class="" id=":wr.1" tabindex="-1">Meng</span><div><br></div><div>Thank you for your fast respond.</div><div><br></div><div>Yes, the CPU <span class="" id=":wr.2" tabindex="-1">prefetch</span> is already disabled. But there isn't any other options of the hardware <span class="" id=":wr.3" tabindex="-1">prefetching</span>. But I guess its should be OK.</div><div><br></div><div>The <span class="" id=":wr.4" tabindex="-1">kmalloc</span>() function can be one of the reasons. I will adjust the code. </div><div><br></div><div>BTW, is there any other features or facilities that could be disabled to minimise the system interferences?</div><div><br></div><div>Best wishes</div><div><span class="" id=":wr.5" tabindex="-1">Shuai</span></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 19 February 2017 at 21:27, <span dir="ltr"><<a href="mailto:litmus-dev-request@lists.litmus-rt.org" target="_blank">litmus-dev-request@lists.litmus-rt.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send litmus-dev mailing list submissions to<br>
<a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.litmus-rt.org/listinfo/litmus-dev" rel="noreferrer" target="_blank">https://lists.litmus-rt.org/<wbr>listinfo/litmus-dev</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:litmus-dev-request@lists.litmus-rt.org">litmus-dev-request@lists.<wbr>litmus-rt.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:litmus-dev-owner@lists.litmus-rt.org">litmus-dev-owner@lists.litmus-<wbr>rt.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of litmus-dev digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. A question about feather-trace tool (Shuai Zhao)<br>
2. Re: A question about feather-trace tool (Shuai Zhao)<br>
3. Re: A question about feather-trace tool (Meng Xu)<br>
<br>
<br>
------------------------------<wbr>------------------------------<wbr>----------<br>
<br>
Message: 1<br>
Date: Sun, 19 Feb 2017 20:32:17 +0000<br>
From: Shuai Zhao <<a href="mailto:zs673@york.ac.uk">zs673@york.ac.uk</a>><br>
To: <a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br>
Subject: [LITMUS^RT] A question about feather-trace tool<br>
Message-ID:<br>
<<a href="mailto:CAA133hO%2B_4Svyx6dJUyUn_rsnY50AfUVxLyOHKcsUN6%2BvG8UiQ@mail.gmail.com">CAA133hO+_4Svyx6dJUyUn_<wbr>rsnY50AfUVxLyOHKcsUN6+vG8UiQ@<wbr>mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hi Björn<br>
<br>
I am a student at the University of York, currently working with Alan and Andy<br>
to study the nested behaviour of MrsP.<br>
<br>
We now have a full implementation of nested MrsP under the LITMUS^RT P-FP<br>
scheduler, and we are now evaluating the overheads of the implementation.<br>
<br>
We use the Feather-Trace tool to trace the overheads of the scheduler (which<br>
includes the P-FP schedule function), the context switch (which includes the<br>
finish_switch function), and the mrsp_lock and mrsp_unlock functions.<br>
<br>
During evaluation, we fixed the CPU clock speed, bound interrupts to CPU 0,<br>
and isolated the other CPUs for testing, to minimise the interference from<br>
the system.<br>
<br>
<br>
However, the results seem weird. Using mrsp_lock as an example: we evaluated<br>
the overhead of the following code using the timestamps "TS_LOCK_START" and<br>
"TS_LOCK_END".<br>
<br>
TS_LOCK_START;<br>
<br>
if (t->rt_param.task_params.helper == NULL) {<br>
        t->rt_param.task_params.priority =<br>
                sem->prio_per_cpu[get_partition(t)] < get_priority(t) ?<br>
                sem->prio_per_cpu[get_partition(t)] : get_priority(t);<br>
        t->rt_param.task_params.migrated_time = -1;<br>
}<br>
<br>
ticket = atomic_read(&sem->next_ticket);<br>
t->rt_param.task_params.ticket = ticket;<br>
atomic_inc(&sem->next_ticket);<br>
<br>
add_task(t, &(sem->tasks_queue->next));<br>
t->rt_param.task_params.requesting_lock = sem;<br>
<br>
TS_LOCK_END;<br>
<br>
Where function add_task() is as follows:<br>
<br>
void add_task(struct task_struct* task, struct list_head *head) {<br>
        struct task_list *taskPtr =<br>
                (struct task_list *) kmalloc(sizeof(struct task_list), GFP_KERNEL);<br>
        BUG_ON(taskPtr == NULL);<br>
<br>
        taskPtr->task = task;<br>
        INIT_LIST_HEAD(&taskPtr->next);<br>
        list_add_tail(&taskPtr->next, head);<br>
}<br>
<br>
We expected the overheads of the code above to be stable, as the time<br>
complexity is O(1). However, the test results tell a different story, as<br>
shown below:<br>
<br>
Overhead:      MRSP LOCK<br>
Unit:          cycles<br>
Samples:       149985<br>
MAX:           20089<br>
99.9th perc.:  2220.016<br>
99th perc.:    1584<br>
95th perc.:    1287<br>
avg:           853.9508<br>
med:           939<br>
min:           319<br>
std:           367.314<br>
var:           134918.7<br>
<br>
As we can see, the maximum overhead is 20089 cycles, which is far bigger than<br>
the median/average values and even the 99.9th percentile value.<br>
<br>
I am confused by this result. Have you met this situation before? Is there<br>
any explanation for a result like this, or is there any way to avoid it?<br>
<br>
Thank you in advance.<br>
<br>
Best wishes<br>
Shuai<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Sun, 19 Feb 2017 20:37:52 +0000<br>
From: Shuai Zhao <<a href="mailto:zs673@york.ac.uk">zs673@york.ac.uk</a>><br>
To: <a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br>
Subject: Re: [LITMUS^RT] A question about feather-trace tool<br>
Message-ID:<br>
<<a href="mailto:CAA133hM4boX5fLX1%2BPERwbxtvymOFOonn0wwV2r3q9s3hehBog@mail.gmail.com">CAA133hM4boX5fLX1+<wbr>PERwbxtvymOFOonn0wwV2r3q9s3heh<wbr>Bog@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
PS: I commented out the original "TS_LOCK_START" and "TS_LOCK_END" timestamps<br>
in the sys_lock function. Thanks.<br>
<br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Sun, 19 Feb 2017 16:27:26 -0500<br>
From: Meng Xu <<a href="mailto:xumengpanda@gmail.com">xumengpanda@gmail.com</a>><br>
To: <a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br>
Subject: Re: [LITMUS^RT] A question about feather-trace tool<br>
Message-ID:<br>
<CAENZ-+mfQQ+-<wbr>5qxbJOyh2UEPFrhvPp2OzsB4dYXUG_<wbr>r=<a href="mailto:tNj95A@mail.gmail.com">tNj95A@mail.gmail.com</a>><br>
Content-Type: text/plain; charset=UTF-8<br>
<br>
On Sun, Feb 19, 2017 at 3:32 PM, Shuai Zhao <<a href="mailto:zs673@york.ac.uk">zs673@york.ac.uk</a>> wrote:<br>
><br>
> Hi Björn<br>
<br>
<br>
Hi,<br>
<br>
Can I hijack the question? ;-)<br>
<br>
><br>
><br>
> I am a student at the University of York, currently working with Alan and Andy to study the nested behaviour of MrsP.<br>
><br>
> We now have a full implementation of nested MrsP under the LITMUS^RT P-FP scheduler, and we are now evaluating the overheads of the implementation.<br>
><br>
> We use the Feather-Trace tool to trace the overheads of the scheduler (which includes the P-FP schedule function), the context switch (which includes the finish_switch function), and the mrsp_lock and mrsp_unlock functions.<br>
><br>
> During evaluation, we fixed the CPU clock speed, bound interrupts to CPU 0, and isolated the other CPUs for testing, to minimise the interference from the system.<br>
<br>
<br>
Did you disable the hardware prefetching mechanisms in the BIOS?<br>
Maybe you want to disable them as well.<br>
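<br>
If the BIOS does not expose a knob for them, on many Intel Core/Xeon parts the<br>
hardware prefetchers can also be toggled from software via MSR 0x1a4<br>
(MSR_MISC_FEATURE_CONTROL). Below is only a rough sketch of a tiny module that<br>
does this; the MSR and bit layout are model-specific, so please double-check<br>
Intel's documentation for your exact CPU before trying anything like it:<br>
<br>
#include <linux/module.h><br>
#include <linux/smp.h><br>
#include <asm/msr.h><br>
<br>
#define PREFETCH_CTRL_MSR     0x1a4   /* MSR_MISC_FEATURE_CONTROL on Intel */<br>
#define DISABLE_ALL_PREFETCH  0xfULL  /* bits 0-3: L2, L2 adjacent line, DCU, DCU IP */<br>
<br>
static void set_prefetch_msr(void *arg)<br>
{<br>
        /* runs on the local CPU; writes the requested prefetcher mask */<br>
        wrmsrl(PREFETCH_CTRL_MSR, *(u64 *)arg);<br>
}<br>
<br>
static int __init prefetch_off_init(void)<br>
{<br>
        u64 val = DISABLE_ALL_PREFETCH;<br>
        on_each_cpu(set_prefetch_msr, &val, 1);  /* disable on every core */<br>
        return 0;<br>
}<br>
<br>
static void __exit prefetch_off_exit(void)<br>
{<br>
        u64 val = 0;                             /* re-enable on unload */<br>
        on_each_cpu(set_prefetch_msr, &val, 1);<br>
}<br>
<br>
module_init(prefetch_off_init);<br>
module_exit(prefetch_off_exit);<br>
MODULE_LICENSE("GPL");<br>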
<br>
<br>
><br>
><br>
><br>
><br>
><br>
> However, the results seem weird. Using mrsp_lock as an example: we evaluated the overhead of the following code using the timestamps "TS_LOCK_START" and "TS_LOCK_END".<br>
><br>
> TS_LOCK_START;<br>
><br>
> if (t->rt_param.task_params.helper == NULL) {<br>
> t->rt_param.task_params.priority = sem->prio_per_cpu[get_partition(t)] < get_priority(t) ? sem->prio_per_cpu[get_partition(t)] : get_priority(t);<br>
> t->rt_param.task_params.migrated_time = -1;<br>
> }<br>
><br>
> ticket = atomic_read(&sem->next_ticket);<br>
> t->rt_param.task_params.ticket = ticket;<br>
> atomic_inc(&sem->next_ticket);<br>
><br>
> add_task(t, &(sem->tasks_queue->next));<br>
> t->rt_param.task_params.requesting_lock = sem;<br>
><br>
> TS_LOCK_END;<br>
><br>
> Where function add_task() is as follows:<br>
><br>
> void add_task(struct task_struct* task, struct list_head *head) {<br>
> struct task_list *taskPtr = (struct task_list *) kmalloc(sizeof(struct task_list), GFP_KERNEL);<br>
<br>
<br>
I guess the variance comes from the kmalloc().<br>
kmalloc() latency varies a lot.<br>
<br>
You should avoid kmalloc() in the critical section.<br>
One approach is to pre-allocate the space and re-initialise the values<br>
each time before you use it.<br>
<br>
You can also measure the overhead value spent in this function to<br>
validate my speculation.<br>
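<br>
For example, something along the lines of the sketch below. This is only an<br>
illustration of the idea, not your actual code: the queue_node field and the<br>
preallocate_task_node() hook are made-up names, and it assumes a task waits in<br>
at most one tasks_queue at a time -- with deeper MrsP nesting you would<br>
pre-allocate one node per nesting level rather than a single node per task.<br>
<br>
/* done once per task, outside the measured region, e.g. at task admission */<br>
static int preallocate_task_node(struct task_struct *t)<br>
{<br>
        struct task_list *node = kmalloc(sizeof(*node), GFP_KERNEL);<br>
        if (!node)<br>
                return -ENOMEM;<br>
        t->rt_param.task_params.queue_node = node;   /* hypothetical field */<br>
        return 0;<br>
}<br>
<br>
/* in the lock path: constant time, no allocation */<br>
void add_task(struct task_struct *t, struct list_head *head)<br>
{<br>
        struct task_list *node = t->rt_param.task_params.queue_node;<br>
<br>
        node->task = t;<br>
        INIT_LIST_HEAD(&node->next);<br>
        list_add_tail(&node->next, head);<br>
}<br>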
<br>
<br>
><br>
> BUG_ON(taskPtr == NULL);<br>
><br>
> taskPtr->task = task;<br>
> INIT_LIST_HEAD(&taskPtr->next);<br>
> list_add_tail(&taskPtr->next, head);<br>
> }<br>
><br>
> We expected the overheads of the code above to be stable, as the time complexity is O(1). However, the test results tell a different story, as shown below:<br>
><br>
> Overhead Overhead Unit Samples MAX 99.9th perc. 99perc. 95th perc. avg med min std var<br>
> MRSP LOCK cycles 149985 20089 2220.016 1584 1287 853.9508 939 319 367.314 134918.7<br>
<br>
<br>
This is a bit difficult to read. Next time when you send new<br>
results, could you please send them in columns? ;-)<br>
<br>
<br>
><br>
><br>
> As we can see, the maximum overhead is 20089 cycles, which is far bigger than the median/average values and even the 99.9th percentile value.<br>
><br>
> I am confused by this result. Have you met this situation before? Is there any explanation for a result like this, or is there any way to avoid it?<br>
<br>
<br>
You should be able to avoid this, as I mentioned above.<br>
<br>
Best,<br>
<br>
Meng<br>
<br>
<br>
--<br>
-----------<br>
Meng Xu<br>
PhD Student in Computer and Information Science<br>
University of Pennsylvania<br>
<a href="http://www.cis.upenn.edu/~mengxu/" rel="noreferrer" target="_blank">http://www.cis.upenn.edu/~<wbr>mengxu/</a><br>
<br>
<br>
<br>
------------------------------<br>
<br>
Subject: Digest Footer<br>
<br>
______________________________<wbr>_________________<br>
litmus-dev mailing list<br>
<a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br>
<a href="https://lists.litmus-rt.org/listinfo/litmus-dev" rel="noreferrer" target="_blank">https://lists.litmus-rt.org/<wbr>listinfo/litmus-dev</a><br>
<br>
<br>
------------------------------<br>
<br>
End of litmus-dev Digest, Vol 60, Issue 2<br>
******************************<wbr>***********<br>
</blockquote></div><br></div>