<div dir="ltr">PS: The timestamp LOCK_SUSPEND is used to measure the prue overheads of pfp-scheduler (the code between the run queue locks). The scheduler name is MRSP but it is hard coded... it is actually pfp-scheduler running. Sorry for this. </div><div class="gmail_extra"><br><div class="gmail_quote">On 23 February 2017 at 02:17, Shuai Zhao <span dir="ltr"><<a href="mailto:zs673@york.ac.uk" target="_blank">zs673@york.ac.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi <span style="font-size:12.8px">Björn</span><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Sorry for the picture. </span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Here I attached the testing results for the original pfp-scheduler. We use a Quad-Core AMD 8350 Processor 2GHz with a three level cache architecture (512kb, 2048kb, 2mb). </span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">We have not done anything to prevent the cache interference... However, even if so, it should give a relatively stable max value for each identical test with a sufficient number of samples (i.e. executing the longest code path in pfp-scheduler), am I correct? </span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">I am running a "meaningless" test with a increasing number of processor, from 1 to 16, and expect a relatively stable max overheads for each test (just to have a view of how the system can interfer the scheduler execution). On each processor there are 5 tasks that keep releasing and executing.</span></div><div><br></div><div><span style="font-size:12.8px">Thank you for your help.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Best wishes</span></div><span class="HOEnZb"><font color="#888888"><div><span style="font-size:12.8px">Shuai</span></div><div><span style="font-size:12.8px"> </span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"> </span></div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 22 February 2017 at 11:00, <span dir="ltr"><<a href="mailto:litmus-dev-request@lists.litmus-rt.org" target="_blank">litmus-dev-request@lists.<wbr>litmus-rt.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send litmus-dev mailing list submissions to<br>
<a href="mailto:litmus-dev@lists.litmus-rt.org" target="_blank">litmus-dev@lists.litmus-rt.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.litmus-rt.org/listinfo/litmus-dev" rel="noreferrer" target="_blank">https://lists.litmus-rt.org/li<wbr>stinfo/litmus-dev</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:litmus-dev-request@lists.litmus-rt.org" target="_blank">litmus-dev-request@lists.litmu<wbr>s-rt.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:litmus-dev-owner@lists.litmus-rt.org" target="_blank">litmus-dev-owner@lists.litmus-<wbr>rt.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of litmus-dev digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: litmus-dev Digest, Vol 60, Issue 4 (Björn Brandenburg)<br>
<br>
<br>
------------------------------<wbr>------------------------------<wbr>----------<br>
<br>
Message: 1<br>
Date: Tue, 21 Feb 2017 16:55:09 +0100<br>
From: Björn Brandenburg <<a href="mailto:bbb@mpi-sws.org" target="_blank">bbb@mpi-sws.org</a>><br>
To: <a href="mailto:litmus-dev@lists.litmus-rt.org" target="_blank">litmus-dev@lists.litmus-rt.org</a><br>
Subject: Re: [LITMUS^RT] litmus-dev Digest, Vol 60, Issue 4<br>
Message-ID: <<a href="mailto:DEF99AB7-E5D5-4D18-8ABB-8E22E9115A57@mpi-sws.org" target="_blank">DEF99AB7-E5D5-4D18-8ABB-8E22E<wbr>9115A57@mpi-sws.org</a>><br>
Content-Type: text/plain; charset=utf-8<br>
<br>
<br>
> On 21 Feb 2017, at 15:20, Shuai Zhao <zs673@york.ac.uk> wrote:
>
> The code I posted in the email is protected by a spin lock to avoid race conditions. The tickets are maintained and obtained by atomic_t variables.

Ok, then the code is somewhat misleading. A regular unsigned long would do.

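To make the point concrete, here is a minimal sketch with hypothetical names (not your actual code): if the counters are only ever touched while the lock is held, plain unsigned longs are already serialized by the lock, so atomic_t only adds cost and suggests concurrency that isn’t there.

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(mrsp_lock);
static unsigned long next_ticket;       /* next ticket to hand out */
static unsigned long serving;           /* ticket currently being served */

static unsigned long mrsp_take_ticket(void)
{
        unsigned long t;

        raw_spin_lock(&mrsp_lock);
        t = next_ticket++;              /* serialized by mrsp_lock */
        raw_spin_unlock(&mrsp_lock);
        return t;
}

static void mrsp_advance(void)
{
        raw_spin_lock(&mrsp_lock);
        serving++;                      /* likewise serialized */
        raw_spin_unlock(&mrsp_lock);
}

The atomic_t operations only buy you something for fields that are also accessed outside the critical section.
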
>
> Using the feather-trace tool we get a CSV file where all the overheads are recorded. I noticed that you processed the data as max, 99.9prec, 99prec and 95prec. I wonder what the rationale behind this is?

These are cutoffs that have been frequently used in prior work, so it’s interesting to see what the data looks like at these commonly used points.

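If you want to reproduce those cutoffs yourself from the raw samples, a rough nearest-rank sketch could look like the following (it reads one overhead value per line from stdin; pulling the right column out of the CSV that the feather-trace tools give you is left out for brevity):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
}

int main(void)
{
        static double s[4000000];       /* room for a few million samples */
        size_t n = 0;

        while (n < sizeof(s) / sizeof(s[0]) && scanf("%lf", &s[n]) == 1)
                n++;
        if (!n)
                return 1;

        qsort(s, n, sizeof(s[0]), cmp);

        /* nearest-rank percentiles on the sorted samples */
        printf("max   = %f\n", s[n - 1]);
        printf("99.9%% = %f\n", s[(size_t)(0.999 * (n - 1))]);
        printf("99%%   = %f\n", s[(size_t)(0.99 * (n - 1))]);
        printf("95%%   = %f\n", s[(size_t)(0.95 * (n - 1))]);
        return 0;
}

Pipe in the column of overhead values from your CSV and compare runs: with only a few thousand samples the tail estimates will still jump around, with a million or more they should settle.
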
>
> Does the 99.9prec or 99prec result filter out some of the outlier data that is influenced by system overheads or interrupts?

Well, they cut off a part of the distribution. Whether you may reasonably consider these parts to be “outliers” depends on your hardware, experimental setup, and goal of the experiments. It’s a judgement call.

>
> For example: we tried to gather the overheads of the original pfp-scheduler. We did this experiment with an increasing number of processors and expected constant overheads. However, the data we have is confusing. The number of samples for each test is above one million.
>
> <Screen Shot 2017-02-21 at 13.23.45.png>

Could you please provide numeric data as inline text or as a CSV file? A picture is not so helpful here…

>
> We gather this data inside the pfp-scheduler (we put timestamps inside the run-queue locks) to get the exact overheads for executing the scheduling code. The result above gives the max overhead we observed in each test.
>
> As shown in the result, the overheads of the pfp-scheduler are extremely high when using CPUs 1, 2 and 4. By repeating the same tests, we can often observe such an extreme value, but with a different number of CPUs.

I don’t understand your question. You are *measuring* a long-tail distribution. Of course you are going to see rare values stemming from the long tail only, well, rarely.

If you see heavily fluctuating “observed max” values from run to run, then you are likely not using enough samples.

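One crude way to judge whether the sample count is sufficient is to split the trace, in recorded order, into batches and compare the per-batch maxima; if they still differ wildly, the observed max has not converged yet. A throwaway sketch along those lines (again one value per line on stdin):

#include <stdio.h>

#define BATCH 100000    /* samples per batch; adjust to taste */

int main(void)
{
        double v, max = 0;
        unsigned long i = 0, batch = 0;

        while (scanf("%lf", &v) == 1) {
                if (v > max)
                        max = v;
                if (++i == BATCH) {
                        printf("batch %lu: max = %f\n", batch++, max);
                        i = 0;
                        max = 0;
                }
        }
        return 0;
}
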
>
> A simpler example: the lock function that I posted in the previous email, with the kmalloc removed. This part of the code has only one path and is O(1), which is supposed to have a stable overhead.
> <Screen Shot 2017-02-21 at 13.31.46.png>
> However, as shown above, the overhead of the code is extremely high with 2, 14 and 15 CPUs.
>
> I wonder whether you have met any situations like this before, and how you would explain such a result or solve the problem (if there is one).
>
> Do you have any suggestions when facing such a confusing result?

What hardware platform are you using?

Are you doing anything to prevent cross-core cache interference?

- Björn




------------------------------

Subject: Digest Footer

_______________________________________________
litmus-dev mailing list
litmus-dev@lists.litmus-rt.org
https://lists.litmus-rt.org/listinfo/litmus-dev


------------------------------

End of litmus-dev Digest, Vol 60, Issue 10
******************************************