[LITMUS^RT] litmus-dev Digest, Vol 60, Issue 4
Björn Brandenburg
bbb at mpi-sws.org
Tue Feb 21 16:55:09 CET 2017
> On 21 Feb 2017, at 15:20, Shuai Zhao <zs673 at york.ac.uk> wrote:
>
> The code I posted in the email is protected by a spin lock to avoid race conditions. The tickets are maintained and obtained via atomic_t variables.
Ok, then the code is somewhat misleading. A regular unsigned long would do.
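For illustration, here is a minimal sketch of what I mean (the structure and names below are made up for this example, not taken from your posted code): since the ticket counters are only ever touched while the wait-queue spin lock is held, plain unsigned longs suffice and atomic_t buys you nothing.

	#include <linux/spinlock.h>

	/*
	 * Hypothetical sketch, not the code from the original posting: the
	 * counters are only read/written while wait_lock is held, so plain
	 * unsigned longs are sufficient; atomic_t would only matter if the
	 * counters were also accessed outside the critical section.
	 */
	struct ticket_state {
		raw_spinlock_t	wait_lock;
		unsigned long	next_ticket;	/* next ticket to hand out       */
		unsigned long	owner_ticket;	/* ticket currently being served */
	};

	static unsigned long take_ticket(struct ticket_state *ts)
	{
		unsigned long my_ticket, flags;

		raw_spin_lock_irqsave(&ts->wait_lock, flags);
		my_ticket = ts->next_ticket++;
		raw_spin_unlock_irqrestore(&ts->wait_lock, flags);

		return my_ticket;
	}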
>
> Using the feather-trace tool we get a CSV file in which all the overheads are recorded. I noticed that you processed the data as max, 99.9th, 99th, and 95th percentile. I wonder what the rationale behind this is?
These are cutoffs that have been frequently used in prior work, so it’s interesting to see what the data looks like at these commonly used points.
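If it helps, here is a small stand-alone sketch of what these cutoffs mean. This is not the tooling used for the published numbers, just an illustration: it reads one overhead sample per line from stdin, sorts them, and reports the max together with the 99.9th/99th/95th percentile using a simple nearest-rank index.

	/* Illustrative sketch only, not the actual analysis scripts. */
	#include <stdio.h>
	#include <stdlib.h>

	static int cmp_double(const void *a, const void *b)
	{
		double x = *(const double *) a, y = *(const double *) b;
		return (x > y) - (x < y);
	}

	static double percentile(const double *v, size_t n, double p)
	{
		/* nearest-rank style index into the sorted samples */
		size_t idx = (size_t) (p / 100.0 * (n - 1));
		return v[idx];
	}

	int main(void)
	{
		size_t n = 0, cap = 1024;
		double *v = malloc(cap * sizeof(*v));
		double x;

		if (!v)
			return 1;
		while (scanf("%lf", &x) == 1) {
			if (n == cap) {
				cap *= 2;
				v = realloc(v, cap * sizeof(*v));
				if (!v)
					return 1;
			}
			v[n++] = x;
		}
		if (!n)
			return 1;

		qsort(v, n, sizeof(*v), cmp_double);
		printf("max:    %f\n", v[n - 1]);
		printf("99.9th: %f\n", percentile(v, n, 99.9));
		printf("99th:   %f\n", percentile(v, n, 99.0));
		printf("95th:   %f\n", percentile(v, n, 95.0));
		free(v);
		return 0;
	}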
>
> Is it that the 99.9th or 99th percentile results filter out some of the outlier data that is influenced by system overheads or interrupts?
Well, they cut off a part of the distribution. Whether you may reasonably consider these parts to be “outliers” depends on your hardware, experimental setup, and goal of the experiments. It’s a judgement call.
>
> For example: we tried to gather the overheads of the original pfp-scheduler. We did this experiment with an increasing number of processors and expected constant overheads. However, the data we have is confusing. The number of samples for each test is above one million.
>
> <Screen Shot 2017-02-21 at 13.23.45.png>
Could you please provide numeric data as inline text or as a CSV file? A picture is not so helpful here…
>
> We gathered this data inside the pfp-scheduler (we put timestamps inside the run-queue locks) to get the exact overheads for executing the scheduling code. The result above gives the max overhead we observed in each test.
>
> As shown in the result, the overhead of the pfp-scheduler is extremely high when using 1, 2, and 4 CPUs. By repeating the same tests, we can often observe such an extreme value, but with a different number of CPUs.
I don’t understand your question. You are *measuring* a long-tail distribution. Of course you are going to see rare values stemming from the long tail only, well, rarely.
If you see heavily fluctuating “observed max” values from run to run, then you are likely not using enough samples.
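To illustrate the point with purely synthetic numbers (this has nothing to do with your actual overhead data): when sampling a heavy-tailed distribution, the observed maximum jumps around from run to run for small sample counts and stabilizes only slowly as the sample count grows. The sketch below draws Pareto-distributed values and prints the observed max for several runs at each sample count (compile with -lm).

	/* Purely synthetic illustration of observed-max fluctuation. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <math.h>

	/* Draw from a Pareto distribution with tail index alpha (heavy tail). */
	static double draw_pareto(double alpha)
	{
		double u = (rand() + 1.0) / ((double) RAND_MAX + 2.0);
		return pow(u, -1.0 / alpha);
	}

	int main(void)
	{
		const size_t counts[] = { 1000, 100000, 10000000 };
		const int runs = 5;

		for (size_t c = 0; c < sizeof(counts) / sizeof(counts[0]); c++) {
			printf("n = %zu:", counts[c]);
			for (int r = 0; r < runs; r++) {
				double max = 0.0;
				for (size_t i = 0; i < counts[c]; i++) {
					double x = draw_pareto(1.5);
					if (x > max)
						max = x;
				}
				printf("  max=%.1f", max);
			}
			printf("\n");
		}
		return 0;
	}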
>
> A simpler example: the lock function that I posted in the previous email, with the kmalloc removed. This part of the code has only one path and is O(1), which is supposed to have a stable overhead.
> <Screen Shot 2017-02-21 at 13.31.46.png>
> However, as shown above, the overhead of the code is extremely high with 2, 14, and 15 CPUs.
>
> I wonder, have you encountered situations like this before? And how do you explain such a result, or how did you solve the problem (if there was one)?
>
> Do you have any suggestions for dealing with such a confusing result?
What hardware platform are you using?
Are you doing anything to prevent cross-core cache interference?
- Björn