[LITMUS^RT] Trace unit?
Glenn Elliott
gelliott at cs.unc.edu
Thu Jun 19 13:43:09 CEST 2014
On Jun 19, 2014, at 7:09 AM, Sanjib Das <cnt.sanjib at googlemail.com> wrote:
> Hi,
> Thank you very much. However, for each experiment the PFAIR script returns three pairs of CSVs and PDFs, where the PDFs are labeled CXS_task_Avg_Avg, CXS_task_Min_Min, CXS_task_Min_Avg, CXS_task_Var_Avg, CXS_task_Max_Max, and CXS_task_Max_Avg, and there are no labels on the generated graphs.
>
> Can you please help me in this particular case?
>
> Thanks in advance
> Sanjib
>
>
> On Thu, Jun 19, 2014 at 1:05 PM, Björn Brandenburg <bbb at mpi-sws.org> wrote:
>
> On 19 Jun 2014, at 12:03, Sanjib Das <cnt.sanjib at googlemail.com> wrote:
>
>> • RELEASE_LATENCY: Used to measure the difference between when a timer should have fired and when it actually did fire. In contrast to all other time stamps, this overhead is directly measured in nanoseconds (and not in processor cycles as the other overheads).
>>
>>
>> Except for RELEASE_LATENCY, are all the others measured in milliseconds?
>>
>> Or how can I determine that?
>
> See above. Feather-Trace reports processor cycles.
>
> - Björn
>
Hi Sanjib,
Digging into the mailing list archives… https://lists.litmus-rt.org/pipermail/litmus-dev/2014/000964.html
Check out the section labeled “Overheads.” Overheads should be in microseconds.
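For reference, converting a raw Feather-Trace cycle count into microseconds is just a division by the clock rate. Here is a tiny Python sketch; the 2.0 GHz value is only an assumed placeholder, you need the actual frequency of the machine the traces were recorded on:

    # Hypothetical helper: convert Feather-Trace cycle counts to microseconds.
    # CPU_FREQ_HZ is an assumption; substitute the traced machine's real clock rate.
    CPU_FREQ_HZ = 2.0e9

    def cycles_to_us(cycles):
        """Convert a raw cycle count to microseconds."""
        return cycles / CPU_FREQ_HZ * 1e6

    print(cycles_to_us(4000))  # 4000 cycles at 2 GHz -> 2.0 us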
Regarding the particular measurements that you mention:
CXS_task_Avg_Avg: The average, across all tested task sets, of the average context switch overhead measured within each task set. If you’re interested in average-case overheads, this is probably the one that you want.
CXS_task_Min_Min: The minimum context switch overhead observed across all task sets that were run.
CXS_task_Min_Avg: The minimum average context switch overhead measured across tested task sets.
CXS_task_Max_Max and CXS_task_Max_Avg: Like the corresponding Min types, but for maximum values.
CXS_task_Var_Avg: The variance of the per-task-set average context switch overheads, I believe (I’m not entirely sure about this one).
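To make the naming a bit more concrete, here is a rough Python sketch of how I read these aggregates. This is not the experiment-scripts code, just my interpretation, and the Var_Avg line in particular is a guess:

    import statistics

    def aggregate_cxs(per_task_set_samples):
        """per_task_set_samples: one list of raw CXS overhead samples per task set."""
        avgs = [statistics.mean(s) for s in per_task_set_samples]
        return {
            'Avg_Avg': statistics.mean(avgs),                      # average of the per-task-set averages
            'Min_Min': min(min(s) for s in per_task_set_samples),  # smallest sample seen anywhere
            'Min_Avg': min(avgs),                                  # smallest per-task-set average
            'Max_Max': max(max(s) for s in per_task_set_samples),  # largest sample seen anywhere
            'Max_Avg': max(avgs),                                  # largest per-task-set average
            'Var_Avg': statistics.variance(avgs),                  # variance of the averages (my guess)
        }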
The experiment-scripts auto-generates these graphs, so the graphs are very simple and they lack labels. Please note that the generated graphs do not always make sense; the scripts aren’t smart enough to understand the context of the data. From the scripts’ point of view, they were given data and told to plot it. If you want to perform deeper analysis or create prettier graphs, you’ll have to process the data from the CSV files stored in the directory hierarchy output by the scripts.
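As a starting point, something along these lines will pull all of the raw CSVs out of the output tree; the 'run-data' directory name below is just a placeholder for wherever the scripts wrote their results on your machine:

    import csv
    from pathlib import Path

    def load_csvs(output_dir):
        """Collect the rows of every CSV file under the experiment-scripts output tree."""
        data = {}
        for csv_path in Path(output_dir).rglob('*.csv'):
            with csv_path.open(newline='') as f:
                data[str(csv_path)] = list(csv.reader(f))
        return data

    tables = load_csvs('run-data')  # placeholder path; use your actual output directory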
Finally, as noted above, release latency is measured by LITMUS^RT in nanoseconds. I think there may be a bug in experiment-scripts: it probably assumes those values are in cycles as well. I’ll look into this.
-Glenn