[LITMUS^RT] 0 period, negative lateness for recorded tasks -- Sisu

Björn Brandenburg bbb at mpi-sws.org
Sun May 5 10:19:37 CEST 2013


On May 5, 2013, at 1:09 AM, Sisu Xi <xisisu at gmail.com> wrote:
> 
> I was trying to run some experiments in LITMUS^RT in a VM. However, all the numbers I got for task lateness were negative.
> Also, all jobs show 0 for the period and CPU=-1.

Hi Sisu,

these are two unrelated issues. First, negative lateness is perfectly fine. It just means that no deadlines were missed.

> Here is the script used to run the experiments: 
> root at litmus1:~# cat 200_1_MPR_Dom1.sh 
> rtspin -s 0.98 8 10 100 &
> rtspin -s 0.98 8 10 100 &
> rtspin -s 0.98 8 10 100 &
> rtspin -s 0.98 8 10 100 &
> rtspin -s 0.98 8 10 100 &
> st_trace 200_1_MPR_Dom1 
> killall rtspin

This is the wrong order. When a task is launched, its parameters and PID are written to the sched_trace stream; if no tracer is attached at that point, those records are discarded. You need to start tracing before launching the real-time tasks. This is also why the jobs show a period of 0 and CPU=-1: the task-parameter records were never captured.
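A corrected version of the script might look like the following sketch. The sleep durations and the assumption that st_trace can be launched in the background and stopped with a signal are illustrative, not verbatim LITMUS^RT usage; adjust them to your setup.

```shell
# Start the tracer FIRST so that task-parameter records are captured
# (assumes st_trace can run in the background on your setup).
st_trace 200_1_MPR_Dom1 &
TRACER=$!
sleep 1        # give the tracer a moment to attach before tasks launch

rtspin -s 0.98 8 10 100 &
rtspin -s 0.98 8 10 100 &
rtspin -s 0.98 8 10 100 &
rtspin -s 0.98 8 10 100 &
rtspin -s 0.98 8 10 100 &

sleep 100      # illustrative: let the tasks run their measurement window
killall rtspin
kill $TRACER   # stop tracing only after the tasks have exited
```

The key point is simply the ordering: tracer up before the first rtspin, tracer down after the last one exits.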

> 
> However, here is the result using st_job_stats
> root at litmus1:~# st_job_stats st-200_1_MPR_Dom1-0.bin | head

You always need to look at all trace files together. Don't restrict your analysis to one core's file, since tasks may migrate under global schedulers and release events may happen on any core (unless you are careful with the interrupt assignment).
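For example, assuming the per-CPU files follow the st-&lt;tag&gt;-&lt;cpu&gt;.bin naming pattern produced by st_trace, you can pass all of them to st_job_stats in one invocation rather than inspecting a single core's file:

```shell
# Analyze the per-CPU trace files together, not one at a time
# (glob assumes the st-<tag>-<cpu>.bin naming convention; adjust to your setup).
st_job_stats st-200_1_MPR_Dom1-*.bin | head
```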

> And the data on other cpus shows the same negative value for lateness..

Again, this is perfectly fine: lateness = absolute finish time - absolute deadline, so a negative value simply means the job completed before its deadline.
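As a quick sanity check with made-up numbers (purely illustrative, not from your trace): a job that finishes 8 ms after release against a 10 ms relative deadline has lateness 8 - 10 = -2 ms, i.e. it met its deadline with 2 ms to spare.

```shell
# Illustrative values only: finish time and deadline in ms, relative to release.
finish_ms=8
deadline_ms=10
lateness_ms=$((finish_ms - deadline_ms))
echo "lateness = ${lateness_ms} ms"   # negative lateness: deadline was met
```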

> 
> In Xen, I am using tsc_mode=1, which is:
> - tsc_mode=1 (always emulate). All rdtsc instructions are emulated;
>    this is the best choice when TSC-sensitive apps are running and
>    it is necessary to understand worst-case performance degradation
>    for a specific hardware environment.
> 
> this should be good for TSC-sensitive applications.
> 
> Any ideas?

I'd be *very* careful in making any claims about the accuracy of timing in a VM. Are you 100% sure that there are no timing glitches due to the emulation? Are you sure that overhead measurements based on emulated TSCs even meaningfully reflect the actual overheads?

We have never used LITMUS^RT in a VM for benchmarking (we run LITMUS^RT in VMs primarily to aid with debugging). If you want good measurements, it's better to run the kernel on bare metal.

- Björn





More information about the litmus-dev mailing list