[LITMUS^RT] 0 period, negative lateness for recorded tasks -- Sisu

Sisu Xi xisisu at gmail.com
Sun May 5 21:11:44 CEST 2013


Hi, Björn:

Thanks for your reply!

On Sun, May 5, 2013 at 3:19 AM, Björn Brandenburg <bbb at mpi-sws.org> wrote:

> On May 5, 2013, at 1:09 AM, Sisu Xi <xisisu at gmail.com> wrote:
> >
> > I was trying to run some experiments in Litmus in a VM. However, all
> > the numbers I got for task lateness were negative.
> > And all jobs are showing 0 for the period and CPU=-1.
>
> Hi Sisu,
>
> these are two unrelated issues. First, negative lateness is perfectly
> fine. It just means that no deadlines were missed.
>
Oh, yes, that makes sense.

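To make the sign convention concrete, here is a minimal sketch of the lateness formula (lateness = absolute finish time - absolute deadline) using hypothetical timestamps, not values from a real trace:

```shell
# Hypothetical timestamps in microseconds (illustration only):
finish=9800      # absolute completion time of the job
deadline=10000   # absolute deadline of the job
lateness=$((finish - deadline))
echo "$lateness"   # negative value: the job finished early, no deadline miss
```

A negative result here simply means the job completed before its deadline.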

> > Here are the scripts to run the experiments:
> > root at litmus1:~# cat 200_1_MPR_Dom1.sh
> > rtspin -s 0.98 8 10 100 &
> > rtspin -s 0.98 8 10 100 &
> > rtspin -s 0.98 8 10 100 &
> > rtspin -s 0.98 8 10 100 &
> > rtspin -s 0.98 8 10 100 &
> > st_trace 200_1_MPR_Dom1
> > killall rtspin
>
> This is the wrong order. When a task is launched, its parameters and PID
> are written to the sched_trace stream. If no tracer is present, the
> information is discarded. You need to start tracing prior to launching
> real-time tasks.
>
>
You are right: after I changed the order, the task information is shown. And
I will process multiple trace files together.


> >
> > However, here is the result using st_job_stats
> > root at litmus1:~# st_job_stats st-200_1_MPR_Dom1-0.bin | head
>
> You always need to look at all trace files together. Don't just look at
> one core's file: tasks may migrate under global schedulers, and release
> events may happen on any core (unless you are careful with the interrupt
> assignment).
>
>
I am not doing anything specific with the interrupt assignment; I just leave
it at the default.



> > And the data on the other CPUs show the same negative values for lateness.
>
> Again, this is perfectly fine. lateness = absolute finish time - absolute
> deadline
>
> >
> > In Xen, I am using tsc_mode=1, which is:
> > - tsc_mode=1 (always emulate). All rdtsc instructions are emulated;
> >    this is the best choice when TSC-sensitive apps are running and
> >    it is necessary to understand worst-case performance degradation
> >    for a specific hardware environment.
> >
> > This should be good for TSC-sensitive applications.
> >
> > Any ideas?
>
> I'd be *very* careful in making any claims about the accuracy of timing in
> a VM. Are you 100% sure that there are no timing glitches due to the
> emulation? Are you sure that overhead measurements based on emulated TSCs
> even meaningfully reflect the actual overheads?
>
>
Sure, I will double-check that. Thanks for the advice.


> We have never used LITMUS^RT in a VM for benchmarking (we run LITMUS^RT in
> VMs primarily to aid with debugging). If you want good measurements, it's
> better to run the kernel on bare metal.
>
>
Sure, I understand that. But it would not hurt to try it and see the
results, right?  :)



> - Björn
>
>
> _______________________________________________
> litmus-dev mailing list
> litmus-dev at lists.litmus-rt.org
> https://lists.litmus-rt.org/listinfo/litmus-dev
>



-- 
Sisu Xi, PhD Candidate

http://www.cse.wustl.edu/~xis/
Department of Computer Science and Engineering
Campus Box 1045
Washington University in St. Louis
One Brookings Drive
St. Louis, MO 63130

