<div dir="ltr"><table cellpadding="0" class="" style="font-family:arial,sans-serif;font-size:13px"><tbody><tr class=""><td class="" style="width:476px"><table cellpadding="0" class="" style="width:476px"><tbody><tr><td><div class="">
<span name="Björn Brandenburg" class="" style="font-size:13px">Hi, Björn</span></div></td></tr></tbody></table></td></tr></tbody></table><br><div style>Thanks for your reply! This helps a lot!</div><div style><br></div><div style>
Sisu</div><div style><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Apr 18, 2013 at 12:46 PM, Björn Brandenburg <span dir="ltr"><<a href="mailto:bbb@mpi-sws.org" target="_blank">bbb@mpi-sws.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On Apr 17, 2013, at 6:59 PM, Sisu Xi <<a href="mailto:xisisu@gmail.com">xisisu@gmail.com</a>> wrote:<br>
<br>
> Also, wouldn't tracing all this information cause a lot of overhead while the system is running, and thus interfere with the real-time tasks?<br>
><br>
> I am thinking maybe we can use base_task to do this. The process would be:<br>
><br>
> 1. First, calibrate a fixed amount of CPU-intensive work on a particular machine so that it consumes roughly 1 ms.<br>
> 2. Allocate a sufficiently large array in base_task to record job completion times.<br>
> 3. Pass the WCET to base_task, and inside base_task's job() function, wrap the calibrated work in a for loop that runs it the corresponding number of times. At the end of the loop, record the current time; this is the job's completion time.<br>
> 4. Given the task's start time and period, we can compute how many jobs missed their deadlines.<br>
><br>
> what do you think about this?<br>
<br>
<br>
</div>Make sure you run any benchmarks WITHOUT debug tracing enabled—debug tracing adds considerable overhead and should never be used to collect any benchmark information. Feather-Trace and sched_trace are much more efficient and should be used for all benchmarking purposes.<br>
<br>
Also, base_task.c is a starting point for custom experiment development—a tutorial, if you will. It is NOT intended to be run for benchmarking purposes, at least not as is. If you need something to consume cycles, use rtspin instead. If you want a meaningful workload, you can base it on base_task.c, but you need to add the actual computation within each job yourself.<br>
<div class="HOEnZb"><div class="h5"><br>
- Björn<br>
<br>
<br>
_______________________________________________<br>
litmus-dev mailing list<br>
<a href="mailto:litmus-dev@lists.litmus-rt.org">litmus-dev@lists.litmus-rt.org</a><br>
<a href="https://lists.litmus-rt.org/listinfo/litmus-dev" target="_blank">https://lists.litmus-rt.org/listinfo/litmus-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br>Sisu Xi, PhD Candidate<br><br><a href="http://www.cse.wustl.edu/~xis/" target="_blank">http://www.cse.wustl.edu/~xis/</a><br>Department of Computer Science and Engineering<br>
Campus Box 1045<br>Washington University in St. Louis<br>One Brookings Drive<br>St. Louis, MO 63130
</div>