Hi Mikyung,

I have comments inline below.

-Glenn

On Nov 5, 2014, at 4:59 PM, Mikyung Kang <mkkang01@gmail.com> wrote:

> Thanks, Björn. After switching from rtspin to rt_launch, I can see that there are no missing records, without changing anything else.
>
> I have 3 simple questions about the st_job_stats data. Any comments are welcome!
>
> *** Example: 8 x (Period, WCET) = 8 x (200 ms, 180 ms) on 8 cores (both bare-metal and VM cases) [8 identical tasks launched with rt_launch]
>
> Using st_job_stats, I can see [Task, Job, Period, Response, DL Miss?, Lateness, Tardiness] records.
>
> (1) Some files report the correct period (200 ms), but others report a period of 0, as shown below. Does this mean that PID 13162 is not schedulable and only PID 13166 is schedulable? However, the Lateness/DL Miss? columns of PID 13162 show no deadline misses.
>
> # task NAME=<unknown> PID=13162 COST=0 PERIOD=0 CPU=-1
>  13162, 2, 0, 180031469, 0, -19968531, 0
>  13162, 3, 0, 180026058, 0, -19973942, 0
>  13162, 4, 0, 180029476, 0, -19970524, 0
>  13162, 5, 0, 180027542, 0, -19972458, 0
> ....
>
> # task NAME=rt_launch PID=13166 COST=180000000 PERIOD=200000000 CPU=0
>  13166, 2, 200000000, 180019319, 0, -19980681, 0
>  13166, 3, 200000000, 180022003, 0, -19977997, 0
>  13166, 4, 200000000, 180022586, 0, -19977414, 0
>  13166, 5, 200000000, 180021609, 0, -19978391, 0

It looks like COST is zero as well. This information is recorded in the "st_param_data" struct; there should be one such record per real-time task. Make sure that you begin tracing events _before_ launching any real-time tasks. You may want to sleep for a second or two between starting the tracer and launching the real-time tasks. If you are already doing this, can you confirm whether or not the st_param_data records are missing for the tasks that report zero COST and PERIOD?
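To illustrate the ordering I have in mind, here is a minimal sketch. The trace tag ("example"), the 180 ms budget / 200 ms period, and "./workload" are placeholders based on your example; substitute whatever tag, options, and programs you actually use, and stop st_trace however you normally do.

    #!/bin/sh
    # Start the tracer first, give it a moment to set up, and only then
    # admit the real-time tasks. All arguments below are placeholders.
    st_trace example &                  # 1) begin recording sched_trace events
    TRACER=$!
    sleep 2                             # 2) let the tracer finish setting up
    pids=""
    for i in $(seq 1 8); do             # 3) only now launch the real-time tasks
        rt_launch 180 200 ./workload &  #    180 ms budget, 200 ms period
        pids="$pids $!"
    done
    wait $pids                          # 4) wait until all tasks have exited
    sleep 1
    kill $TRACER                        # 5) stop the tracer last

The only point that matters is that the tracer is already running before the first task enters real-time mode, so that the parameter record emitted at that point is actually captured.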
> (2) When I checked the total number of lines (i.e., the total number of jobs) for each PID, in some cases every task has exactly the same number of jobs, but sometimes the counts differ slightly across the 8 tasks, as shown below. Is this expected? There are no missing records; some tasks simply have 1 or 2 more jobs. Is that possible?
>
> 116 116 115 115 114 115 114 114

rtspin executes for a configured duration of time, not for a configured number of jobs. Due to the various sources of "noise" in the system, you may observe slight variations in the number of completed jobs. You can modify the rtspin source code to compute the number of jobs that should be executed within the configured time interval (i.e., njobs = duration / period) and then execute exactly that many jobs, instead of exiting once the configured duration has elapsed.
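In case it is useful, the per-task job counts can be pulled straight out of the st_job_stats output. The snippet below assumes you have saved the output you pasted above into a file; "jobs.csv" is just a placeholder name.

    # Count completed jobs per task. Lines starting with '#' are the
    # per-task parameter headers and are skipped; the data columns are
    # task, job, period, response, dl-miss, lateness, tardiness.
    awk -F',' '!/^#/ && NF >= 7 { jobs[$1]++ }
        END { for (pid in jobs) print pid, jobs[pid] }' jobs.csv

With a job-count-based exit condition (and a synchronous release, e.g., via release_ts if your tasks wait for one), the counts should come out identical across the 8 tasks.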
> (3) I want to repeat each test case 20 times and then average the schedulability. In either case (whether or not jobs with period=0 are counted as scheduled jobs), I see a lot of inter-run variation, as shown below. Is this expected? Do you get consistent trace records (a consistent fraction of schedulable task sets) every time?
>
> 1.00 1.00 1.00 1.00 1.00 .13 1.00 1.00 1.00 .13 .13 1.00 .25 .13 .13 .13 .13 1.00 .25 1.00

What is the task set utilization? Which scheduler do you use? Under partitioned scheduling, you can still over-utilize a single processor even if the task set utilization is not much more than 1.0, if your task partitioning is too imbalanced. That is, you can overload one partition while all the others are idle (for example, two tasks with utilization 0.6 each assigned to the same core overload that core, even though the total utilization is only 1.2). Also, LITMUS-RT, being based upon Linux, may not support hard real-time scheduling all that well when the task set utilization is high, so you may observe deadline misses from time to time. You may want to examine the maximum amount by which a deadline is missed (perhaps normalized by the relative deadline or period), rather than whether a deadline was ever missed.
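To make that last point concrete, you could score each run by its worst normalized tardiness rather than by a binary miss/no-miss. Again this assumes the st_job_stats output is saved in a file (the "jobs.csv" name is a placeholder), with the period in column 3 and the tardiness in column 7 as in the records you pasted.

    # Worst tardiness relative to the period across all recorded jobs;
    # rows with PERIOD=0 (i.e., missing st_param_data) are skipped.
    awk -F',' '!/^#/ && $3 > 0 { t = $7 / $3; if (t > worst) worst = t }
        END { printf "worst tardiness / period = %.4f\n", worst }' jobs.csv

A result of 0 means no job missed its deadline in that run; small positive values mean that misses, when they do occur, are mild relative to the period.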
> Could you please comment on these 3 questions, or even just one of them?
> Thanks in advance for your help!
>
> Mikyung
>
> On Fri, Sep 19, 2014 at 4:21 AM, Björn Brandenburg <bbb@mpi-sws.org> wrote:
>
>> On 18 Sep 2014, at 06:29, Mikyung Kang <mkkang01@gmail.com> wrote:
>>>
>>> I'm trying to capture complete tracing information for RT task sets using LITMUS-RT version 2014.1.
>>>
>>> * System: 8 cores, no hyper-threading, 16 GB memory
>>> * Tested both the bare-metal case and the virtualized case (Xen): similar results
>>> * Ubuntu 12.04 (Linux 3.10.5)
>>> * Generated 10 tasks with Utilization=[1.0, 8.0] using rtspin
>>> * Ran for 10 seconds using the GSN-EDF scheduler
>>>
>>> When I spawn only 1 task (Period=100 ms, WCET=10 ms) for 10 seconds, all records are saved into the .bin file correctly, with no missing records.
>>> But with more than 1 task, many records are always missing.
>>
>> This sounds like something is broken. Even with 8 x (10, 100) tasks you should have no tracing problems at all, as there should be more than enough time for st_trace to catch up. Your system must be overutilized somehow.
>>
>>> To avoid record loss, I tried the following options (based on the thread https://lists.litmus-rt.org/pipermail/litmus-dev/2013/000480.html):
>>>
>>> * Kernel config: CONFIG_SCHED_TASK_TRACE_SHIFT=13 (up to 8K events)
>>> * Used /dev/shm/* instead of disk for the binary record file
>>> * Removed events unnecessary for the calculation of the deadline-miss ratio (switch_to/from, block, resume, action, np_enter/exit)
>>> * Current KERNEL_IMAGE_SIZE 512*1024*1024
>>>
>>> With these settings, around 4K events are saved per task-assigned core (st-*.bin).
>>> When I extract the information with st_job_stats, I can see that the number of recorded events per task differs a lot, even though the tasks have the same period.
>>> Moreover, usually 5-20% of the records are missing for each task set, even though utilization is very low. Sometimes even more.
>>
>> This indicates that your system suffers from intervals of overload during which the tracing tools are starved. Are you sure this happens already with only two tasks?
>>
>>> Is this the expected record-loss ratio when using the st_trace tool?
>>
>> No, unless the st_trace tool is being starved there should be no records lost.
>>
>>> What else should I check? Is there any other way to reduce or eliminate record loss?
>>
>> You can try editing litmus/Kconfig to raise the limit for CONFIG_SCHED_TASK_TRACE_SHIFT. You can also try running st_trace as a real-time task (with rt_launch).
>>
>> - Björn
>
> _______________________________________________
> litmus-dev mailing list
> litmus-dev@lists.litmus-rt.org
> https://lists.litmus-rt.org/listinfo/litmus-dev