On 12 Jan 2016, at 04:18, Yu-An (Victor) Chen <chen116@usc.edu> wrote:

> Hi,
>
> I am running some experiments with RT-Xen and LITMUS^RT. What I am trying to do is to measure the schedulability of the real-time tasks of one VM while the other VM is fully utilized. Both guest VMs use LITMUS^RT.
>
> The setup is the following, using Xen 4.5.0:
>
> 1. Two VMs share cores 0-7 (both VMs can access cores 0-7) under the RTDS scheduler; both have a period of 4000us and a budget of 2000us.
> 2. Dom0 uses one core from CPUs 8-15 under the RTDS scheduler, with a period of 10000us and a budget of 10000us.
> 3. Both guest VMs run Ubuntu 12.04 with "litmus-rt-2014.2.patch" and Geoffrey's patch for IPI interrupts (https://github.com/LITMUS-RT/liblitmus/pull/1/files).
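>
> For reference, the corresponding configuration looks roughly as follows (a sketch; the domain names are illustrative, and Xen is booted with sched=rtds):
>
>     xl vcpu-pin vm1 all 0-7                     # both guests share cores 0-7
>     xl vcpu-pin vm2 all 0-7
>     xl sched-rtds -d vm1 -p 4000 -b 2000        # period/budget in microseconds
>     xl sched-rtds -d vm2 -p 4000 -b 2000
>     xl vcpu-pin Domain-0 all 8                  # Dom0 on one core from 8-15
>     xl sched-rtds -d Domain-0 -p 10000 -b 10000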
>
> The taskset is generated as follows:
>
> A taskset is composed of a collection of real-time tasks, and each real-time task is a sequence of jobs that are released periodically. Each task T_i is defined by a period (and deadline) p_i and a worst-case execution time e_i, with p_i ≥ e_i ≥ 0, where p_i and e_i are integers. Each job consists of a number of iterations of floating-point operations. This is based on the base_task.c provided with the LITMUS^RT userspace library.
>
> The period of a task is drawn from a uniform distribution over (10ms, 100ms), and the utilization of a task is drawn from a uniform distribution over (0.1, 0.4).
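>
> In sketch form, the generator draws tasks until the target utilization is reached (a minimal sketch; the loop structure and names are illustrative, not my exact code):
>
>     /* Sketch of the taskset generation described above. */
>     #include <stdio.h>
>     #include <stdlib.h>
>
>     int main(int argc, char **argv)
>     {
>         if (argc < 2)
>             return 1;
>         double target_util = atof(argv[1]);     /* e.g., 0.2 ... 4.6 */
>         double total_util  = 0.0;
>
>         srand48(42);
>         while (total_util < target_util) {
>             double p = 10.0 + drand48() * 90.0; /* period ~ U(10ms, 100ms) */
>             double u = 0.1 + drand48() * 0.3;   /* utilization ~ U(0.1, 0.4) */
>             double e = u * p;                   /* execution time e_i = u_i * p_i */
>             total_util += u;
>             printf("%.3f %.3f\n", e, p);        /* one task per line: e_i p_i (ms) */
>         }
>         return 0;
>     }
>
> (In the actual setup, each e_i is then translated into a calibrated number of floating-point loop iterations, as in base_task.c.)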
>
> In my experiment:
>
> Step 0: Disable networking and other unused services.
> Step 1: Load VM#2 with constantly running tasks with a total utilization of 4 cores.
> Step 2: In VM#1, run many iterations of tasksets ranging in total utilization from 0.2 cores all the way up to 4.6, and record their schedulability using st_trace.
>
> In my results, I do see schedulability drop to zero, at a total utilization of either 4.2 or 4.4. (We use the worst-case execution time for benchmarking the base amount of computation; that is why it takes a total utilization of more than 4 for schedulability to drop to 0.)
>
> What puzzles me is why there are two groups of results, as shown in the attached graph: one group whose schedulability reaches 0 at a total utilization of 4.2, and another whose schedulability reaches 0 at 4.4. (I used "*" in the legend to mark the groups.)
>
> Shouldn't the runs be somewhat close to each other, or scatter randomly, rather than forming two groups of performance curves?
>
> I wonder if my base computation is wrong, but even that would not explain why there are two types of performance curves.
>
> Any advice or suggestions on how I can go about this would be helpful!

Dear Chen,

thanks for your interest in LITMUS^RT. Concerning your observations, I can't say for sure what's going on, but a couple of issues stand out that you might want to consider.

First, schedulability is an analytical property that you cannot measure by observation. You can only observe the *lack* of schedulability (i.e., deadline misses), just as you cannot establish correctness by testing.

So, based on your description, the question really is: "why do we observe more deadline misses in some VMs than in others?"

One possibility is that the VMs in both groups are "equally schedulable", but that some of them are "getting lucky" in your experiment. That is, you may not *observe* deadline misses in some workloads, even though they were actually not schedulable.

There are also a couple of other possible causes.

- Do you control page coloring? If not, some of your processes may be subject to more cache misses and/or cache interference than others, which would affect their execution times, which in turn could translate into a higher or lower likelihood of missing a deadline.

- As far as I know, Xen uses coarse-grained resource accounting, based on a periodic tick with some coarse resolution. Your VMs might not actually be getting precisely what you allocated to them. Maybe RT-Xen has fixed this; I don't know. (LITMUS^RT uses fine-grained accounting based on one-shot timers.)

- Cache interference etc. will drive up execution costs. So if you calibrated your "burn CPU time" loop in isolation, your tasks might take longer when run in parallel with contention on other cores. Make sure you log actual execution times with sched_trace to see if you are actually getting what you wanted.
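  For example, a calibration routine along the following lines (just a sketch, not the actual base_task.c code) measures how many loop iterations fit into one millisecond on an otherwise idle core; under cache contention, the same iteration count can take noticeably longer:

    /* Sketch: calibrate the floating-point burn loop on an idle core.
     * The iteration count and the loop body are illustrative.
     * (Link with -lrt on older glibc for clock_gettime().) */
    #include <stdio.h>
    #include <time.h>

    static double burn(long iterations)
    {
        double x = 0.5;
        for (long i = 0; i < iterations; i++)
            x = x * 1.000001 + 0.000001; /* some floating-point work */
        return x;
    }

    int main(void)
    {
        const long iters = 100000000L; /* 1e8 iterations */
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        double r = burn(iters);
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                    (end.tv_nsec - start.tv_nsec) / 1e6;
        /* print r so the compiler cannot optimize the loop away */
        printf("%.0f iterations/ms (r = %g)\n", iters / ms, r);
        return 0;
    }

  If the per-job execution times that sched_trace reports under load exceed the budget you derived from such an isolated calibration, that alone can account for extra deadline misses.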
- Of course, there could also be a bug in LITMUS^RT, but based on your description we don't have enough detail to suspect anything in particular. When suspecting a bug in LITMUS^RT, please reproduce the problem on bare metal first.

I hope this gives you some pointers for investigating the issue. Please let us know what you find out.

Regards,
Björn

PS: please make sure you are subscribed to the list before posting to it. Thanks.