<div class="gmail_quote">Dear Litmus developers,<br><br>Litmus-RT papers usually include schedulability comparisons that use schedulability tests and tasks inflated with overhead costs.<br>Is it possible to check schedulability by actually running the tasks?<br>
<br>I've been trying to modify rtspin to do that, but I'm facing some problems.<br><br><br>In rtspin, there's this job() function, which is called with 90% of the WCET, instead of 100%. <br><br><pre style="font-family:arial,helvetica,sans-serif">
<font>static int job(double exec_time)
{
loop_for(exec_time);
sleep_next_period();
return 0;
}</font></pre>With 90%, a task seldom misses a deadline, so the program runs as it is supposed to.<br>But to check schedulability, I had to make the tasks run with 100% utilization.<br><br>When we change to 100%, though, a problem starts to happen. The first job runs for longer than the WCET (inside loop_for()), so it leaves some code to be executed in the second job.<br>
The task then reaches sleep_next_period() and is blocked until the next period, so every second job of the task doesn't run at all. Please have a look at the screenshot.

This happens because the application and the scheduler are independent, and that is normal behaviour: applications aren't supposed to run for longer than their WCET, so the scheduler doesn't know whether the application is tardy. One way to at least detect the miss from inside the task is sketched below.
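For concreteness, this is the kind of userspace check I have in mind (only a rough sketch, not actual rtspin code: job_checked(), elapsed_s(), and the release/period bookkeeping are made up for illustration, and I assume implicit deadlines and that the first release time can be captured in userspace):

#include <stdio.h>
#include <time.h>
#include <litmus.h>  /* sleep_next_period() */

void loop_for(double exec_time);  /* rtspin's spin loop */

static struct timespec first_release;  /* captured once, at the first release */

/* seconds elapsed from a to b */
static double elapsed_s(const struct timespec *a, const struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) * 1e-9;
}

/* Like job(), but compares the completion time of job 'job_no' against
 * its implicit deadline, first_release + (job_no + 1) * period. */
static int job_checked(double exec_time, double period, unsigned int job_no)
{
	struct timespec now;

	loop_for(exec_time);

	clock_gettime(CLOCK_MONOTONIC, &now);
	if (elapsed_s(&first_release, &now) > (job_no + 1) * period)
		fprintf(stderr, "job %u missed its deadline\n", job_no);

	sleep_next_period();
	return 0;
}

That reports the miss, but it doesn't fix the every-second-job problem, since sleep_next_period() is still called.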
I was thinking about removing sleep_next_period() and making the program access memory continuously; I could export the out_of_time flag to the application and keep checking it. But that way I would account only for the overheads visible to the scheduler, ignoring cache-related overheads for example, because from the scheduler's point of view the task would always run for "x" time units. A userspace stand-in for the idea is sketched below.
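Since I haven't exported the flag yet, the stand-in polls CLOCK_MONOTONIC against the job's absolute deadline instead (again just a sketch; job_until() is made up, and the deadline would be computed from the release time and period as above):

#include <time.h>
#include <litmus.h>  /* sleep_next_period() */

/* working set to generate continuous memory traffic while spinning */
static volatile char pool[4 * 1024 * 1024];

/* Touch memory until the job's absolute deadline passes, then complete
 * the job. Polling the clock here stands in for polling the kernel's
 * out_of_time flag. */
static int job_until(const struct timespec *deadline)
{
	struct timespec now;
	size_t i = 0;

	do {
		pool[i] ^= 1;                   /* continuous memory access */
		i = (i + 4096) % sizeof(pool);  /* stride across pages */
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while (now.tv_sec < deadline->tv_sec ||
		 (now.tv_sec == deadline->tv_sec &&
		  now.tv_nsec < deadline->tv_nsec));

	sleep_next_period();
	return 0;
}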
Has anyone already done schedulability checking inside the application? I thought it would be easier than embedding overhead costs within the tasks. In case it is a bad approach, I can try the usual way.

Thanks,
Felipe
</div><br>