[LITMUS^RT] Missing st_trace records

Mikyung Kang mkkang01 at gmail.com
Thu Nov 6 18:00:10 CET 2014


Thanks a lot, Glenn and Björn!

I'm using GSN-EDF and rt_launch (8 cores, tested on bare metal and in a VM).
The results were similar when using C-EDF (no partitions/clusters were set
up), a different number of tasks, or a different utilization.

After getting the script from the web (http://pastebin.com/2acRriVs), I
changed only rtspin to rt_launch.
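For reference, the change amounts to replacing each rtspin invocation with
an rt_launch one; roughly (a sketch based on the usual liblitmus usage
strings, which vary by version; WCET/period in ms):

    # rtspin busy-loops on its own:        rtspin    [opts] WCET PERIOD DURATION
    # rt_launch wraps an existing binary:  rt_launch [opts] WCET PERIOD PROG [ARGS]
    /root/liblitmus/rt_launch -c srt 139 155 "$PROG" &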

==============================================
#!/bin/bash

#RTSPIN="/root/liblitmus/rtspin"
RELEASETS="/root/liblitmus/release_ts"
ST_TRACE="/root/ft_tools/st_trace"
RTLAUNCH="/root/liblitmus/rt_launch"
PROG=$1

#SchedNames="GSN-EDF
#C-EDF"
SchedNames="GSN-EDF"

for sched in $SchedNames
do
    for rep in 1 2 3 4 5 6 7 8 9 10
    do
        # Reset the PID list every repetition; otherwise wait would be
        # handed stale PIDs from earlier runs.
        SPIN_PIDS=""

        echo "Starting st_trace"
        ${ST_TRACE} -s mk &
        ST_TRACE_PID="$!"
        echo "st_trace pid: ${ST_TRACE_PID}"
        sleep 1

        echo "Switching to $sched plugin"
        echo "$sched" > /proc/litmus/active_plugin
        sleep 1

        echo "Setting up rt_launch processes"
        n=8
        # Eight tasks, each with utilization ~0.9 (WCET PERIOD in ms).
        for params in "180 200" "159 177" "149 166" "139 155" \
                      "130 144" "120 133" "110 122" "100 111"
        do
            numactl --physcpubind=8-15 --cpubind=1 --membind=1 \
                $RTLAUNCH -c srt $params $PROG &
            SPIN_PIDS="$SPIN_PIDS $!"
        done
        sleep 1

        echo "catting log"
        cat /dev/litmus/log > log.txt &
        LOG_PID="$!"
        sleep 1

        echo "Doing release..."
        $RELEASETS

        echo "Waiting for rt_launch processes..."
        wait ${SPIN_PIDS}
        sleep 1

        echo "Killing log"
        kill ${LOG_PID}
        sleep 1

        echo "Sending SIGUSR1 to st_trace"
        kill -USR1 ${ST_TRACE_PID}
        echo "Waiting for st_trace..."
        wait ${ST_TRACE_PID}
        sleep 1

        mkdir -p run-data/"$sched"_"$n"_$rep
        mv /dev/shm/*.bin run-data/"$sched"_"$n"_$rep/
        #mv log.txt run-data/"$sched"_$rep/
        sleep 1
        echo "Done! Collect your logs."
    done
done
echo "DONE!"
==============================================
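A check like the following could confirm whether the per-CPU trace files
exist before the release (a sketch, not part of the pastebin script; it
assumes the usual st-<tag>-<cpu>.bin naming and that the files land in
/dev/shm, as the mv step above implies):

    # Hypothetical check, run just before $RELEASETS: warn if any per-CPU
    # trace file is missing or still empty.  Buffering can delay writes,
    # so treat an empty file as a hint rather than proof of lost records.
    for f in /dev/shm/st-mk-*.bin; do
        [ -s "$f" ] || echo "WARNING: missing/empty trace file: $f"
    done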

Even though I tried several $PROG binaries, the inter-run variation
persisted. I pasted the results of the 90% and 45% utilization cases below.
I used a different (period, WCET) per task this time.

(1) 8 tasks * 0.9 utilization/core ==> (4, 8, 5, 3, 2, 4, 8, 4, 1, 7) tasks
schedulable out of 8 per run ==> schedulability (0.50, 1.00, 0.63, 0.38,
0.25, 0.50, 1.00, 0.50, 0.13, 0.88)
(2) 5 tasks * 0.9 utilization/core ==> (4, 5, 5, 5, 4, 5, 4, 5, 5, 5) tasks
schedulable out of 5 per run.
(3) 8 tasks * 0.45 utilization/core ==> (4, 3, 7, 6, 4, 6, 4, 7, 6, 4) tasks
schedulable out of 8 per run.
(4) 5 tasks * 0.45 utilization/core ==> (5, 2, 3, 4, 2, 3, 5, 4, 4, 3) tasks
schedulable out of 5 per run.
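To turn these counts into a mean schedulability fraction per case, a
one-liner like this works (a hypothetical helper, not part of the script
above; n is the number of tasks in the set):

    echo "4 8 5 3 2 4 8 4 1 7" | awk -v n=8 \
        '{ for (i = 1; i <= NF; i++) sum += $i / n }
         END { printf "mean schedulability: %.3f\n", sum / NF }'
    # prints "mean schedulability: 0.575" for case (1)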


That is, in the 1st run of (1), 4 tasks have the correct (period, cost) and
the other 4 tasks show (period=0, cost=0), as in the excerpt below (a
filtering workaround follows it).

# Task,   Job,     Period,   Response, DL Miss?,   Lateness,  Tardiness
# task NAME=rt_launch PID=43217 COST=139000000 PERIOD=155000000 CPU=0
 43217,     2,  155000000,  139123829,        0,  -15876171,          0
 43217,     3,  155000000,  139120144,        0,  -15879856,          0
 43217,     4,  155000000,  139121134,        0,  -15878866,          0
 43217,     5,  155000000,  139113222,        0,  -15886778,          0
...

# task NAME=<unknown> PID=43215 COST=0 PERIOD=0 CPU=-1
 43215,     2,          0,  159116315,        0,  -17883685,          0
 43215,     3,          0,  159133056,        0,  -17866944,          0
 43215,     4,          0,  159121530,        0,  -17878470,          0
 43215,     5,          0,  159130841,        0,  -17869159,          0
 43215,     6,          0,  159142111,        0,  -17857889,          0
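Until the root cause is found, the records from tasks whose parameters were
lost can be filtered out before computing statistics. A sketch, assuming the
comma-separated layout above (Period is field 3) has been saved to jobs.csv
(a hypothetical file name):

    # Keep comment lines; drop data rows whose Period field is 0.
    awk -F',' '/^#/ { print; next } $3 + 0 != 0' jobs.csv > jobs-clean.csv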

Any hints/comments are welcome! I'd appreciate it!

Thanks,
Mikyung



On Thu, Nov 6, 2014 at 1:29 AM, Björn Brandenburg <bbb at mpi-sws.org> wrote:

>
> On 06 Nov 2014, at 03:41, Glenn Elliott <gelliott at cs.unc.edu> wrote:
>
> (3) I want to repeat each test case 20 times and then average the
> schedulability. In either case (whether or not period=0 jobs are counted
> as scheduled jobs), I see a lot of inter-run variation, as shown below. Is
> this expected? Can you get consistent trace records (a consistent fraction
> of schedulable task sets) every time?
>
> 1.00 1.00 1.00 1.00 1.00 .13 1.00 1.00 1.00 .13 .13 1.00 .25 .13 .13 .13
> .13 1.00 .25 1.00
>
>
> Is this data for one task, or for the entire task set? What exactly are
> these numbers—deadline miss ratios?
>
> What is the task set utilization?  Which scheduler do you use?  Under
> partition scheduling, you can still over-utilize a single processor even if
> task set utilization is not much more than 1.0 when your task partitioning
> is too imbalanced.  That is, you can overload one partition while all
> others are idle. Also, LITMUS^RT, being based upon Linux, may not support
> hard real-time scheduling all that well when task set utilization is high.
> You may observe deadline misses from time to time.  You may want to examine
> the maximum amount by which a deadline is missed (perhaps normalized by
> relative deadline or period), rather than whether a deadline was ever
> missed.
>
>
> Also, if your system is NOT schedulable, under the implemented scheduling
> policies there is no guarantee (not just in LITMUS^RT, but in general) that
> the same task will always incur the same number of deadline misses. Which
> task incurs a deadline miss can depend on details such as the employed
> tie-breaking policy for tasks with equal deadlines, minuscule differences
> in arrival times, etc.
>
> You might want to visualize these schedules and have a look—is there
> something “wrong” in the schedules with high deadline miss ratios, or is
> the task under analysis just “unlucky” in some schedules?
>
> Can you post your scripts for setting up the experiments? If there is an
> issue, this might allow us to reproduce it.
>
> Thanks,
> Björn