[LITMUS^RT] Strange response time: it varies if I assign different periods.

Meng Xu xumengpanda at gmail.com
Fri Mar 4 02:48:39 CET 2016


Response time != WCET.

When the system is overloaded, a new job is not served until the old
job finishes its execution. That is why you see the measured times vary
when period < WCET; it is the expected result. If you draw the timeline
of the task, you will see why the response time grows when the system is
overloaded.

IIRC, LITMUS^RT allows you to enforce the budget, which makes sure a
job's execution will not take more than the budget you assign to it,
and will stop the previous job from running on past its deadline.
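For reference, here is a fragment of the task-parameter setup with enforcement turned on (a sketch based on the liblitmus calls already used in your program; PRECISE_ENFORCEMENT is the policy name I recall from liblitmus, so please check the headers of your version):

```c
/* Fragment only; requires liblitmus (litmus.h) and must run in an RT thread. */
struct rt_task param;
init_rt_task_param(&param);

param.exec_cost = cost;            /* budget, in nanoseconds */
param.period = period;
param.relative_deadline = deadline;

/* PRECISE_ENFORCEMENT makes the kernel throttle a job once it has
 * consumed its budget, instead of letting it run into the next period. */
param.budget_policy = PRECISE_ENFORCEMENT;

set_rt_task_param(gettid(), &param);
```

With NO_ENFORCEMENT (as in your program), an overrunning job simply keeps executing and delays all later releases.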

Meng
-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/


On Thu, Mar 3, 2016 at 8:23 PM, Shuai Zhao <zs673 at york.ac.uk> wrote:
> Hi
>
> I have recently been playing around with LITMUS^RT and found a strange
> problem when collecting response times: the response time of a task varies
> when it is assigned different periods.
>
> Here is the code (it is actually the base_mt_task.c and runs under P-FP
> scheduling):
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <sys/types.h>
> #include <pthread.h>
> #include <unistd.h>
> #include <litmus.h>
> #include <time.h>
>
> #define EXECUTIONS 10000
>
> int cost;
> int period;
> int deadline;
> int count_first;
> long long sum;
>
> void* rt_thread(void*);
> int main(int argc, char** argv) {
>     double avg;
>     pthread_t *task;
>
>     if (argc < 4) {
>         fprintf(stderr, "usage: %s cost period deadline\n", argv[0]);
>         return 1;
>     }
>
>     cost = atoi(argv[1]);
>     period = atoi(argv[2]);
>     deadline = atoi(argv[3]);
>     count_first = 0;
>
>     init_litmus();
>     be_migrate_to_domain(1);
>
>     task = malloc(sizeof(pthread_t));
>     pthread_create(task, NULL, rt_thread, NULL);
>     pthread_join(task[0], NULL);
>
>     avg = (double) sum / EXECUTIONS;
>     printf("task executes %d times, exec_avg1: %20.5f\n", EXECUTIONS, avg);
>
>     free(task);
>
>     return 0;
> }
>
> #define NUMS 4096
> static int num[NUMS];
> static int loop_once(void) {
>     int i, j = 0;
>     for (i = 0; i < NUMS; i++)
>         j += num[i]++;
>     return j;
> }
>
> void* rt_thread(void* xxx) {
>     struct rt_task param;
>     int x;
>     struct timespec start, end;
>
>     be_migrate_to_domain(3);
>     init_rt_task_param(&param);
>
>     param.priority = 10;
>     param.cpu = 3;
>
>     param.exec_cost = cost;
>     param.period = period;
>     param.relative_deadline = deadline;
>
>     param.budget_policy = NO_ENFORCEMENT;
>     param.cls = RT_CLASS_HARD;
>
>     init_rt_thread();
>     set_rt_task_param(gettid(), &param);
>     task_mode(LITMUS_RT_TASK);
>
>     do {
>         sleep_next_period();
>
>         clock_gettime(CLOCK_REALTIME, &start);
>         for (x = 0; x < 500; x++)
>             loop_once();
>         clock_gettime(CLOCK_REALTIME, &end);
>
>         sum += (end.tv_sec - start.tv_sec) * 1000000000LL
>              + (end.tv_nsec - start.tv_nsec);
>
>         count_first++;
>     } while (count_first < EXECUTIONS);
>
>     task_mode(BACKGROUND_TASK);
>     return NULL;
> }
>
>
>
> In the program we have one RT thread on core 3 that executes the
> CPU-cycle-consuming function 500 times on each release, and the task is
> released 10000 times (EXECUTIONS). On each release, we measure the
> response time with clock_gettime calls.
>
> I noticed that the response time of the task varies when we set different
> periods. Here are the results I gathered (deadline and cost are equal to
> the period; the task is hard real-time):
>
> PERIOD (ns)      RESPONSE_TIME (ns, average)
> 10               1662729
> 1000000          1662755
> 2000000          1718819
> 5000000          1713594
> 10000000         1710004
>
> I assigned 5 different periods to the task and gathered its response time.
> As you can see, with periods of 10 ns and 1 ms the response time is around
> 1.66 ms, yet it becomes about 1.71 ms when the period is set to 2 ms, 5 ms,
> or 10 ms.
> Also, with periods of 10 ns and 1 ms the task will clearly miss its
> deadline. But that should not affect the execution time, right?
>
> This is quite weird and I cannot understand it. I guess there is something
> I have missed, but I fail to see it.
>
> Could you please have a look at it?
>
> Thanks in advance.
>
> Best wishes
> Shuai
>
>
>
> _______________________________________________
> litmus-dev mailing list
> litmus-dev at lists.litmus-rt.org
> https://lists.litmus-rt.org/listinfo/litmus-dev
>
