[LITMUS^RT] Strange Response Time. It various if I assign different periods.

Glenn Elliott gelliott at cs.unc.edu
Fri Mar 4 06:28:55 CET 2016


> On Mar 3, 2016, at 5:23 PM, Shuai Zhao <zs673 at york.ac.uk> wrote:
> 
> Hi
> 
> I have recently been playing around with Litmus and found a strange problem when collecting response times: the response time of a task varies when it is given a different period.
> 
> Here is the code (it is based on base_mt_task.c and runs under P-FP scheduling):
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <sys/types.h>
> #include <pthread.h>
> #include <unistd.h>
> #include <litmus.h>
> #include <time.h>
> 
> #define EXECUTIONS 10000
> 
> int cost;
> int period;
> int deadline;
> int count_first;
> long long sum;
> 
> void* rt_thread(void*);
> int main(int argc, char** argv) {
> 	double avg;
> 	pthread_t *task;
> 
> 	cost = atoi(argv[1]);
> 	period = atoi(argv[2]);
> 	deadline = atoi(argv[3]);
> 	count_first = 0;
> 
> 	init_litmus();
> 	be_migrate_to_domain(1);
> 
> 	task = malloc(sizeof(pthread_t));
> 	pthread_create(task, NULL, rt_thread, NULL);
> 	pthread_join(task[0], NULL);
> 
> 	avg = (double)sum / EXECUTIONS; /* cast avoids integer division */
> 	printf("task executes %d times, exec_avg1: %20.5f\n", EXECUTIONS, avg);
> 
> 	free(task);
> 
> 	return 0;
> }
> 
> #define NUMS 4096
> static int num[NUMS];
> static int loop_once(void) {
> 	int i, j = 0;
> 	for (i = 0; i < NUMS; i++)
> 		j += num[i]++;
> 	return j;
> }
> 
> void* rt_thread(void* xxx) {
> 	struct rt_task param;
> 	int x;
> 	struct timespec start, end;
> 
> 	be_migrate_to_domain(3);
> 	init_rt_task_param(&param);
> 
> 	param.priority = 10;
> 	param.cpu = 3;
> 
> 	param.exec_cost = cost;
> 	param.period = period;
> 	param.relative_deadline = deadline;
> 
> 	param.budget_policy = NO_ENFORCEMENT;
> 	param.cls = RT_CLASS_HARD;
> 
> 	init_rt_thread();
> 	set_rt_task_param(gettid(), &param);
> 	task_mode(LITMUS_RT_TASK);
> 
> 	do {
> 		sleep_next_period();
> 
> 		clock_gettime(CLOCK_REALTIME, &start);
> 		for (x = 0; x < 500; x++)
> 			loop_once();
> 		clock_gettime(CLOCK_REALTIME, &end);
> 
> 		sum += (end.tv_sec * 1000000000LL + end.tv_nsec) - (start.tv_sec * 1000000000LL + start.tv_nsec);
> 
> 		count_first++;
> 	} while (count_first < EXECUTIONS);
> 
> 	task_mode(BACKGROUND_TASK);
> 	return NULL;
> }
> 
> 
> 
> In the program we have one rt thread on core 3 that executes the CPU-cycle-consuming function 500 times on each release, and the task is released 10000 times (EXECUTIONS). On each release, we gather the response time via clock_gettime calls.
> 
> I noticed that the response time of the task varies when we set different periods. Here are the results I gathered (deadline and cost are equal to the period; the task is hard real-time):
> 
> PERIOD (ns)      RESPONSE_TIME (ns, average)
> 10               1662729
> 1000000          1662755
> 2000000          1718819
> 5000000          1713594
> 10000000         1710004
> 
> I assigned 5 different periods to the task and gathered its response time. As you can see, under a period of 10 ns or 1 ms the response time is around 1.66 ms, yet it becomes about 1.71 ms if we assign a period of 2 ms, 5 ms or 10 ms.
> Also, the task will obviously miss its deadline under the 10 ns and 1 ms periods. But this should not affect the execution time, right?
> 
> This is quite weird and I cannot understand it. I guess there is something I have messed up, but I fail to see it.
> 
> Would you please help me and have a look at it?
> 
> Thanks in advance.
> 
> Best wishes
> Shuai


Hi Shuai,

Looking at your code, it seems that you are merely measuring the time it takes to complete the for-loop (including preemptions).  This would be invariant of your period, yes?  On an idle system (or one where your task is the only real-time thread running), I would expect response times to be approximately the same.  Moreover, you have budget enforcement disabled, so it’s not as if a metered budget allocation would slow the completion of the for-loop.

What you probably want is to read the job release time out of Litmus’s per-task control page (a page of memory shared by both the kernel and the task that holds some statistics) and compare this value against Linux’s monotonic clock after the for-loop has completed.  The liblitmus API can help you do this.

Perhaps you want something like this:

for (…) {}
clock_gettime(CLOCK_MONOTONIC, &end);
lt_t end_ns = end.tv_sec * 1000000000LL + end.tv_nsec;  /* convert timespec to nanoseconds */
lt_t response_time = end_ns - get_ctrl_page()->release;  // note: first call to get_ctrl_page() may result in a syscall

get_ctrl_page(): https://github.com/LITMUS-RT/liblitmus/blob/master/src/kernel_iface.c#L154
struct control_page: https://github.com/LITMUS-RT/litmus-rt/blob/master/include/litmus/rt_param.h#L110

-Glenn




