[LITMUS^RT] running a task as its execution time
Glenn Elliott
gelliott at cs.unc.edu
Wed Apr 25 23:25:12 CEST 2012
Giovani,
I believe what you are trying to do is very similar to the rt_spin program in liblitmus/bin/rtspin.c.
You can configure the rt_spin program to execute for a given duration per job release. A task signals to the kernel that it has completed its current job by calling sleep_next_period(). You can also configure the program to run with or without budget enforcement. As noted in my earlier message, however, if a job exceeds its budget before it calls sleep_next_period(), it will resume execution from where it left off. It will then probably call sleep_next_period() very soon, throwing away most of the budget it had received for the new job.
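For reference, the basic shape of such a program is roughly the following. This is only a minimal sketch modeled on rtspin, not the real thing; EXEC_NS, PERIOD_NS, jobs_left, and do_work() are placeholders, and all error checking is omitted. See liblitmus/bin/rtspin.c for the authoritative version.

#include <litmus.h>

#define EXEC_NS    10000000ULL  /* 10 ms budget (placeholder) */
#define PERIOD_NS 100000000ULL  /* 100 ms period (placeholder) */

static int jobs_left = 100;     /* run 100 jobs, then exit (placeholder) */

static void do_work(void)
{
	/* one job's worth of computation goes here */
}

int main(void)
{
	init_litmus();

	/* same call form as the sporadic_task_ns() invocation quoted below */
	sporadic_task_ns(EXEC_NS, PERIOD_NS, 0, 0,
	                 RT_CLASS_HARD, PRECISE_ENFORCEMENT, 0);

	task_mode(LITMUS_RT_TASK);   /* transition to real-time mode */

	while (jobs_left-- > 0) {
		do_work();           /* the job body */
		sleep_next_period(); /* signal completion; block until next release */
	}

	task_mode(BACKGROUND_TASK);  /* return to best-effort scheduling */
	return 0;
}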
Jonathan Herman may be looking at ways to improve this behavior. However, I don't believe anyone is considering creating a mechanism by which to abort a job upon budget expiration.
You can introduce cancellation points into your code using liblitmus's get_job_no() ("get job number"). At the start of a job (in user code), you would query and record the current job number, then check for changes in this number periodically throughout the execution of the job. If the job number changes before the job has completed, you can "abort" the job by taking appropriate action in the user-space program. If you don't want to throw away budget, you could attempt to start the work for the next job immediately. However, you could run into a situation where jobs NEVER complete, since a portion of the budget could be wastefully spent on execution between cancellation points. I think any more robust solution would require nasty manipulation of stack pointers, program counters, and signal handlers---interesting, but it could be tricky.
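A rough sketch of that pattern, assuming only liblitmus's get_job_no(); work_done(), work_chunk(), and discard_partial_results() are hypothetical placeholders for the application's own logic:

#include <litmus.h>

/* hypothetical placeholders for the application's actual work */
static int  work_done(void)               { return 1; }
static void work_chunk(void)              { }
static void discard_partial_results(void) { }

static void run_one_job(void)
{
	unsigned int job_at_start, job_now;

	get_job_no(&job_at_start);       /* record the job number at job start */

	while (!work_done()) {
		work_chunk();            /* a small, bounded slice of the work */

		get_job_no(&job_now);    /* cancellation point */
		if (job_now != job_at_start) {
			/* The budget expired and a new job was released.
			 * Abort: throw the partial work away, or carry it
			 * into the new job if you don't want to waste budget. */
			discard_partial_results();
			return;
		}
	}
}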
-Glenn
On Apr 25, 2012, at 4:56 PM, Giovani Gracioli wrote:
> So is there a way to ensure that the task will execute for its correct execution time?
>
> Giovani
>
> On Wed, Apr 25, 2012 at 5:53 PM, Glenn Elliott <gelliott at cs.unc.edu> wrote:
> A word of warning about using budget enforcement: there is currently no way for the kernel to report back to the user program that it has exceeded its budget. Your program will simply resume execution from where it left off upon the next job release, and will not start a new program-level job.
>
> -Glenn
>
>
>
> On Apr 25, 2012, at 3:58 PM, Felipe Cerqueira <felipeqcerqueira at gmail.com> wrote:
>
>> Hi Giovani,
>>
>> I've already faced something like that. The problem is that it's not safe to measure time within the application.
>>
>> Your loop_once() code only keeps iterating and accessing memory. In that case, I think there are only two factors causing the time difference. One source of the difference is that the task may be preempted while you are measuring:
>>
>> loop_start = read_tsc_us(); // reads from the TSC and converts to microseconds
>> tmp += loop_once();
>> -> Suppose the task is preempted right here
>> loop_end = read_tsc_us();
>>
>> Also, system interrupts may have occurred between the measurements.
>>
>> In order to fix that, the application must communicate with the kernel. But even so, I don't know if it is possible to perfectly synchronize the loop in the code with the task's execution time. The real-time job is independent of the code it runs.
>>
>> The task will run according to the execution time you provided at initialization, but you cannot verify that from within the application.
>> The only thing you can change is the WCET enforcement:
>>
>> sporadic_task_ns(ctx->exec_time * 1000, ctx->period * 1000, 0, 0, RT_CLASS_HARD, QUANTUM_ENFORCEMENT, 0);
>>
>> Change this to PRECISE_ENFORCEMENT if you want LITMUS^RT to use timers to enforce the budget. I'm not so sure, but I think that if you keep QUANTUM_ENFORCEMENT, a completed job will only be descheduled at the next quantum boundary.
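>> For example, the same call with timer-based enforcement (only the enforcement flag changes):
>>
>> sporadic_task_ns(ctx->exec_time * 1000, ctx->period * 1000, 0, 0, RT_CLASS_HARD, PRECISE_ENFORCEMENT, 0);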
>>
>> Best Regards,
>> Felipe
>>
>> 2012/4/25 Giovani Gracioli <giovanig at gmail.com>
>> Hello,
>>
>> I am new to LITMUS^RT and I am facing some issues that maybe you can help with. I want to make sure that a task executes only for its pre-defined execution time, like this:
>>
>> //exec_time in microseconds
>> int loop_for(unsigned int exec_time)
>> {
>>     int tmp = 0;
>>     unsigned int elapsed = 0;
>>     unsigned long long loop_start = 0, loop_end = 0;
>>
>>     while (elapsed < exec_time) {
>>         loop_start = read_tsc_us(); // reads from the TSC and converts to microseconds
>>         tmp += loop_once();
>>         loop_end = read_tsc_us();
>>         elapsed = elapsed + (loop_end - loop_start);
>>     }
>>
>>     printf("elapsed = %u, exec_time = %u\n", elapsed, exec_time);
>>
>>     return tmp;
>> }
>>
>> I chose the GSN-EDF scheduler by running liblitmus's "setsched" before starting the test application. The problem is that, most of the time, the elapsed time is greater than the execution time. Here is part of the output:
>>
>> elapsed = 13003, exec_time = 921
>> elapsed = 12999, exec_time = 921
>> elapsed = 13003, exec_time = 921
>> elapsed = 13007, exec_time = 921
>> elapsed = 13007, exec_time = 921
>> elapsed = 12998, exec_time = 921
>> elapsed = 12995, exec_time = 921
>> elapsed = 12999, exec_time = 921
>> elapsed = 12990, exec_time = 921
>> elapsed = 921, exec_time = 921
>> elapsed = 13002, exec_time = 921
>>
>> My guess is that the task is not running as a real-time task. Do I need to define any other parameter for LITMUS?
>>
>> The application source code and the kernel config files are attached to this email.
>>
>> Thanks in advance,
>> Giovani
>>