[LITMUS^RT] help with implementing energy-aware features in litmus-rt
Gabriel Lozano
gabrilozano90 at gmail.com
Fri Jun 16 07:04:17 CEST 2017
I see... well, considering this is just an academic evaluation, I guess some
degree of idealization is allowed. The objective is to come up with
practical realizations of a few power-aware scheduling algorithms (which are
usually based on "classic" multiprocessor scheduling algorithms, e.g.
P-EDF and G-EDF) and to assess their performance trends in terms of power
consumption. Also, at this stage I only intend to support simple
synchronous, periodic tasks, so this is not much of a problem =)
About rtspin: I had not seen its source code, so I didn't know how it
really operates; sorry about that.
My concern is the following: in the real-time scheduling literature,
lowering the processor frequency prolongs execution (i.e., jobs' runtimes
are inversely proportional to the frequency). I think rtspin should be fine
for simulating this increase when the scheduling policy lowers the
processor frequency offline (a single speed change before release and
that's it): I can simply pass the scaled WCET parameter to rtspin, thus
making the real-time task run longer. But I'm not sure how to properly
enforce, or at least mimic, this behavior online...
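Just to make the offline case concrete (this is only my own sketch, not
anything taken from liblitmus; scaled_wcet_ms() and the frequency values
are made-up examples):

/* Scale a task's WCET (in ms, as passed to rtspin) for a lower, fixed
 * frequency, assuming runtimes are inversely proportional to frequency.
 * f_max_khz and f_khz are hypothetical frequencies in kHz. */
static unsigned long scaled_wcet_ms(unsigned long wcet_ms_at_fmax,
                                    unsigned long f_max_khz,
                                    unsigned long f_khz)
{
        /* C_i(f) = C_i(f_max) * f_max / f, rounded up */
        return (wcet_ms_at_fmax * f_max_khz + f_khz - 1) / f_khz;
}

So a 10 ms WCET at 2.0 GHz becomes 20 ms at 1.0 GHz, and I would launch the
task roughly as "rtspin 20 100 30" (WCET and period in ms, duration in
seconds), if I'm reading rtspin's usage correctly.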
The processor frequency could change at any point in the schedule, so the
runtime of every job would have to stretch/shrink accordingly. I've quickly
gone through a paper by Dr. Erickson from UNC in which the concept of
"virtual time" is employed; however, an interface between user space and
kernel space had to be used for controlling speed changes. Another thought
was to use the recently added task_param_change callback for applying
changes during scheduling events, but I'm not sure whether this would be
useful for making changes at the job level. Lastly, I considered writing
custom real-time tasks with liblitmus that perform some actual computation
(instead of just spinning), so that frequency changes really do modify
their runtimes; a rough sketch of what I have in mind is below. The problem
I see with this is that I wouldn't have as much control over the tasks'
execution times as rtspin provides. Does anyone with more expertise than me
have any suggestions? Possibly some quick hack or workaround that doesn't
involve considerable changes to the LITMUS^RT core?
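Here is the kind of task I was thinking of, loosely modeled on liblitmus's
base_task.c example (the WCET, period, job count, and loop count are
made-up values, and I haven't tested this):

#include <litmus.h>

#define WCET_MS    10
#define PERIOD_MS  100
#define NUM_JOBS   50
#define LOOPS      5000000UL

static volatile unsigned long sink;

static void job_body(void)
{
        unsigned long i;

        /* A fixed amount of work: its duration depends on the clock
         * frequency, unlike rtspin's spinning on consumed CPU time. */
        for (i = 0; i < LOOPS; i++)
                sink += i;
}

int main(void)
{
        struct rt_task param;
        int j;

        init_rt_task_param(&param);
        param.exec_cost         = ms2ns(WCET_MS);
        param.period            = ms2ns(PERIOD_MS);
        param.relative_deadline = ms2ns(PERIOD_MS);
        param.cls               = RT_CLASS_SOFT;
        param.budget_policy     = NO_ENFORCEMENT;
        /* For a partitioned plugin I would also set param.cpu and migrate
         * to that CPU before becoming a real-time task. */

        if (init_litmus() != 0)
                return 1;
        if (set_rt_task_param(gettid(), &param) < 0)
                return 1;
        if (task_mode(LITMUS_RT_TASK) != 0)
                return 1;

        /* Optionally wait_for_ts_release() here for a synchronous release. */

        for (j = 0; j < NUM_JOBS; j++) {
                job_body();
                sleep_next_period();
        }

        task_mode(BACKGROUND_TASK);
        return 0;
}

The downside, as I said, is that calibrating LOOPS against a target WCET
would be up to me, whereas rtspin gives that control for free.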
Sorry if this is getting too long; I just have one last question: what is
the standard way of benchmarking plugins in LITMUS^RT? I was under the
impression that synthetic sets of real-time tasks (based on rtspin) were
generated and scheduled, with the traces analyzed in a later step, but
given your comments on the rtspin program I'm now not so sure... For
example, looking at the evaluation procedure provided for the APA
scheduler, I see that it basically consists of launching several rtspin
tasks and collecting scheduling traces. The Python script provided for
running tests on plugins in the GitHub repo does a similar job.
Thank you so much for your help. Regards!!
On Wed, Jun 14, 2017 at 1:37 AM, Björn Brandenburg <bbb at mpi-sws.org> wrote:
>
> > On 14. Jun 2017, at 06:07, Gabriel Lozano <gabrilozano90 at gmail.com>
> > wrote:
> >
> > I'm inserting this functionality in the release_jobs() callback used by
> > psnedf_domain_init(), and the job_completion() helper function called by
> > psnedf_schedule(). However, during task release, I'm visiting each node in
> > the release_heap to update their load (as done in the P-FP plugin), and am
> > protecting the portion of code that gets the max. load among all m CPUs
> > with a global lock (to avoid concurrent updates), so additional
> > overheads are expected. Is this design worthwhile?
>
> For moderately sized task sets (i.e., anything not hundreds of tasks),
> that’s going to be fine overhead-wise.
>
> However, it also means that your plugin will work only for (a) periodic
> tasks that (b) call sleep_next_period() in liblitmus. For example, run
> rtspin with the -T option, or trigger rtspin sporadically with the -S
> option, and see what happens.
>
> Whether this is an issue depends on your goals. If you are primarily
> interested in obtaining an academic prototype that is suitable for some
> controlled evaluation workloads that you know will satisfy (a) and (b),
> then this is probably an acceptable limitation. If, however, you want
> your plugin to be useful with arbitrary Linux tasks, then this approach is
> not going to work.
>
> For example, have a look at the P-RES plugin, which is designed to support
> real-world Linux tasks. It is completely reservation-based and does not use
> (the older) rt_domain_t code at all and instead treats all wakeups/job
> completions as regular suspensions, which is more in line with how the rest
> of Linux works.
>
> > Follow-up: I've compiled LITMUS^RT with freq. scaling enabled, have
> > changed (offline) the freq. of all processors to the max. and min. values
> > (during different experiments), and have run a light test under these
> > settings using P-EDF. However, to my surprise, the produced schedules using
> > the max. and min. speeds are identical (tasks' run times are pretty much
> > the same), whereas I was expecting a certain performance degradation of
> > some sort. My question is: what am I supposed to expect when changing the
> > speed of the processors offline or online? Are there portions of code
> > sensitive to speed changes?
> >
> > Could someone help me interpret this behavior I'm observing?
>
> Are you using rtspin? It spins based on how much time it has consumed, so
> changing processor frequency is not going to affect its runtime: if you
> make the processor faster, it will simply spin for more iterations. rtspin
> is primarily a debugging and prototyping tool, and not a high-fidelity
> workload simulator that reacts plausibly to changes in
> processor/memory/cache/whatever configuration.
>
> - Björn