[LITMUS^RT] help with implementing energy-aware features in litmus-rt

Gabriel Lozano gabrilozano90 at gmail.com
Wed Jun 14 06:07:04 CEST 2017


Thanks for your prompt response, Mr. Brandenburg,

I've managed to draft a quick prototype of the energy-aware policy I'm
trying to implement by using the available tracing infrastructure, as
advised. However, I have the feeling the code could definitely be improved,
so I would appreciate any suggestions for doing so.

The policy is quite simple. It is based on P-EDF: every psnedf_domain_t is
augmented with a "load" field (simply the total utilization of all tasks
assigned to that processor), and I use the fixed-point operations from the
fpmath.h header to represent these non-integral values. The task's
rt_param struct is augmented with a similar per-task field.

When a task is released, its load value is reset to its worst case (i.e.,
WCET / period), and the processor load is updated accordingly. When a task
completes, its load is set to AET / period, and the processor load is once
again updated. Assuming all processors run at the same frequency,
frequency scaling is then a matter of tracking which processor currently
has the max. load and setting the (global) frequency according to this
load value:

At job release:
   - reset each released task's load to WCET / period
   - update the processor load
   - determine the max. load among all m CPUs and perform freq. scaling
(currently, just TRACE())

At job completion:
   - update the completed task's load to AET / period
   - update the processor load
   - determine the max. load among all m CPUs and perform freq. scaling
(currently, just TRACE())

I'm inserting this functionality in the release_jobs() callback used by
psnedf_domain_init(), and in the job_completion() helper function called by
psnedf_schedule(). However, during task release I'm visiting each node in
the release_heap to update its load (as done in the P-FP plugin), and I'm
protecting the portion of code that determines the max. load among all m
CPUs with a global lock (to avoid concurrent updates), so additional
overhead is expected. Is this design worthwhile?

Follow-up: I've compiled LITMUS^RT with frequency scaling enabled, changed
(offline) the frequency of all processors to the max. and min. values (in
different experiments), and run a light test under these settings using
P-EDF. However, to my surprise, the schedules produced at the max. and min.
speeds are identical (task run times are pretty much the same), whereas I
was expecting some sort of performance degradation. My question is: what am
I supposed to expect when changing the speed of the processors offline or
online? Are there portions of code sensitive to speed changes?

Could someone help me interpret this behavior I'm observing?

Again, thank you very much for your help, I really appreciate it!

On Wed, May 24, 2017 at 3:56 AM, Björn Brandenburg <bbb at mpi-sws.org> wrote:

>
> > On 23. May 2017, at 16:12, Gabriel Lozano <gabrilozano90 at gmail.com>
> wrote:
> >
> > hi there, i'm new to litmus-rt and i was hoping that someone here might
> be able to help me with implementing energy-aware features in litmus-rt
> >
> > some background: i've completed the tutorials in the litmus-rt website,
> and have read (i think) every published material that explains the inner
> workings of litmus-rt. i have a basic understanding of the plugin's source
> code and can manage with linux development.
> >
> > now, what i want is to implement scheduler plugins that featured the
> following:
> >  (i) frequency scaling (thru the cpufreq subsystem), and
> >  (ii) entering/exiting sleep states (thru the cpuidle subsystem)
> >
> > basically what i want is a scheduler that scaled the whole platform's
> (i.e. all cores') frequency to a precomputed frequency at an offline stage.
> further, during run-time, i'd like the scheduler to reclaim dynamic slacks
> (i.e. diff. between WCET and AET) in the schedule, using this spare
> capacity to select and enter an appropriate sleep state (maybe programming
> a timer that woke up the core at the end of the sleep period). i'm not
> interested in cpu-hotplug.
> >
> > i am aware that currently litmus-rt does not feature/support this
> functionality, although i'm not sure why (where does this conflict with
> litmus's mechanisms?). i haven't found any information in the archives
> regarding energy-saving features (other than asking if it's feasible). the
> "getting started with litmus-rt" slides only mention this very briefly
> (regarding frequency scaling it says: –plugins "work", but oblivious to
> speed changes–).
> >
> > i'd need to implement these features for a project, but am not sure
> where to start. it'd be wonderful if someone could help me with a
> high-level list of things to do to accomplish this (which litmus-rt
> subsystems to modify, where to be careful with what, i don't mean to take
> anyone's time, i just want to be pointed in the right direction), given
> that people in this list include the core maintainers (and creators) of
> litmus-rt.
> >
> > i'd really appreciate it if you helped me with this, 'cause i'm kind of
> running out of ideas here :-). thanks in advance.
>
> Dear Gabriel
>
> welcome and thanks for your interest in LITMUS^RT.
>
> I personally have not worked on power/energy/thermal-aware scheduling.
> This is also the only reason why it’s not yet part of LITMUS^RT — the topic
> has not crossed the path of the core developers. There is no fundamental
> limitation that would speak against adding it.
>
> Regarding how to get started, I would suggest picking one of the simpler
> plugins to work with (e.g., P-FP or PSN-EDF) and modifying it to just TRACE()
> whenever it _would_ take an energy-savings decision. This allows you to
> build up all the necessary scheduling logic and scaffolding without
> actually having to deal with cpufreq and cpuidle stuff yet. Depending on
> how involved your target policy is, this is likely not going to be entirely
> trivial.
>
> Once the logic seems to work (i.e., no crashes and the traces of “should
> do X” decisions look reasonable), then you can start interfacing with the
> rest of Linux in a second phase.
>
> Hope this helps to get things started. Feel free to ask anytime if you run
> into issues.
>
> Regards,
> Björn
>
>
> _______________________________________________
> litmus-dev mailing list
> litmus-dev at lists.litmus-rt.org
> https://lists.litmus-rt.org/listinfo/litmus-dev
>