[LITMUS^RT] help with implementing energy-aware features in litmus-rt

Björn Brandenburg bbb at mpi-sws.org
Wed Jun 14 08:37:19 CEST 2017


> On 14. Jun 2017, at 06:07, Gabriel Lozano <gabrilozano90 at gmail.com> wrote:
> 
> I'm inserting this functionality in the release_jobs() callback used by psnedf_domain_init(), and the job_completion() helper function called by psnedf_schedule(). However, during task release, I'm visiting each node in the release_heap to update its load (as done in the p-fp plugin), and I am protecting the portion of code that gets the max. load among all m CPUs with a global lock (to avoid concurrent updates), so additional overheads are expected. Is this design worthwhile?

For moderately sized task sets (i.e., anything not hundreds of tasks), that’s going to be fine overhead-wise.
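Just to make sure we are talking about the same thing, here is a rough sketch of the kind of locked max-load computation I understand you to be describing (all names -- load_lock, cpu_load, compute_max_load() -- are made up for illustration, not actual plugin code):

    #include <linux/spinlock.h>
    #include <linux/cpumask.h>
    #include <litmus/rt_param.h>

    /* Hypothetical sketch: one global lock serializes all updates to the
     * per-CPU load values and the O(m) scan that determines the maximum. */
    static DEFINE_RAW_SPINLOCK(load_lock);
    static lt_t cpu_load[NR_CPUS];

    static lt_t compute_max_load(void)
    {
            unsigned long flags;
            lt_t max_load = 0;
            int cpu;

            raw_spin_lock_irqsave(&load_lock, flags);
            for_each_online_cpu(cpu) {
                    if (cpu_load[cpu] > max_load)
                            max_load = cpu_load[cpu];
            }
            raw_spin_unlock_irqrestore(&load_lock, flags);
            return max_load;
    }

The lock itself protects only a short O(m) critical section; the part that actually scales with the task set size is the traversal of the release heap in release_jobs(), which is why the answer above is qualified with "moderately sized task sets".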

However, it also means that your plugin will work only for (a) periodic tasks that (b) call sleep_next_period() in liblitmus. For example, run rtspin with the -T option, or trigger rtspin sporadically with the -S option, and see what happens.
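For reference, the pattern required by (a) and (b) is essentially the liblitmus skeleton below (a sketch based on base_task.c; the WCET/period values and the keep_going()/do_one_job() helpers are made up):

    #include <litmus.h>

    int main(void)
    {
            struct rt_task param;

            init_rt_task_param(&param);
            param.exec_cost = ms2ns(10);   /* made-up WCET   */
            param.period    = ms2ns(100);  /* made-up period */
            param.cls       = RT_CLASS_SOFT;

            init_litmus();                       /* error checking omitted */
            set_rt_task_param(gettid(), &param);
            task_mode(LITMUS_RT_TASK);

            while (keep_going()) {               /* hypothetical */
                    do_one_job();                /* hypothetical job body */
                    sleep_next_period();         /* (b): signal job completion */
            }

            task_mode(BACKGROUND_TASK);
            exit_litmus();
            return 0;
    }

A task that never calls sleep_next_period(), or that wakes up for reasons other than a job release, will never trigger the accounting code you placed in the release path.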

Whether this is an issue depends on your goals. If you are primarily interested in an academic prototype that is suitable for controlled evaluation workloads that you know will satisfy (a) and (b), then this is probably an acceptable limitation. If, however, you want your plugin to be useful with arbitrary Linux tasks, then this approach is not going to work.

For example, have a look at the P-RES plugin, which is designed to support real-world Linux tasks. It is completely reservation-based and does not use the (older) rt_domain_t code at all; instead, it treats all wakeups and job completions as regular suspensions, which is more in line with how the rest of Linux works.
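To make the contrast concrete: under a reservation-based plugin, a perfectly ordinary Linux task such as the sketch below can be placed in a reservation and scheduled correctly, because every block/wakeup is just a suspension as far as the plugin is concerned (plain POSIX code, nothing LITMUS^RT-specific in it; do_work() is hypothetical):

    #include <time.h>

    /* An "arbitrary Linux task": it blocks on clock_nanosleep() (it could
     * equally block on a socket, pipe, ...).  There is no sleep_next_period()
     * call and no liblitmus job boundary; a reservation-based plugin treats
     * each wakeup like any other Linux suspension. */
    int main(void)
    {
            struct timespec next;

            clock_gettime(CLOCK_MONOTONIC, &next);
            for (;;) {
                    /* do_work();  -- hypothetical job body */
                    next.tv_sec += 1;  /* made-up 1 s activation interval */
                    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            }
            return 0;
    }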

> Follow-up: I've compiled LITMUS^RT with freq. scaling enabled, have changed (offline) the freq. of all processors to the max. and min. values (during different experiments), and have run a light test under these settings using P-EDF. However, to my surprise, the produced schedules using the max. and min. speeds are identical (task run times are pretty much the same), whereas I was expecting a performance degradation of some sort. My question is: what am I supposed to expect when changing the speed of the processors offline/online? Are there portions of code sensitive to speed changes?
> 
> Could someone help me interpret this behavior I'm observing?

Are you using rtspin? It spins based on how much time it has consumed, so changing processor frequency is not going to affect its runtime: if you make the processor faster, it will simply spin for more iterations. rtspin is primarily a debugging and prototyping tool, and not a high-fidelity workload simulator that reacts plausibly to changes in processor/memory/cache/whatever configuration. 
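To illustrate the point, the spin loop conceptually works like the sketch below (a simplification, not the actual rtspin source): the termination condition is consumed CPU time, so a faster clock just means more iterations within the same budget, and the measured runtime stays the same.

    #include <time.h>

    /* Simplified illustration of an execution-time-based spin (not the
     * actual rtspin code): burn CPU until this thread has consumed
     * exec_ns nanoseconds of processor time.  On a faster processor the
     * loop completes more iterations, but the consumed time -- and hence
     * the job's runtime -- is unchanged. */
    static void spin_for(long long exec_ns)
    {
            struct timespec start, now;
            long long consumed;

            clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start);
            do {
                    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &now);
                    consumed = (now.tv_sec - start.tv_sec) * 1000000000LL
                             + (now.tv_nsec - start.tv_nsec);
            } while (consumed < exec_ns);
    }

    int main(void)
    {
            spin_for(10 * 1000 * 1000LL);  /* made-up 10 ms of CPU time */
            return 0;
    }

If you want a workload whose runtime actually shrinks at higher frequencies, you need a task that performs a fixed amount of work (e.g., a fixed iteration count) rather than a fixed amount of consumed time.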

- Björn



