[LITMUS^RT] question about rtspin -- Sisu

Björn Brandenburg bbb at mpi-sws.org
Thu Jul 3 10:48:54 CEST 2014


On 03 Jul 2014, at 08:08, Sisu Xi <xisisu at gmail.com> wrote:
> 
> Previously (about 1 year ago), when I ran LITMUS^RT in a VM and tested with rtspin, the workload would actually scale according to the CPU resources allocated to the VM.
> For example, if I set the cap of a VM to 50% and ran rtspin with 50% utilization, it became 25%.

This is certainly not intended or supported behavior.

> 
> But now that I have updated LITMUS^RT to the current version, the problem seems to be gone. Has anyone else experienced the same problem?

No, I haven’t heard of this before.

> I just want to confirm with you: does rtspin work fine in a virtualized environment as well?

We don’t run LITMUS^RT in a VM environment, so I don’t know for sure. There is no fundamental reason why it should not work.

rtspin uses the kernel’s notion of “execution time” to determine when to cease spinning, i.e., each job spins until the kernel claims that the job has consumed a certain amount of CPU time. The code of rtspin itself has not changed, so if you are seeing different behavior, then this indicates that the kernel has a different notion of “time spent executing” in a VM in newer kernel versions. This is not something that we changed, but may be due to the newer base kernel version. You would have to dig into how the kernel’s timekeeping implementation works when it is virtualized.
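For reference, here is a minimal sketch of that spinning technique. This is not the actual rtspin source; it uses the standard POSIX per-thread CPU clock (CLOCK_THREAD_CPUTIME_ID), and the helper names exec_time_ns and spin_for are made up for illustration:

	/*
	 * Sketch only: each "job" busy-waits until the kernel-reported
	 * per-thread CPU time has advanced by the desired budget.
	 * Helper names are hypothetical, not from rtspin itself.
	 */
	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>

	/* Ask the kernel how much CPU time this thread has consumed. */
	static uint64_t exec_time_ns(void)
	{
		struct timespec ts;
		clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
		return (uint64_t) ts.tv_sec * 1000000000ull + ts.tv_nsec;
	}

	/* Spin until the kernel claims we executed for budget_ns more. */
	static void spin_for(uint64_t budget_ns)
	{
		uint64_t start = exec_time_ns();
		while (exec_time_ns() - start < budget_ns)
			; /* burn cycles */
	}

	int main(void)
	{
		spin_for(50000000ull); /* simulate a 50 ms job */
		printf("consumed %llu ns of CPU time\n",
		       (unsigned long long) exec_time_ns());
		return 0;
	}

If the kernel’s (or hypervisor’s) CPU-time accounting is skewed under virtualization, a loop like this terminates earlier or later than intended, which would produce exactly the kind of utilization scaling you observed.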

> Also, is it recommended to use rtspin for synthetic workload experiments?

No. rtspin is a debugging helper.

Regards,
Björn
