[LITMUS^RT] - rtspin and shared resource

Björn Brandenburg bbb at mpi-sws.org
Sat Dec 28 15:26:49 CET 2013


On 20.12.2013, at 12:30, Sebastiano Catellani <zebganzo at gmail.com> wrote:
> I'm working on a P-FP scheduler that allows resource sharing across different CPUs.
> 
> As a first baseline I adopted the DPCP, but I'm not able to run a task set that shares a resource.
> 
> I'm using hermanjl's experiment-scripts and my sched.py file looks like the one below:
> 
> -p 1 -z 1 -q 3 -X DPCP -L 1 -Q 1 3 20
> -p 2 -z 1 -q 3 -X DPCP -L 1 -Q 1 3 20
> 
> Two tasks, on different cpus, try to access the same resource.
> 
> This is the error I get when I execute the run_exps.py script:
> 
> Non-zero return 1: /var/nfs/liblitmus/rtspin -w -p 2 -z 1 -q 3 -X DPCP -L 3 -Q 1 3 20 15
> 
> In the exec-err.txt file I found this message:
> 
> litmus_open_lock: Invalid argument
> Error: Could not open lock.
> 
> If I'm not mistaken, both tasks are trying to instantiate a resource with the same ID instead of the usual behavior, i.e. the first task creates the resource and the second one only retrieves a reference to it.

Hi Sebastiano,

the DPCP differs from other locking protocols in that it is a “distributed semaphore protocol”, i.e., each resource is statically allocated to a synchronization processor. This synchronization processor must be specified as litmus_open_lock()’s configuration parameter, and all clients must agree on it. The error you are seeing arises because rtspin passes each task’s assigned partition as the configuration parameter, so tasks on different CPUs end up requesting different synchronization processors for the same lock ID.
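
To illustrate, here is a rough sketch (not actual rtspin code) of how every task sharing a DPCP-protected resource would have to open it via liblitmus with an agreed-upon synchronization processor; CPU 1 is only an example value:

    #include <litmus.h>

    /* Sketch: open a DPCP semaphore with an explicitly chosen
     * synchronization processor. Every task that shares the resource
     * must pass the same CPU here, regardless of its own partition. */
    int open_shared_dpcp_lock(const char *namespace, int resource_id)
    {
            /* CPU to which the resource is statically assigned;
             * the value 1 is purely illustrative. */
            int sync_cpu = 1;

            return litmus_open_lock(DPCP_SEM, resource_id, namespace,
                                    &sync_cpu);
    }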

In other words, the current rtspin implementation simply lacks support for distributed locking protocols like the DPCP. Shared-memory protocols (like the MPCP) and local protocols (like the PCP) work.
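
For instance, your task set should run if you switch to a shared-memory protocol; something along these lines (your sched.py lines with only the protocol swapped) ought to work:

    -p 1 -z 1 -q 3 -X MPCP -L 1 -Q 1 3 20
    -p 2 -z 1 -q 3 -X MPCP -L 1 -Q 1 3 20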

You could patch rtspin to take another parameter to explicitly set the synchronization processor for distributed locking protocols. I’d be happy to merge such a patch.
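
As a starting point, here is a minimal sketch of what such a patch might do; the hypothetical -U option and the helper below are purely illustrative and not existing rtspin code. The idea is to parse one extra option and prefer it over the task’s partition when opening the lock:

    #include <litmus.h>

    /* Illustrative helper: use the explicitly requested synchronization
     * processor if one was given (e.g., via a new -U <cpu> option),
     * otherwise fall back to the task's own partition, which is what
     * rtspin currently passes as the configuration parameter. */
    static int open_lock_with_sync_cpu(obj_type_t protocol, int resource_id,
                                       const char *namespace,
                                       int partition, int sync_cpu)
    {
            int config = (sync_cpu >= 0) ? sync_cpu : partition;
            return litmus_open_lock(protocol, resource_id, namespace, &config);
    }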

Thanks,
Björn