[LITMUS^RT] Fwd: LITMUS^RT on v4.9: call for testing
Andrea Bastoni
bastoni at sprg.uniroma2.it
Wed Mar 8 10:16:38 CET 2017
I didn't hit reply-all; message below.
---------- Forwarded message ----------
From: Andrea Bastoni <bastoni at sprg.uniroma2.it>
Date: Wed, Mar 8, 2017 at 10:12 AM
Subject: Re: [LITMUS^RT] LITMUS^RT on v4.9: call for testing
To: Björn Brandenburg <bbb at mpi-sws.org>
Hi Bjoern,
On Wed, Mar 8, 2017 at 9:02 AM, Björn Brandenburg <bbb at mpi-sws.org> wrote:
>
>> On 6 Mar 2017, at 18:06, Namhoon Kim <namhoonk at cs.unc.edu> wrote:
>>
>> Thanks for the patch! It fixes the compilation error. I ran ./runtests with PFAIR, P-FP, LINUX, GSN-EDF, and PSN-EDF. All tests passed except the SRP tests. I will do more testing, such as running randomly generated task sets or launching Linux tasks with rt_launch.
>>
>
> Hi Namhoon,
>
> I just had a look at Andrea’s compile fix. I think the use of atomic_add_return_relaxed() needs some more thought. It’s used to realize fetch_and_inc(), which is used in two places: once to get a sequence number (in litmus/trace.c), which I think is unproblematic, and once to mark a Feather-Buffer slot as free (in litmus/ftdev.c):
>
> static int ft_buffer_copy_to_user(struct ft_buffer* buf, char __user *dest)
> {
> 	unsigned int idx;
> 	int err = 0;
> 	if (buf->free_count != buf->slot_count) {
> 		/* data available */
> 		idx = buf->read_idx % buf->slot_count;
> 		if (buf->slots[idx] == SLOT_READY) {
> 			err = copy_to_user(dest, ((char*) buf->buffer_mem) +
> 					   idx * buf->slot_size,
> 					   buf->slot_size);
> 			if (err == 0) {
> 				/* copy ok */
> (1) ->				buf->slots[idx] = SLOT_FREE;
> 				buf->read_idx++;
> (2) ->				fetch_and_inc(&buf->free_count);
> 				err = 1;
> 			}
> 		}
> 	}
> 	return err;
> }
>
> Here, I think we probably need _release semantics (all prior stores visible) as otherwise we might make a buffer available to another core @ (2) *before* the write of SLOT_FREE to buf->slots[idx] @ (1) has taken effect. It’s a small race window (the buffer needs to be completely full), but it’s incorrect nonetheless.
I think my fix was too quick. So, yes, above you need release
semantics (but that should be coupled with acquire semantics somewhere,
perhaps where the free count is checked(?) -- why is free_count
incremented atomically, but not read with at least READ_ONCE? -- but
I'm no longer familiar with the code, sorry.)
In the end, yes, a barrier is needed after the
fetch_and_inc(). This _untested_ patch may do the trick:
diff --git a/include/litmus/feather_trace.h b/include/litmus/feather_trace.h
index dbeca46..24dbc16 100644
--- a/include/litmus/feather_trace.h
+++ b/include/litmus/feather_trace.h
@@ -1,7 +1,7 @@
#ifndef _FEATHER_TRACE_H_
#define _FEATHER_TRACE_H_
-#include <asm/atomic.h>
+#include <linux/atomic.h>
int ft_enable_event(unsigned long id);
int ft_disable_event(unsigned long id);
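[Editor's note: the acquire side Andrea asks about can be sketched the same way, again with C11 atomics as a toy model. slot_observed_free() is a hypothetical helper, not part of ftdev.c.]

```c
#include <stdatomic.h>

#define SLOT_FREE 0

/* Toy model of the free-count check; C11 atomics stand in for the
 * kernel primitives. */
static atomic_uint free_count;
static _Atomic int slot = SLOT_FREE;

/* Writer side: an acquire load of free_count pairs with the reader's
 * release increment, so once a nonzero count is seen, the SLOT_FREE
 * store that preceded the increment is guaranteed visible too. */
static int slot_observed_free(void)
{
	if (atomic_load_explicit(&free_count, memory_order_acquire) == 0)
		return 0;
	return atomic_load_explicit(&slot, memory_order_relaxed) == SLOT_FREE;
}
```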
Thanks,
Andrea
> However, we also actually don’t need atomic_add_return(); we only need atomic_inc(). But on ARM, atomic_inc() is just atomic_add().
>
> http://lxr.free-electrons.com/source/arch/arm/include/asm/atomic.h?v=4.9#L257
>
> And ARM doesn’t seem to define atomic_add():
>
> http://lxr.free-electrons.com/ident?v=4.9;i=atomic_add
>
> So it falls back to the default definition, which is atomic_add_return(); exactly where we started.
>
> http://lxr.free-electrons.com/source/include/asm-generic/atomic.h?v=4.9#L196
>
> There are tons of users of atomic_inc() in kernel/ that compile just fine on ARM. So if all uses of atomic_inc() map to atomic_add_return() on ARM, why can’t we use atomic_add_return() in our code? What am I missing here?
>
> Bottom line: I don't understand the problem, and if we really can't keep using atomic_add_return() on ARM, I'm not convinced that just substituting atomic_add_return_relaxed() is right here.
>
> Namhoon, can you please investigate?
>
> Thanks,
> Björn
>