[LITMUS^RT] New release: 2012.3

Felipe Cerqueira felipeqcerqueira at gmail.com
Wed Jan 16 18:58:32 CET 2013


No, no, actually the best solution is to put an #ifdef for ARM inside the
Litmus code and call cmpxchg64() instead of cmpxchg(). That way we get
better compatibility with the newest kernels.

For now, we can copy the cmpxchg64() macro from 3.7.
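
Something along these lines (untested sketch; litmus_cmpxchg64() is just a
placeholder name for a Litmus-side wrapper, not an existing symbol):

#include <linux/atomic.h>	/* atomic64_cmpxchg() */
#include <linux/kernel.h>	/* container_of() */

#ifdef CONFIG_ARM
#ifndef cmpxchg64
/* Copy of the 3.7 macro, for older ARM kernels that only provide
 * atomic64_cmpxchg(). */
#define cmpxchg64(ptr, o, n)                                          \
    ((__typeof__(*(ptr)))atomic64_cmpxchg(container_of((ptr),         \
                                                       atomic64_t,    \
                                                       counter),      \
                                          (unsigned long)(o),         \
                                          (unsigned long)(n)))
#endif
/* On 32-bit ARM, cmpxchg() cannot handle 8-byte operands, so any
 * 64-bit compare-and-swap in Litmus has to go through cmpxchg64(). */
#define litmus_cmpxchg64(ptr, o, n)	cmpxchg64((ptr), (o), (n))
#else
/* Elsewhere, the plain cmpxchg() already covers 64-bit operands. */
#define litmus_cmpxchg64(ptr, o, n)	cmpxchg((ptr), (o), (n))
#endif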

2013/1/16 Felipe Cerqueira <felipeqcerqueira at gmail.com>

>
>
> 2013/1/16 Björn Brandenburg <bbb at mpi-sws.org>
>
>>
>> On Jan 16, 2013, at 5:31 PM, Glenn Elliott <gelliott at cs.unc.edu> wrote:
>>
>> > Also, and probably the biggest issue going forward, testing on ARM is
>> > difficult since ARM kernels are so fragmented. We have to port Litmus
>> > to each unique kernel devised for each of our ARM platforms.
>>
>> One more reason to rebase to a newer Linux version; they consolidated the
>> ARM support in recent versions.
>>
>
> Unfortunately, 64-bit ARM processors (the Cortex-A50 series) will only
> start to be produced in 2014.
>
> Good news though...
> I found that Linux implements atomic64_cmpxchg() in
> arch/arm/include/asm/atomic.h, even in the 3.0 kernel, probably for the
> older ARMv6K boards.
> In the first v7 instruction set, ARM removed support for LDREXD; it only
> came back in v7-A/R. The problem is that this distinction is not handled
> in the Linux headers.
>
> The 3.7 kernel does a slightly better job, providing access to
> atomic64_cmpxchg() via another macro:
>
> #define cmpxchg64(ptr, o, n)                                          \
>     ((__typeof__(*(ptr)))atomic64_cmpxchg(container_of((ptr),         \
>                                                        atomic64_t,    \
>                                                        counter),      \
>                                           (unsigned long)(o),         \
>                                           (unsigned long)(n)))
>
>
> /*
>  * ARMv6 UP and SMP safe atomic ops.  We use load exclusive and
>  * store exclusive to ensure that these are atomic.  We may loop
>  * to ensure that the update happens.
>  */
> [...]
>
> static inline u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old, u64 new)
> {
>     u64 oldval;
>     unsigned long res;
>
>     smp_mb();
>
>     do {
>         __asm__ __volatile__("@ atomic64_cmpxchg\n"
>         "ldrexd        %1, %H1, [%3]\n"
>         "mov        %0, #0\n"
>         "teq        %1, %4\n"
>         "teqeq        %H1, %H4\n"
>         "strexdeq    %0, %5, %H5, [%3]"
>         : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
>         : "r" (&ptr->counter), "r" (old), "r" (new)
>         : "cc");
>     } while (res);
>
>     smp_mb();
>
>     return oldval;
> }
>
>
> So, we can simply edit the normal cmpxchg() macro, add a new #ifdef for
> CONFIG_ARMV7AR, and add support for 64-bit operands.
> That way, we won't need to modify the Litmus code.
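>
> The edited macro in arch/arm/include/asm/system.h could look roughly like
> this (untested sketch; CONFIG_ARMV7AR is a placeholder for whatever
> Kconfig symbol we end up adding, and it assumes __LINUX_ARM_ARCH__ >= 6
> so that __cmpxchg_mb() and atomic64_cmpxchg() are available):
>
> /* 64-bit variant built on the LDREXD/STREXD-based helper above. */
> #define __cmpxchg64_arm(ptr, o, n)                                    \
>     ((__typeof__(*(ptr)))atomic64_cmpxchg((atomic64_t *)(ptr),        \
>                                           (u64)(o), (u64)(n)))
>
> #ifdef CONFIG_ARMV7AR
> /* __cmpxchg() returns unsigned long, so the 8-byte case has to be
>  * dispatched here at the macro level, not inside its size switch. */
> #define cmpxchg(ptr, o, n)                                            \
>     (sizeof(*(ptr)) == 8 ?                                            \
>      __cmpxchg64_arm((ptr), (o), (n)) :                               \
>      ((__typeof__(*(ptr)))__cmpxchg_mb((ptr),                         \
>                                        (unsigned long)(o),            \
>                                        (unsigned long)(n),            \
>                                        sizeof(*(ptr)))))
> #else
> /* Original 32-bit-only definition stays as-is. */
> #define cmpxchg(ptr, o, n)                                            \
>     ((__typeof__(*(ptr)))__cmpxchg_mb((ptr), (unsigned long)(o),      \
>                                       (unsigned long)(n),             \
>                                       sizeof(*(ptr))))
> #endif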
>
> Thanks,
> Felipe
>