<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>On Jan 16, 2013, at 12:52 PM, Felipe Cerqueira <<a href="mailto:felipeqcerqueira@gmail.com">felipeqcerqueira@gmail.com</a>> wrote:<br><br></div><blockquote type="cite"><div><br><br><div class="gmail_quote">2013/1/16 Björn Brandenburg <span dir="ltr"><<a href="mailto:bbb@mpi-sws.org" target="_blank">bbb@mpi-sws.org</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><br>
On Jan 16, 2013, at 5:31 PM, Glenn Elliott <<a href="mailto:gelliott@cs.unc.edu">gelliott@cs.unc.edu</a>> wrote:<br>
<br>
> Also, and probably the biggest issue going forward, testing on ARM is difficult since ARM kernels are so fragmented. We have to port Litmus to each unique kernel devised for each of our ARM platforms.<br>
<br>
</div>One more reason to rebase to a newer Linux version; they consolidated the ARM support in recent versions.<br></blockquote><div><br>Unfortunately, 64-bit ARM processors will only enter production in 2014 (the Cortex-A50 series). <br>
<br>Good news though...<br>I found that Linux implements atomic64_cmpxchg() in arch/arm/include/asm/atomic.h, even in the 3.0 kernel, probably for the older ARMv6K boards.<br>In the first v7 instruction set ARM removed support for LDREXD, and it only came back in v7-AR. The problem is that the Linux headers do not account for this.<br>
<br>The 3.7 kernel does a slightly better job by exposing atomic64_cmpxchg() through another macro:<br><br>#define cmpxchg64(ptr, o, n) \<br> ((__typeof__(*(ptr)))atomic64_cmpxchg(container_of((ptr), \<br>
atomic64_t, \<br> counter), \<br> (unsigned long)(o), \<br> (unsigned long)(n)))<br><br><br>/*<br> * ARMv6 UP and SMP safe atomic ops. We use load exclusive and<br>
* store exclusive to ensure that these are atomic. We may loop<br> * to ensure that the update happens.<br> */<br>[...]<br><br>static inline u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old, u64 new)<br>{<br> u64 oldval;<br>
unsigned long res;<br><br> smp_mb();<br><br> do {<br> __asm__ __volatile__("@ atomic64_cmpxchg\n"<br> "ldrexd %1, %H1, [%3]\n"<br> "mov %0, #0\n"<br>
"teq %1, %4\n"<br> "teqeq %H1, %H4\n"<br> "strexdeq %0, %5, %H5, [%3]"<br> : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)<br>
: "r" (&ptr->counter), "r" (old), "r" (new)<br> : "cc");<br> } while (res);<br><br> smp_mb();<br><br> return oldval;<br>}<br><br><br>So, we can simply extend the normal cmpxchg() code: add a new #ifdef CONFIG_ARMV7AR and provide 64-bit support behind it.<br>
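A rough sketch of what that could look like, untested and purely illustrative: CONFIG_ARMV7AR is the hypothetical Kconfig symbol proposed here, not an existing kernel option. Note that since __cmpxchg() returns unsigned long (32-bit on ARM), it may be safer to backport the 3.7 cmpxchg64() wrapper quoted above into the 3.0 headers rather than widen cmpxchg() itself:

```c
/* Sketch only (untested): backport of the 3.7 cmpxchg64() wrapper into
 * the 3.0 ARM headers, guarded by the hypothetical CONFIG_ARMV7AR
 * symbol proposed in this thread.  atomic64_cmpxchg() already exists
 * in 3.0, so the wrapper just routes to it, mirroring the 3.7 macro. */
#ifdef CONFIG_ARMV7AR
#define cmpxchg64(ptr, o, n)						\
	((__typeof__(*(ptr)))atomic64_cmpxchg(container_of((ptr),	\
						atomic64_t,		\
						counter),		\
					      (unsigned long)(o),	\
					      (unsigned long)(n)))
#endif
```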
That way, we won't need to modify Litmus code.<br><br>Thanks,<br>Felipe</div></div></div></blockquote><br><div>Echoing Andrea, what is the motivation for 64-bit in Litmus for this case?</div></body></html>