
[Xen-devel] Re: [PATCH 1 of 4] mm: add a ptep_modify_prot transaction abstraction



* Ingo Molnar <mingo@xxxxxxx> wrote:

> 
> * Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > On Thu, 19 Jun 2008, Ingo Molnar wrote:
> > > 
> > > Below is the commit; it needed a small amount of massaging to apply the 
> > > void * -> unsigned long * change in the x86/bitops topic.
> > 
> > Well, that's your bug right there.
> > 
> > The macros very much depended on the pointers being "void *", due to 
> > the pointer arithmetic (which is a gcc extension that we use 
> > extensively - "void *" arithmetic works as if it was a byte 
> > pointer).
> 
> duh, yeah - of course. Will retry with that fixed :)
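
Linus's point about the pointer arithmetic is easy to see in isolation. A
minimal, standalone illustration (not part of the patch; the variable names
are made up, and it relies on GCC's "void *" arithmetic extension, which
steps in bytes rather than in element sizes):

        #include <stdio.h>

        int main(void)
        {
                unsigned long word[2] = { 0, 0 };
                unsigned int nr = 9;    /* bit 9 lives in byte 1 of the bitmap */

                /* GCC extension: "void *" arithmetic steps in bytes */
                unsigned char *byte_ok  = (void *)word + (nr >> 3);

                /*
                 * "long *" arithmetic steps in sizeof(long) units, so the
                 * same "+ (nr >> 3)" lands sizeof(long) bytes per step away:
                 */
                unsigned char *byte_bad = (unsigned char *)((long *)word + (nr >> 3));

                printf("offset via void *: %td\n", byte_ok  - (unsigned char *)word);
                printf("offset via long *: %td\n", byte_bad - (unsigned char *)word);

                return 0;
        }

On a 64-bit build this prints 1 and 8: with "long *" the byte offset is
scaled by sizeof(long), so the constant-bit fast path pokes a byte far past
the one it meant to, which is the integration bug the patch below fixes.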

yep, the patch below got it all going and it passed 5 hours of testing 
already. Thanks,

        Ingo

------------------->
commit 7dbceaf9bb68919651901b101f44edd5391ee489
Author: Ingo Molnar <mingo@xxxxxxx>
Date:   Fri Jun 20 07:28:24 2008 +0200

    x86, bitops: make constant-bit set/clear_bit ops faster, adapt, clean up
    
    fix integration bug introduced by "x86: bitops take an unsigned long *"
    which turned "(void *) + x" into "(long *) + x".
    
    small cleanups to make it more apparent which values get propagated where.
    
    Signed-off-by: Ingo Molnar <mingo@xxxxxxx>

diff --git a/include/asm-x86/bitops.h b/include/asm-x86/bitops.h
index ab7635a..6c50548 100644
--- a/include/asm-x86/bitops.h
+++ b/include/asm-x86/bitops.h
@@ -28,16 +28,15 @@
 #define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
 #endif
 
-#define ADDR BITOP_ADDR(addr)
+#define ADDR                           BITOP_ADDR(addr)
 
 /*
  * We do the locked ops that don't return the old value as
  * a mask operation on a byte.
  */
-#define IS_IMMEDIATE(nr) \
-       (__builtin_constant_p(nr))
-#define CONST_MASK_ADDR BITOP_ADDR(addr + (nr>>3))
-#define CONST_MASK (1 << (nr & 7))
+#define IS_IMMEDIATE(nr)               (__builtin_constant_p(nr))
+#define CONST_MASK_ADDR(nr, addr)      BITOP_ADDR((void *)(addr) + ((nr)>>3))
+#define CONST_MASK(nr)                 (1 << ((nr) & 7))
 
 /**
  * set_bit - Atomically set a bit in memory
@@ -56,13 +55,17 @@
  */
 static inline void set_bit(unsigned int nr, volatile unsigned long *addr)
 {
-       if (IS_IMMEDIATE(nr))
-               asm volatile(LOCK_PREFIX "orb %1,%0" : CONST_MASK_ADDR : "i" (CONST_MASK) : "memory");
-       else
-               asm volatile(LOCK_PREFIX "bts %1,%0" : ADDR : "Ir" (nr) : "memory");
+       if (IS_IMMEDIATE(nr)) {
+               asm volatile(LOCK_PREFIX "orb %1,%0"
+                       : CONST_MASK_ADDR(nr, addr)
+                       : "i" (CONST_MASK(nr))
+                       : "memory");
+       } else {
+               asm volatile(LOCK_PREFIX "bts %1,%0"
+                       : BITOP_ADDR(addr) : "Ir" (nr) : "memory");
+       }
 }
 
-
 /**
  * __set_bit - Set a bit in memory
  * @nr: the bit to set
@@ -89,10 +92,15 @@ static inline void __set_bit(int nr, volatile unsigned long *addr)
  */
 static inline void clear_bit(int nr, volatile unsigned long *addr)
 {
-       if (IS_IMMEDIATE(nr))
-               asm volatile(LOCK_PREFIX "andb %1,%0" : CONST_MASK_ADDR : "i" (~CONST_MASK));
-       else
-               asm volatile(LOCK_PREFIX "btr %1,%0" : ADDR : "Ir" (nr));
+       if (IS_IMMEDIATE(nr)) {
+               asm volatile(LOCK_PREFIX "andb %1,%0"
+                       : CONST_MASK_ADDR(nr, addr)
+                       : "i" (~CONST_MASK(nr)));
+       } else {
+               asm volatile(LOCK_PREFIX "btr %1,%0"
+                       : BITOP_ADDR(addr)
+                       : "Ir" (nr));
+       }
 }
 
 /*
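
For a compile-time-constant nr, the net effect of the new
CONST_MASK_ADDR(nr, addr)/CONST_MASK(nr) pair is a locked byte-wide or/and
with an immediate mask. A rough C model of the address and mask they compute
(illustrative only -- the helper names are made up, and the real code does
the operation with LOCK_PREFIX inside inline asm, which this sketch does not
reproduce):

        #include <stdio.h>

        /* what the "orb" path computes, minus the lock prefix and asm */
        static void set_bit_byteop(unsigned int nr, volatile unsigned long *addr)
        {
                volatile unsigned char *byte =
                        (volatile unsigned char *)addr + (nr >> 3);

                *byte |= 1 << (nr & 7);         /* orb $mask, addr+(nr>>3) */
        }

        /* what the "andb" path computes, minus the lock prefix and asm */
        static void clear_bit_byteop(unsigned int nr, volatile unsigned long *addr)
        {
                volatile unsigned char *byte =
                        (volatile unsigned char *)addr + (nr >> 3);

                *byte &= ~(1 << (nr & 7));      /* andb $~mask, addr+(nr>>3) */
        }

        int main(void)
        {
                unsigned long bitmap[2] = { 0, 0 };

                set_bit_byteop(13, bitmap);     /* byte 1, mask 0x20 */
                printf("after set:   %#lx\n", bitmap[0]);

                clear_bit_byteop(13, bitmap);
                printf("after clear: %#lx\n", bitmap[0]);

                return 0;
        }

The byte-wise addressing matches the kernel's bit numbering because x86 is
little-endian; on such a box this prints 0x2000 and then 0. The point of the
constant-nr fast path is that the compiler can fold addr + (nr >> 3) into
the memory operand and the mask into an immediate, rather than using
bts/btr with a register operand.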

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

