Re: [Xen-devel] [PATCH V4] x86 spinlock: Fix memory corruption on completing completions
On 02/13, Raghavendra K T wrote:
>
> @@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  {
>  	struct __raw_tickets tmp = READ_ONCE(lock->tickets);
>
> -	return tmp.tail != tmp.head;
> +	return tmp.tail != (tmp.head & ~TICKET_SLOWPATH_FLAG);
>  }

Well, this can probably use __tickets_equal() too. But this is cosmetic.

It seems that arch_spin_is_contended() should be fixed with this change:

	(__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC

can be true because of TICKET_SLOWPATH_FLAG in .head, even if the lock is
actually unlocked. And the "(__ticket_t)" typecast looks unnecessary, it
only adds more confusion, but this is cosmetic too.

> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>  	 * check again make sure it didn't become free while
>  	 * we weren't looking.
>  	 */
> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
> +	head = READ_ONCE(lock->tickets.head);
> +	if (__tickets_equal(head, want)) {
>  		add_stats(TAKEN_SLOW_PICKUP, 1);
>  		goto out;

This is off-topic, but with or without this change perhaps it makes sense
to add smp_mb__after_atomic(). It is a nop on x86, just to make this code
more understandable for those (for me ;) who can never remember even the
x86 rules.

Oleg.
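To make the arch_spin_is_contended() concern concrete, below is a small self-contained sketch (plain userspace C, not kernel code) of the failure mode and of the fix shape the review suggests. The type width, the TICKET_LOCK_INC and TICKET_SLOWPATH_FLAG values, and the body of __tickets_equal() are assumptions modeled on the quoted hunks, not the actual kernel definitions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Assumed constants/types modeled on the x86 paravirt ticket-lock
 * layout discussed above; not the verbatim kernel definitions. */
typedef unsigned short __ticket_t;
#define TICKET_LOCK_INC		((__ticket_t)2)
#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)

struct __raw_tickets { __ticket_t head, tail; };

/* Assumed shape of __tickets_equal(): equality that ignores the
 * slowpath flag, like the arch_spin_is_locked() hunk above does. */
static bool __tickets_equal(__ticket_t one, __ticket_t two)
{
	return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
}

/* Current check: the flag left in .head inflates (tail - head). */
static bool contended_unmasked(struct __raw_tickets t)
{
	return (__ticket_t)(t.tail - t.head) > TICKET_LOCK_INC;
}

/* Check in the spirit of the review: strip the flag before comparing. */
static bool contended_masked(struct __raw_tickets t)
{
	t.head &= ~TICKET_SLOWPATH_FLAG;
	return (__ticket_t)(t.tail - t.head) > TICKET_LOCK_INC;
}

int main(void)
{
	/* Unlocked lock: head == tail apart from the slowpath flag in .head. */
	struct __raw_tickets t = { .head = 4 | TICKET_SLOWPATH_FLAG, .tail = 4 };

	printf("locked?   %d\n", !__tickets_equal(t.head, t.tail));      /* 0 */
	printf("unmasked: %d  (bogus contention)\n", contended_unmasked(t)); /* 1 */
	printf("masked:   %d\n", contended_masked(t));                   /* 0 */
	return 0;
}
```

With the flag stripped, the "unlocked but reported contended" case goes away, mirroring the masking already done in the quoted arch_spin_is_locked() hunk.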