Re: [Xen-devel] [PATCH V4] x86 spinlock: Fix memory corruption on completing completions
On 02/15, Raghavendra K T wrote:
>
> On 02/13/2015 09:02 PM, Oleg Nesterov wrote:
>
>>> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>>  	 * check again make sure it didn't become free while
>>>  	 * we weren't looking.
>>>  	 */
>>> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
>>> +	head = READ_ONCE(lock->tickets.head);
>>> +	if (__tickets_equal(head, want)) {
>>>  		add_stats(TAKEN_SLOW_PICKUP, 1);
>>>  		goto out;
>>
>> This is off-topic, but with or without this change perhaps it makes sense
>> to add smp_mb__after_atomic(). It is a nop on x86, just to make this code
>> more understandable for those (for me ;) who can never remember even the
>> x86 rules.
>
> Hope you meant it for add_stat.

No, no. We need a barrier between set_bit(SLOWPATH) and tickets_equal().
Yes, on x86 set_bit() can't be reordered, so smp_mb_*_atomic() is a nop,
but it can make the code more understandable.

> yes, smp_mb__after_atomic() would be a
> harmless barrier() on x86. Did not add this in V5 as you thought, but this
> made me look at the slowpath_enter code and added an explicit barrier()
> there :).

Well, it looks even more confusing than the lack of a barrier ;)

Oleg.
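For readers following the barrier discussion, below is a minimal sketch of the ordering in question, modeled on the 3.19-era paravirt ticket-lock slowpath. It is not the actual arch/x86/kernel/kvm.c code: the helper name slowpath_recheck() is illustrative, while set_bit(), smp_mb__after_atomic(), READ_ONCE() and __tickets_equal() (the last added by the quoted patch) are real kernel primitives.

    #include <linux/bitops.h>       /* set_bit() */
    #include <linux/compiler.h>     /* READ_ONCE() */
    #include <asm/barrier.h>        /* smp_mb__after_atomic() */
    #include <asm/spinlock_types.h> /* arch_spinlock_t, __ticket_t */

    static bool slowpath_recheck(arch_spinlock_t *lock, __ticket_t want)
    {
            __ticket_t head;

            /*
             * Advertise that a waiter has entered the slow path; in
             * these kernels bit 0 of head carries TICKET_SLOWPATH_FLAG.
             */
            set_bit(0, (volatile unsigned long *)&lock->tickets.head);

            /*
             * Oleg's point: the set_bit() above must not be reordered
             * with the re-read of head below.  x86 atomics already
             * guarantee this, so smp_mb__after_atomic() expands to a
             * plain compiler barrier() here; its value is documentation.
             */
            smp_mb__after_atomic();

            /* Re-check whether the lock became free in the meantime. */
            head = READ_ONCE(lock->tickets.head);
            return __tickets_equal(head, want); /* flag bit is ignored */
    }

If the recheck succeeds, the waiter can take the lock immediately; otherwise it halts and waits for the unlock path to kick it, which is exactly the window the barrier discussion concerns.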