Re: [Xen-devel] [PATCH] x86 spinlock: Fix memory corruption on completing completions
On Tue, Feb 10, 2015 at 10:30 AM, Raghavendra K T
<raghavendra.kt@xxxxxxxxxxxxxxxxxx> wrote:
> On 02/10/2015 06:23 AM, Linus Torvalds wrote:
>>          add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>>          if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) ..
>>
>> into something like
>>
>>          val = xadd((&lock->ticket.head_tail, TICKET_LOCK_INC << TICKET_SHIFT);
>>          if (unlikely(val & TICKET_SLOWPATH_FLAG)) ...
>>
>> would be the right thing to do. Somebody should just check that I got
>> that shift right, and that the tail is in the high bytes (head really
>> needs to be high to work, if it's in the low byte(s) the xadd would
>> overflow from head into tail which would be wrong).
>
> Unfortunately xadd could result in head overflow as tail is high.

xadd can overflow, but is this really a problem?

#define HEAD_MASK (TICKET_SLOWPATH_FLAG - 1)

...
unlock_again:

	val = xadd(&lock->ticket.head_tail, TICKET_LOCK_INC);
	if (unlikely(!(val & HEAD_MASK))) {
		/* Overflow: we inadvertently incremented the tail word.
		 * The tail's lsb is TICKET_SLOWPATH_FLAG, and the increment
		 * inverted that bit, so fix it up.
		 * (The increment _may_ have messed up the tail counter too;
		 * we deal with that after the kick.)
		 */
		val ^= TICKET_SLOWPATH_FLAG;
	}

	if (unlikely(val & TICKET_SLOWPATH_FLAG)) {
		...kick the waiting task...

		val -= TICKET_SLOWPATH_FLAG;
		if (unlikely(!(val & HEAD_MASK))) {
			/* Overflow: we inadvertently incremented the tail word,
			 * *and* TICKET_SLOWPATH_FLAG was set; the increment
			 * overflowed that bit too and incremented the tail counter.
			 * This means we are (inadvertently) taking the lock again!
			 * Oh well. Take it, and unlock it again...
			 */
			while (READ_ONCE(lock->tickets.head) != TICKET_TAIL(val))
				cpu_relax();
			goto unlock_again;
		}
	}

Granted, this looks ugly.
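
To see concretely what the overflow does, here is a minimal user-space sketch
(my own illustration, not the kernel code). It assumes the NR_CPUS < 256
paravirt layout: 8-bit tickets, head in the low byte of head_tail on
little-endian x86, TICKET_LOCK_INC == 2, and the slowpath flag in the tail's
low bit. A single xadd on the whole word carries out of the head and flips
that flag:

/* Build: gcc -O2 ticket-overflow.c && ./a.out  (simulation only) */
#include <stdint.h>
#include <stdio.h>

typedef uint8_t  ticket_t;
typedef uint16_t ticketpair_t;

#define TICKET_LOCK_INC       ((ticket_t)2)   /* paravirt: tickets step by 2 */
#define TICKET_SLOWPATH_FLAG  ((ticket_t)1)   /* lives in the tail's lsb */

typedef union {
	ticketpair_t head_tail;
	struct { ticket_t head, tail; } tickets; /* little-endian: head is the low byte */
} spinlock_sim_t;

int main(void)
{
	spinlock_sim_t lock;

	/* Head one increment away from wrapping; slowpath flag in the tail is clear. */
	lock.tickets.head = (ticket_t)(0 - TICKET_LOCK_INC);  /* 0xfe */
	lock.tickets.tail = 0x40;

	/* The proposed single-xadd unlock: add TICKET_LOCK_INC to the whole word. */
	ticketpair_t old = __atomic_fetch_add(&lock.head_tail, TICKET_LOCK_INC,
					      __ATOMIC_RELEASE);

	printf("old head_tail = 0x%04x\n", old);               /* 0x40fe */
	printf("new head      = 0x%02x\n", lock.tickets.head); /* wrapped to 0x00 */
	printf("new tail      = 0x%02x\n", lock.tickets.tail); /* carry flipped bit 0: 0x41 */
	printf("tail slowpath bit is now %s\n",
	       (lock.tickets.tail & TICKET_SLOWPATH_FLAG) ? "set" : "clear");
	return 0;
}

That flipped bit is exactly what the "val ^= TICKET_SLOWPATH_FLAG" fixup in
the sketch above is compensating for.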