
Re: [Xen-devel] [PATCH] x86 spinlock: Fix memory corruption on completing completions



On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> I agree, and I have to admit I am not sure I fully understand why
> unlock uses the locked add. Except we need a barrier to avoid the race
> with the enter_slowpath() users, of course. Perhaps this is the only
> reason?

Right now it needs to be a locked operation to prevent read reordering.
x86 memory-ordering rules state that all writes are seen in a globally
consistent order, and are globally ordered wrt reads *of the same
address*, but a read can be reordered ahead of an earlier write to a
different address.
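
For reference, here is roughly what the current locked fast path looks
like. This is only a sketch: the layout, constants and names approximate
the ticket-lock definitions rather than quoting them, and the waiter
kick is elided.

        #include <stdint.h>

        #define TICKET_LOCK_INC       2        /* illustrative values only */
        #define TICKET_SLOWPATH_FLAG  1

        typedef struct {
                union {
                        uint32_t head_tail;
                        struct { uint16_t head, tail; } tickets;
                };
        } tkt_lock_t;

        static inline void tkt_unlock(tkt_lock_t *lock)
        {
                /* Locked RMW: the LOCK prefix is a full barrier on x86, so the
                 * read of tickets.tail below cannot be hoisted above it. */
                __sync_fetch_and_add(&lock->tickets.head, TICKET_LOCK_INC);

                if (lock->tickets.tail & TICKET_SLOWPATH_FLAG)
                        ;       /* kick the blocked waiter (__ticket_unlock_slowpath) */
        }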

So, if the unlocking add were not a locked operation:

        __add(&lock->tickets.head, TICKET_LOCK_INC);            /* not locked */

        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
            __ticket_unlock_slowpath(lock, prev);

Then the read of lock->tickets.tail can be reordered before the unlock,
which introduces a race:

        /* read reordered here */
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) /* false */
            /* ... */;

        /* other CPU sets SLOWPATH and blocks */

        __add(&lock->tickets.head, TICKET_LOCK_INC);            /* not locked */

        /* other CPU hung */

So it doesn't *have* to be a locked operation. This should also work:

        __add(&lock->tickets.head, TICKET_LOCK_INC);            /* not locked */

        mfence();                                    /* full barrier: the read below
                                                        must not pass the store above */
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
            __ticket_unlock_slowpath(lock, prev);

but in practice a locked add is cheaper than an mfence (or at least was).
(An lfence would not be enough here: it doesn't drain the store buffer,
so the read of tail could still be satisfied before the head store
becomes globally visible.)
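
(For concreteness, the mfence() above is just shorthand; in GCC-style
code it would be something like the following sketch, where the "memory"
clobber also keeps the compiler from reordering the two accesses:)

        static inline void mfence(void)
        {
                asm volatile("mfence" ::: "memory");
        }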

This *might* be OK, but I think it's on dubious ground:

        __add(&lock->tickets.head, TICKET_LOCK_INC);            /* not locked */

        /* read overlaps write, and so is ordered */
        if (unlikely(lock->head_tail & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT)))
            __ticket_unlock_slowpath(lock, prev);

because I think Intel and AMD differ in how they order overlapping but
different-sized reads and writes (or it simply isn't architecturally
defined).

If the slowpath flag is moved to head, then the unlock add would always
have to be locked anyway, because it needs to be atomic against other
CPUs' RMW operations setting the flag.
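
(A hedged sketch of that case, reusing the illustrative tkt_lock_t from
above -- not proposed code. The unlock must be a locked RMW that returns
the old head, so a flag set concurrently by another CPU's locked RMW can
never be lost or missed between a separate load and store; clearing the
flag is left out here.)

        static inline void tkt_unlock_flag_in_head(tkt_lock_t *lock)
        {
                /* Atomic increment that also returns the prior value of head,
                 * including any SLOWPATH flag a waiter set concurrently. */
                uint16_t old = __sync_fetch_and_add(&lock->tickets.head,
                                                    TICKET_LOCK_INC);

                if (old & TICKET_SLOWPATH_FLAG)
                        ;       /* kick the blocked waiter */
        }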

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel