Re: [Xen-devel] [PATCHv3 3/4] xen: use ticket locks for spin locks
At 14:43 +0100 on 23 Apr (1429800229), David Vrabel wrote:
> On 23/04/15 13:03, Tim Deegan wrote:
> > Hi,
> >
> > At 11:11 +0100 on 21 Apr (1429614687), David Vrabel wrote:
> >> void _spin_lock(spinlock_t *lock)
> >> {
> >> + spinlock_tickets_t tickets = { .tail = 1, };
> >> LOCK_PROFILE_VAR;
> >>
> >> check_lock(&lock->debug);
> >> - while ( unlikely(!_raw_spin_trylock(&lock->raw)) )
> >> + tickets.head_tail = xadd(&lock->tickets.head_tail, tickets.head_tail);
> >> + while ( tickets.tail != observe_head(&lock->tickets) )
> >> {
> >> LOCK_PROFILE_BLOCK;
> >> - while ( likely(_raw_spin_is_locked(&lock->raw)) )
> >> - cpu_relax();
> >> + cpu_relax();
> >> }
> >> LOCK_PROFILE_GOT;
> >> preempt_disable();
> >
> > I think you need an smp_mb() here to make sure that locked accesses
> > don't get hoisted past the wait-for-my-ticket loop by an out-of-order
> > (ARM) cpu.
>
> Ok, but smp_mb() turns into an mfence on x86. Is this a
> problem/sub-optimal?
So, having chased this around my head for a while, I think you're
right.  Expanding this code a bit, I think the important ops are:

  (in observe_head())
      smp_rmb()    (== barrier())
  [ POINT A ]
  read lock and see that we have acquired it
  (in preempt_disable())
      increment disable count (this is both a read and a write)
      barrier();
  [ POINT B ]
A read after point B can't see the unlocked state, because of the
second compiler barrier and the fact that x86 won't reorder it past
the read in observe_head().
A write after point B can't be observed before we have the lock
because
1) the second barrier stops the compiler reordering it before the increment;
2) x86 won't make it visible before the write half of the increment;
3) the write half of the increment can't happen before the read half; and
4) the read half can't happen before the read in observe_head().
I'm not 100% sure about (3), as it happens; maybe either the compiler
or the CPU might do something unexpected there?
So probably, on x86, we don't need the mfence. A bit fragile,
though, relying on the internals of preempt_disable().
Tim.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel