RE: [PATCH for-4.15] xen/sched: Add missing memory barrier in vcpu_block()
Hi Julien,

Thanks for looking at this.

> vcpu_block() is now gaining an smp_mb__after_atomic() to prevent the
> CPU to read any information about local events before the flag
> _VPF_blocked is set.

Reviewed-by: Ash Wilding <ash.j.wilding@xxxxxxxxx>

As an aside,

> I couldn't convince myself whether the Arm implementation of
> local_events_need_delivery() contains enough barrier to prevent the
> re-ordering. However, I don't think we want to play with devil here
> as the function may be optimized in the future.

Agreed. Both vgic_vcpu_pending_irq() and vgic_evtchn_irq_pending(), in the
call path of local_events_need_delivery(), call spin_lock_irqsave(), which
has an arch_lock_acquire_barrier() in its call path. That just happens to
map to a heavier smp_mb() on Arm right now, but relying on this behaviour
would be shaky; I can imagine a future update to arch_lock_acquire_barrier()
that relaxes it down to just acquire semantics, as its name implies (for
example an LSE-based lock_acquire() using LDUMAXA), in which case any code
incorrectly relying on that full barrier behaviour may break.

I'm guessing this is what you meant by "the function may be optimized in
the future"?

Do we know whether there is an expectation for previous loads/stores to
have been observed before local_events_need_delivery()? I'm wondering
whether it would make sense to have an smp_mb() at the start of the
*_nomask() variant in local_events_need_delivery()'s call path.

Doing so would obviate the need for this particular patch, though it would
not obviate the need to identify and fix similar set_bit() patterns.

Cheers,
Ash
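
P.S. For anyone following along, here is a rough sketch of the ordering
pattern under discussion (simplified and paraphrased from memory, not the
exact Xen source):

    void vcpu_block(void)
    {
        struct vcpu *v = current;

        set_bit(_VPF_blocked, &v->pause_flags);

        /*
         * Order the store setting _VPF_blocked before the loads done by
         * local_events_need_delivery(). Without this, the CPU may read
         * the event state early and the vCPU could block despite having
         * a pending event, missing its wake-up.
         */
        smp_mb__after_atomic();

        if ( local_events_need_delivery() )
            clear_bit(_VPF_blocked, &v->pause_flags);
        else
            raise_softirq(SCHEDULE_SOFTIRQ);
    }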