
[Xen-devel] [PATCH 2/3] x86/xen: disable preemption when enabling local irqs



From: David Vrabel <david.vrabel@xxxxxxxxxx>

If CONFIG_PREEMPT is enabled then xen_irq_enable() (and
xen_restore_fl()) could be preempted and rescheduled on a different
VCPU in between clearing the mask and checking for pending
events.  This may result in events being lost, as the upcall will check
for pending events on the wrong VCPU.

Fix this by disabling preemption around the unmask and check for
events.
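
For illustration only, a simplified sketch of the race window in the
pre-patch xen_irq_enable() (not the actual kernel source; the VCPU
numbers are hypothetical):

    /* Task starts on VCPU 0 with CONFIG_PREEMPT enabled. */
    vcpu = this_cpu_read(xen_vcpu);      /* vcpu_info of VCPU 0 */
    vcpu->evtchn_upcall_mask = 0;        /* events now unmasked on VCPU 0 */

    /* <-- preemption point: task migrates to and resumes on VCPU 1 */

    mb();                                /* unmask then check */
    if (unlikely(vcpu->evtchn_upcall_pending))
            xen_force_evtchn_callback(); /* forces the upcall on VCPU 1 */

The forced upcall runs on whichever VCPU the task resumed on, so an
event already pending on VCPU 0 may go undelivered.  Holding
preempt_disable() across the unmask and the check keeps both steps on
the same VCPU.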

Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
---
 arch/x86/xen/irq.c |   25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
index 1a8d0d4..7a7a27d 100644
--- a/arch/x86/xen/irq.c
+++ b/arch/x86/xen/irq.c
@@ -47,23 +47,18 @@ static void xen_restore_fl(unsigned long flags)
        /* convert from IF type flag */
        flags = !(flags & X86_EFLAGS_IF);
 
-       /* There's a one instruction preempt window here.  We need to
-          make sure we're don't switch CPUs between getting the vcpu
-          pointer and updating the mask. */
+       /* See xen_irq_enable() for why preemption must be disabled. */
        preempt_disable();
        vcpu = this_cpu_read(xen_vcpu);
        vcpu->evtchn_upcall_mask = flags;
-       preempt_enable_no_resched();
-
-       /* Doesn't matter if we get preempted here, because any
-          pending event will get dealt with anyway. */
 
        if (flags == 0) {
-               preempt_check_resched();
                mb(); /* unmask then check (avoid races) */
                if (unlikely(vcpu->evtchn_upcall_pending))
                        xen_force_evtchn_callback();
-       }
+               preempt_enable();
+       } else
+               preempt_enable_no_resched();
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_restore_fl);
 
@@ -82,10 +77,12 @@ static void xen_irq_enable(void)
 {
        struct vcpu_info *vcpu;
 
-       /* We don't need to worry about being preempted here, since
-          either a) interrupts are disabled, so no preemption, or b)
-          the caller is confused and is trying to re-enable interrupts
-          on an indeterminate processor. */
+       /*
+        * We may be preempted as soon as vcpu->evtchn_upcall_mask is
+        * cleared, so disable preemption to ensure we check for
+        * events on the VCPU we are still running on.
+        */
+       preempt_disable();
 
        vcpu = this_cpu_read(xen_vcpu);
        vcpu->evtchn_upcall_mask = 0;
@@ -96,6 +93,8 @@ static void xen_irq_enable(void)
        mb(); /* unmask then check (avoid races) */
        if (unlikely(vcpu->evtchn_upcall_pending))
                xen_force_evtchn_callback();
+
+       preempt_enable();
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
