
[Xen-changelog] [xen stable-4.8] VMX: fix VMCS race on context-switch paths



commit 437a8e63adb3b2f819dd11557e65d9cda331c9b1
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Mon Feb 20 15:58:02 2017 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Mon Feb 20 15:58:02 2017 +0100

    VMX: fix VMCS race on context-switch paths
    
    When __context_switch() is being bypassed during original context
    switch handling, the vCPU "owning" the VMCS partially loses control of
    it: It will appear non-running to remote CPUs, and hence their attempt
    to pause the owning vCPU will have no effect on it (as it already
    looks to be paused). At the same time the "owning" CPU will re-enable
    interrupts eventually (at the latest when entering the idle loop) and
    hence becomes subject to IPIs from other CPUs requesting access to the
    VMCS. As a result, when __context_switch() finally gets run, the CPU
    may no longer have the VMCS loaded, and hence any accesses to it would
    fail. Hence we may need to re-load the VMCS in vmx_ctxt_switch_from().
    
    For consistency, also use the new function in vmx_do_resume(), to
    avoid leaving an open-coded incarnation of it around.
    
    Reported-by: Kevin Mayer <Kevin.Mayer@xxxxxxxx>
    Reported-by: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    Reviewed-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
    Tested-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
    master commit: 2f4d2198a9b3ba94c959330b5c94fe95917c364c
    master date: 2017-02-17 15:49:56 +0100
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 19 +++++++++++++++----
 xen/arch/x86/hvm/vmx/vmx.c         | 12 ++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h |  1 +
 3 files changed, 28 insertions(+), 4 deletions(-)
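
To make the race easier to see outside the Xen tree, here is a
hypothetical, stand-alone C model of the pattern the description above
outlines. This is not Xen code and every name in it is illustrative: a
"remote pCPU" thread takes the per-CPU VMCS away while its owner looks
idle, so the owner must re-check and reload before any access, mirroring
what vmx_vmcs_reload() does in the patch below.

/*
 * Hypothetical stand-alone model of the race -- not Xen code; all names
 * are illustrative.  The remote_pcpu thread plays the part of a remote
 * pCPU claiming the VMCS (as vmx_vmcs_enter() would) while the owning
 * pCPU looks idle; the owner must then reload before touching the VMCS.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int current_vmcs = 1;  /* models this_cpu(current_vmcs)  */
static const int   my_vmcs      = 1;  /* models v->arch.hvm_vmx.vmcs_pa */

/* Models vmx_vmcs_reload(): reload only if someone took the VMCS away. */
static void vmcs_reload(void)
{
    if (atomic_load(&current_vmcs) == my_vmcs)
        return;
    atomic_store(&current_vmcs, my_vmcs);  /* models vmx_load_vmcs(v) */
}

/* Models the remote pCPU: it sees the owner as not running, so its
 * vcpu_pause() returns immediately and it loads the VMCS for itself. */
static void *remote_pcpu(void *arg)
{
    (void)arg;
    atomic_store(&current_vmcs, 2);
    return NULL;
}

int main(void)
{
    pthread_t t;

    /* The steal is made deterministic here for illustration; in Xen it
     * happens via an IPI once the owner re-enables interrupts. */
    pthread_create(&t, NULL, remote_pcpu, NULL);
    pthread_join(&t, NULL);

    vmcs_reload();  /* the fix: without this, the access below hits VMCS 2 */
    printf("accessing VMCS %d (expected %d)\n",
           atomic_load(&current_vmcs), my_vmcs);
    return 0;
}

Build with "cc -pthread"; commenting out the vmcs_reload() call shows the
failure mode, with the final access landing on the stolen-away VMCS.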

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0995496..7879556 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -552,6 +552,20 @@ static void vmx_load_vmcs(struct vcpu *v)
     local_irq_restore(flags);
 }
 
+void vmx_vmcs_reload(struct vcpu *v)
+{
+    /*
+     * As we may be running with interrupts disabled, we can't acquire
+     * v->arch.hvm_vmx.vmcs_lock here. However, with interrupts disabled
+     * the VMCS can't be taken away from us anymore if we still own it.
+     */
+    ASSERT(v->is_running || !local_irq_is_enabled());
+    if ( v->arch.hvm_vmx.vmcs_pa == this_cpu(current_vmcs) )
+        return;
+
+    vmx_load_vmcs(v);
+}
+
 int vmx_cpu_up_prepare(unsigned int cpu)
 {
     /*
@@ -1652,10 +1666,7 @@ void vmx_do_resume(struct vcpu *v)
     bool_t debug_state;
 
     if ( v->arch.hvm_vmx.active_cpu == smp_processor_id() )
-    {
-        if ( v->arch.hvm_vmx.vmcs_pa != this_cpu(current_vmcs) )
-            vmx_load_vmcs(v);
-    }
+        vmx_vmcs_reload(v);
     else
     {
         /*
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 7b2c50c..8949ad6 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -896,6 +896,18 @@ static void vmx_ctxt_switch_from(struct vcpu *v)
     if ( unlikely(!this_cpu(vmxon)) )
         return;
 
+    if ( !v->is_running )
+    {
+        /*
+         * When this vCPU isn't marked as running anymore, a remote pCPU's
+         * attempt to pause us (from vmx_vmcs_enter()) won't have a reason
+         * to spin in vcpu_sleep_sync(), and hence that pCPU might have taken
+         * away the VMCS from us. As we're running with interrupts disabled,
+         * we also can't call vmx_vmcs_enter().
+         */
+        vmx_vmcs_reload(v);
+    }
+
     vmx_fpu_leave(v);
     vmx_save_guest_msrs(v);
     vmx_restore_host_msrs();
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 997f4f5..0dfd5f8 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -238,6 +238,7 @@ void vmx_destroy_vmcs(struct vcpu *v);
 void vmx_vmcs_enter(struct vcpu *v);
 bool_t __must_check vmx_vmcs_try_enter(struct vcpu *v);
 void vmx_vmcs_exit(struct vcpu *v);
+void vmx_vmcs_reload(struct vcpu *v);
 
 #define CPU_BASED_VIRTUAL_INTR_PENDING        0x00000004
 #define CPU_BASED_USE_TSC_OFFSETING           0x00000008
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.8
