
[Xen-devel] [PATCH] Nested VMX: prohibit virtual vmentry/vmexit during IO emulation



From: Yang Zhang <yang.z.zhang@xxxxxxxxx>

Sometimes L0 needs to decode an L2 instruction in order to handle an IO access
directly, and L0 may get X86EMUL_RETRY while handling that IO request. If a
virtual vmexit is pending at the same time (for example, an interrupt waiting
to be injected into L1), the hypervisor will switch the VCPU context from L2
to L1. We are then already in L1's context, but the earlier X86EMUL_RETRY
means the hypervisor will retry the IO request later, and unfortunately that
retry will happen in L1's context, which causes the problem.
The fix is that no virtual vmexit/vmentry is allowed while an IO request is
pending.

Signed-off-by: Yang Zhang <yang.z.zhang@xxxxxxxxx>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 41db52b..27119d5 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,8 +1394,16 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *p = get_ioreq(v);
 
     /*
+     * A pending IO emulation may not be finished yet. In that
+     * case no virtual vmswitch is allowed; otherwise the remaining
+     * IO emulation would be handled in the wrong VCPU context.
+     */
+    if ( p->state != STATE_IOREQ_NONE )
+        return;
+    /*
      * a softirq may interrupt us between a virtual vmentry is
      * just handled and the true vmentry. If during this window,
      * a L1 virtual interrupt causes another virtual vmexit, we
-- 
1.7.1
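
For reference, here is a minimal standalone sketch of the guard the patch
adds. The state values mirror xen/include/public/hvm/ioreq.h; the toy_vcpu
and toy_ioreq types and the main() driver are illustrative stand-ins, not
Xen's real structures.

#include <stdbool.h>
#include <stdio.h>

/* IO request states, as defined in xen/include/public/hvm/ioreq.h. */
enum ioreq_state {
    STATE_IOREQ_NONE      = 0, /* slot free: no emulation in flight  */
    STATE_IOREQ_READY     = 1, /* request posted to the device model */
    STATE_IOREQ_INPROCESS = 2, /* device model is handling it        */
    STATE_IORESP_READY    = 3, /* response ready, not yet consumed   */
};

/* Toy stand-in for Xen's per-vcpu ioreq slot (illustrative only). */
struct toy_ioreq { enum ioreq_state state; };
struct toy_vcpu  { struct toy_ioreq ioreq; };

/*
 * The essence of the patch: a virtual vmswitch between L1 and L2 must
 * be deferred while any IO request is in flight, because the eventual
 * X86EMUL_RETRY continuation has to run in the guest context that
 * originally started the emulation.
 */
static bool vmswitch_allowed(const struct toy_vcpu *v)
{
    return v->ioreq.state == STATE_IOREQ_NONE;
}

int main(void)
{
    struct toy_vcpu v = { .ioreq = { STATE_IOREQ_INPROCESS } };

    /* IO emulation still pending: the switch must wait. */
    printf("switch allowed while INPROCESS: %d\n", vmswitch_allowed(&v));

    /* Once the response has been consumed, the pending vmexit may run. */
    v.ioreq.state = STATE_IOREQ_NONE;
    printf("switch allowed once NONE:       %d\n", vmswitch_allowed(&v));
    return 0;
}

With this guard in place, nvmx_switch_guest() simply returns early and the
pending virtual vmexit is acted on in a later pass, after the IO emulation
has drained.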

