[XEN PATCH v3 09/16] x86/traps: guard VMX-specific functions with using_vmx macro
From: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>

Replace the cpu_has_vmx check with using_vmx, so that not only VMX support
in the CPU is checked, but also that the functions vmx_vmcs_enter() and
vmx_vmcs_exit() are present in the build. Also, since using_vmx checks
CONFIG_VMX, which depends on CONFIG_HVM, the #ifdef CONFIG_HVM guards
around these checks can be dropped.

Signed-off-by: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
---
changes in v3:
 - using_vmx instead of IS_ENABLED(CONFIG_VMX)
 - updated description
---
 xen/arch/x86/traps.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 9906e874d5..a81f3cf57c 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -676,7 +676,6 @@ void vcpu_show_execution_state(struct vcpu *v)
 
     vcpu_pause(v); /* acceptably dangerous */
 
-#ifdef CONFIG_HVM
     /*
      * For VMX special care is needed: Reading some of the register state will
      * require VMCS accesses. Engaging foreign VMCSes involves acquiring of a
@@ -684,12 +683,11 @@ void vcpu_show_execution_state(struct vcpu *v)
      * region. Despite this being a layering violation, engage the VMCS right
      * here. This then also avoids doing so several times in close succession.
      */
-    if ( cpu_has_vmx && is_hvm_vcpu(v) )
+    if ( using_vmx && is_hvm_vcpu(v) )
     {
         ASSERT(!in_irq());
         vmx_vmcs_enter(v);
     }
-#endif
 
     /* Prevent interleaving of output. */
     flags = console_lock_recursive_irqsave();
@@ -714,10 +712,8 @@ void vcpu_show_execution_state(struct vcpu *v)
         console_unlock_recursive_irqrestore(flags);
     }
 
-#ifdef CONFIG_HVM
-    if ( cpu_has_vmx && is_hvm_vcpu(v) )
+    if ( using_vmx && is_hvm_vcpu(v) )
         vmx_vmcs_exit(v);
-#endif
 
     vcpu_unpause(v);
 }
--
2.25.1
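
For reference, below is a minimal standalone sketch of how a using_vmx-style
predicate can fold the build-time Kconfig option together with the runtime CPU
capability. This is an illustration only, not the definition introduced by this
series: CONFIG_VMX, IS_ENABLED() and cpu_has_vmx here are simplified stand-ins.

/*
 * Sketch (assumption, not the series' actual code): a single predicate that
 * is false when the feature is compiled out, and otherwise reflects whether
 * the CPU supports it, so callers need neither #ifdef guards nor a bare
 * cpu_has_vmx test.
 */
#include <stdbool.h>
#include <stdio.h>

#define CONFIG_VMX 1                  /* stand-in for the Kconfig symbol */
#define IS_ENABLED(option) (option)   /* simplified stand-in for Xen's macro */

static bool cpu_has_vmx = true;       /* stand-in for the CPUID-derived flag */

/* False at compile time when CONFIG_VMX is off, true only with CPU support. */
#define using_vmx (IS_ENABLED(CONFIG_VMX) && cpu_has_vmx)

int main(void)
{
    if ( using_vmx )
        printf("VMX built in and supported by this CPU\n");
    else
        printf("VMX not available\n");

    return 0;
}

With a predicate of this shape the compiler can discard VMX-only calls such as
vmx_vmcs_enter()/vmx_vmcs_exit() entirely when CONFIG_VMX is off, which is why
the explicit #ifdef CONFIG_HVM guards in the hunks above become unnecessary.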