Re: [XEN PATCH v2 12/15] x86/vmx: guard access to cpu_has_vmx_* in common code
On Wed, 15 May 2024, Sergiy Kibrik wrote:
> There are several places in common code, outside of arch/x86/hvm/vmx,
> where cpu_has_vmx_* macros are accessed without first checking whether
> VMX is present. We may want to guard these macros, as they read global
> variables defined inside VMX-specific files -- so VMX can be made
> optional later on.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
> CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> Here I've tried a different approach from the previous patches [1,2] --
> instead of modifying the whole set of cpu_has_{svm/vmx}_* macros, we can:
> 1) not touch the SVM part at all, because, as Andrew pointed out, those
> macros are used only inside arch/x86/hvm/svm;
> 2) track down the several places in common code where cpu_has_vmx_*
> features are checked, and guard those checks with the cpu_has_vmx
> condition;
> 3) for the two cpu_has_vmx_* macros that common code checks in a slightly
> trickier way, integrate the cpu_has_vmx condition into the macros
> themselves, instead of making the complex conditionals even more
> complicated.
>
> This patch aims to replace [1,2] from the v1 series by doing the steps
> above.
>
> 1. https://lore.kernel.org/xen-devel/20240416064402.3469959-1-Sergiy_Kibrik@xxxxxxxx/
> 2. https://lore.kernel.org/xen-devel/20240416064606.3470052-1-Sergiy_Kibrik@xxxxxxxx/
I am missing some of the previous discussions, but why can't we just fix
all of the cpu_has_vmx_* #defines in vmcs.h to also check for
cpu_has_vmx?
That seems easier and simpler than adding ad-hoc checks at the invocation
sites.
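
Something along these lines, as a rough sketch only (the vmx_ctrl()
helper name is made up here for illustration; cpu_has_vmx and the control
variables/flags are the ones from the quoted patch):

    /* vmcs.h -- sketch: fold the cpu_has_vmx guard into every feature check */
    #define vmx_ctrl(var, flag) (cpu_has_vmx && ((var) & (flag)))

    #define cpu_has_vmx_msr_bitmap \
        vmx_ctrl(vmx_cpu_based_exec_control, CPU_BASED_ACTIVATE_MSR_BITMAP)
    #define cpu_has_vmx_virt_exceptions \
        vmx_ctrl(vmx_secondary_exec_control, SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)

With the guard applied once in the #defines, every cpu_has_vmx_* user
would get it for free, and the call sites in common code could stay
unchanged.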
> ---
> changes in v2:
> - do not touch SVM code and macros
> - drop vmx_ctrl_has_feature()
> - guard cpu_has_vmx_* macros in common code instead
> changes in v1:
> - introduced helper routine vmx_ctrl_has_feature() and used it for all
> cpu_has_vmx_* macros
> ---
> xen/arch/x86/hvm/hvm.c | 2 +-
> xen/arch/x86/hvm/viridian/viridian.c | 4 ++--
> xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 4 ++--
> xen/arch/x86/traps.c | 5 +++--
> 4 files changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 9594e0a5c5..ab75de9779 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5180,7 +5180,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
> {
> case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON:
> case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF:
> - if ( !cpu_has_monitor_trap_flag )
> + if ( !cpu_has_vmx || !cpu_has_monitor_trap_flag )
> return -EOPNOTSUPP;
> break;
> default:
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index 0496c52ed5..657c6a3ea7 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -196,7 +196,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
> res->a = CPUID4A_RELAX_TIMER_INT;
> if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
> res->a |= CPUID4A_HCALL_REMOTE_TLB_FLUSH;
> - if ( !cpu_has_vmx_apic_reg_virt )
> + if ( !cpu_has_vmx || !cpu_has_vmx_apic_reg_virt )
> res->a |= CPUID4A_MSR_BASED_APIC;
> if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
> res->a |= CPUID4A_SYNTHETIC_CLUSTER_IPI;
> @@ -236,7 +236,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
>
> case 6:
> /* Detected and in use hardware features. */
> - if ( cpu_has_vmx_virtualize_apic_accesses )
> + if ( cpu_has_vmx && cpu_has_vmx_virtualize_apic_accesses )
> res->a |= CPUID6A_APIC_OVERLAY;
> if ( cpu_has_vmx_msr_bitmap || (read_efer() & EFER_SVME) )
> res->a |= CPUID6A_MSR_BITMAPS;
> diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> index 58140af691..aa05f9cf6e 100644
> --- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
> @@ -306,7 +306,7 @@ extern u64 vmx_ept_vpid_cap;
> #define cpu_has_vmx_vnmi \
> (vmx_pin_based_exec_control & PIN_BASED_VIRTUAL_NMIS)
> #define cpu_has_vmx_msr_bitmap \
> - (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
> + (cpu_has_vmx && vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
> #define cpu_has_vmx_secondary_exec_control \
> (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
> #define cpu_has_vmx_tertiary_exec_control \
> @@ -347,7 +347,7 @@ extern u64 vmx_ept_vpid_cap;
> #define cpu_has_vmx_vmfunc \
> (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VM_FUNCTIONS)
> #define cpu_has_vmx_virt_exceptions \
> - (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)
> + (cpu_has_vmx && vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)
> #define cpu_has_vmx_pml \
> (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_PML)
> #define cpu_has_vmx_mpx \
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 7b8ee45edf..3595bb379a 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1130,7 +1130,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
> if ( !is_hvm_domain(d) || subleaf != 0 )
> break;
>
> - if ( cpu_has_vmx_apic_reg_virt )
> + if ( cpu_has_vmx && cpu_has_vmx_apic_reg_virt )
> res->a |= XEN_HVM_CPUID_APIC_ACCESS_VIRT;
>
> /*
> @@ -1139,7 +1139,8 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
> * and wrmsr in the guest will run without VMEXITs (see
> * vmx_vlapic_msr_changed()).
> */
> - if ( cpu_has_vmx_virtualize_x2apic_mode &&
> + if ( cpu_has_vmx &&
> + cpu_has_vmx_virtualize_x2apic_mode &&
> cpu_has_vmx_apic_reg_virt &&
> cpu_has_vmx_virtual_intr_delivery )
> res->a |= XEN_HVM_CPUID_X2APIC_VIRT;
> --
> 2.25.1
>