
Re: [Xen-devel] [PATCH v4 2/2] x86/Intel: virtualize support for cpuid faulting



> From: Kyle Huey [mailto:me@xxxxxxxxxxxx]
> Sent: Tuesday, October 18, 2016 2:51 AM
> 
> On HVM guests, the cpuid triggers a vm exit, so we can check the emulated
> faulting state in vmx_do_cpuid and hvmemul_cpuid. A new function,
> hvm_check_cpuid_fault will check if cpuid faulting is enabled and the CPL > 0.
> When it returns true, the cpuid handling functions will inject a GP(0). Notably
> no hardware support for faulting on cpuid is necessary to emulate support with
> an HVM guest.

The last sentence is confusing. Isn't "the cpuid triggers a vm exit" itself a form of
hardware support?
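
For reference, the guest-visible interface being virtualized here is the MSR pair
from the SDM: MSR_PLATFORM_INFO (0xCE) bit 31 advertises CPUID faulting, and the
MSR at 0x140 (MISC_FEATURES_ENABLES) bit 0 turns it on, after which CPUID executed
at CPL > 0 takes a #GP(0). A minimal guest-kernel sketch (the macro and helper
names below are mine, not from this patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* MSR numbers/bits per the SDM; the macro names are only for this sketch. */
    #define MSR_PLATFORM_INFO               0x000000ce
    #define PLATFORM_INFO_CPUID_FAULTING    (1ull << 31)
    #define MSR_MISC_FEATURES_ENABLES       0x00000140
    #define MISC_FEATURES_CPUID_FAULTING    (1ull << 0)

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;

        asm volatile ( "rdmsr" : "=a" (lo), "=d" (hi) : "c" (msr) );
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile ( "wrmsr" :: "c" (msr), "a" ((uint32_t)val),
                       "d" ((uint32_t)(val >> 32)) );
    }

    /* Returns true if faulting was advertised and has now been enabled. */
    static bool enable_cpuid_faulting(void)
    {
        if ( !(rdmsr(MSR_PLATFORM_INFO) & PLATFORM_INFO_CPUID_FAULTING) )
            return false;

        wrmsr(MSR_MISC_FEATURES_ENABLES,
              rdmsr(MSR_MISC_FEATURES_ENABLES) | MISC_FEATURES_CPUID_FAULTING);
        return true;
    }

From that point on a CPUID at CPL > 0 raises #GP(0), which is exactly what this
patch reproduces for HVM guests via the CPUID VM exit rather than relying on the
hardware feature.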

> 
> On PV guests, hardware support is required so that userspace cpuid will trap
> to Xen. Xen already enables cpuid faulting on supported CPUs for pv guests (that
> aren't the control domain, see the comment in intel_ctxt_switch_levelling).
> Every PV guest cpuid will trap via a GP(0) to emulate_privileged_op (via
> do_general_protection). Once there we simply decline to emulate cpuid if the
> CPL > 0 and faulting is enabled, leaving the GP(0) for the guest kernel to
> handle.
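
(For completeness, since the traps.c hunk is not quoted below: my understanding of
the PV-side check described here is roughly the fragment below. It is reconstructed
rather than copied from the patch, using the existing guest_kernel_mode() test to
express "CPL > 0" for a PV guest.)

        case 0xa2: /* CPUID */
            /* Decline to emulate: the guest kernel gets the original #GP(0). */
            if ( v->arch.cpuid_fault && !guest_kernel_mode(v, regs) )
                goto fail;
            pv_cpuid(regs);
            break;
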
> 
> Signed-off-by: Kyle Huey <khuey@xxxxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/emulate.c    | 19 +++++++++++++++++++
>  xen/arch/x86/hvm/hvm.c        | 14 ++++++++++++++
>  xen/arch/x86/hvm/vmx/vmx.c    | 20 ++++++++++++++++++--
>  xen/arch/x86/traps.c          | 34 ++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/domain.h  |  3 +++
>  xen/include/asm-x86/hvm/hvm.h |  1 +
>  6 files changed, 89 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 6ed7486..a713ff3 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1544,16 +1544,35 @@ static int hvmemul_wbinvd(
> 
>  static int hvmemul_cpuid(
>      unsigned int *eax,
>      unsigned int *ebx,
>      unsigned int *ecx,
>      unsigned int *edx,
>      struct x86_emulate_ctxt *ctxt)
>  {
> +    /*
> +     * x86_emulate uses this function to query CPU features for its own internal
> +     * use. Make sure we're actually emulating CPUID before emulating CPUID
> +     * faulting.
> +     */
> +    if ( ctxt->opcode == X86EMUL_OPC(0x0f, 0xa2) &&
> +         hvm_check_cpuid_fault(current) ) {
> +        struct hvm_emulate_ctxt *hvmemul_ctxt =
> +            container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
> +
> +        hvmemul_ctxt->exn_pending = 1;
> +        hvmemul_ctxt->trap.vector = TRAP_gp_fault;
> +        hvmemul_ctxt->trap.type = X86_EVENTTYPE_HW_EXCEPTION;
> +        hvmemul_ctxt->trap.error_code = 0;
> +        hvmemul_ctxt->trap.insn_len = 0;
> +
> +        return X86EMUL_EXCEPTION;

I'm unclear about this change. So once the guest enables CPUID faulting, emulation
of other instructions that require an internal cpuid query in x86_emulate will lead
to a GP(0) being injected into the guest... Is this behavior change expected? Sorry
if I overlooked something here.

> +    }
> +
>      hvm_funcs.cpuid_intercept(eax, ebx, ecx, edx);
>      return X86EMUL_OKAY;
>  }
> 
>  static int hvmemul_inject_hw_exception(
>      uint8_t vector,
>      int32_t error_code,
>      struct x86_emulate_ctxt *ctxt)
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 3c90ecd..cf81e64 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3675,16 +3675,30 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
>          hvm_cpuid(0x80000001, NULL, NULL, NULL, &_edx);
>          *eax |= (_edx & cpufeat_mask(X86_FEATURE_LM) ? vaddr_bits : 32) << 8;
> 
>          *ebx &= hvm_featureset[FEATURESET_e8b];
>          break;
>      }
>  }
> 
> +bool hvm_check_cpuid_fault(struct vcpu *v)

To align with the SDM, it would be better to use the full "cpuid_faulting". The same
goes for the other places.

> +{
> +    struct segment_register sreg;
> +
> +    if ( !v->arch.cpuid_fault )
> +        return false;
> +
> +    hvm_get_segment_register(v, x86_seg_ss, &sreg);
> +    if ( sreg.attr.fields.dpl == 0 )
> +        return false;
> +
> +    return true;
> +}
> +
>  static uint64_t _hvm_rdtsc_intercept(void)
>  {
>      struct vcpu *curr = current;
>  #if !defined(NDEBUG) || defined(CONFIG_PERF_COUNTERS)
>      struct domain *currd = curr->domain;
> 
>      if ( currd->arch.vtsc )
>          switch ( hvm_guest_x86_mode(curr) )
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index b9102ce..228c1b9 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2428,16 +2428,21 @@ static void vmx_cpuid_intercept(
>      HVMTRACE_5D (CPUID, input, *eax, *ebx, *ecx, *edx);
>  }
> 
>  static int vmx_do_cpuid(struct cpu_user_regs *regs)
>  {
>      unsigned int eax, ebx, ecx, edx;
>      unsigned int leaf, subleaf;
> 
> +    if ( hvm_check_cpuid_fault(current) ) {
> +        hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +        return 1;  /* Don't advance the guest IP! */
> +    }
> +
>      eax = regs->eax;
>      ebx = regs->ebx;
>      ecx = regs->ecx;
>      edx = regs->edx;
> 
>      leaf = regs->eax;
>      subleaf = regs->ecx;
> 
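
The "Don't advance the guest IP!" part matters because the injected #GP(0) has fault
semantics: the guest kernel's handler expects the saved IP to point at the CPUID
instruction itself, so it can recognise it and then emulate it, signal the task, or
skip it on its own. A purely hypothetical guest-side fragment (none of these names
come from Linux or from this patch) to illustrate:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct fault_frame { uintptr_t ip; };   /* stand-in for the real trap frame */

    /* Hypothetical helper: copy instruction bytes from the faulting IP. */
    static int fetch_insn(uintptr_t ip, uint8_t *buf, size_t len)
    {
        memcpy(buf, (const void *)ip, len);
        return 0;
    }

    static bool fixup_cpuid_fault(struct fault_frame *regs)
    {
        uint8_t insn[2];

        if ( fetch_insn(regs->ip, insn, sizeof(insn)) )
            return false;

        if ( insn[0] != 0x0f || insn[1] != 0xa2 )
            return false;           /* not CPUID: normal #GP handling applies */

        /* Emulate CPUID for the task (or deliver a signal), then skip it. */
        regs->ip += 2;
        return true;
    }

If Xen advanced the IP before injecting, the guest handler would see the saved IP
pointing past the instruction, breaking exactly this kind of fixup.
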
> @@ -2694,19 +2699,23 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>      case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
>      case MSR_IA32_PEBS_ENABLE:
>      case MSR_IA32_DS_AREA:
>          if ( vpmu_do_rdmsr(msr, msr_content) )
>              goto gp_fault;
>          break;
> 
>      case MSR_INTEL_PLATFORM_INFO:
> -        if ( rdmsr_safe(MSR_INTEL_PLATFORM_INFO, *msr_content) )
> -            goto gp_fault;
> +        *msr_content = MSR_PLATFORM_INFO_CPUID_FAULTING;
> +        break;
> +

So you plan to always expose virtual cpuid faulting regardless of whether the
physical capability exists... That's not a technical problem, since a CPUID-induced
VM exit is a superset, but I want to confirm the intention here. :-)
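
For comparison, a stricter (hypothetical) variant that kept the dependency on the
host MSR would look roughly like the fragment below, only advertising the bit when
the underlying CPU also reports it; the patch instead drops the host read entirely:

    case MSR_INTEL_PLATFORM_INFO:
        if ( rdmsr_safe(MSR_INTEL_PLATFORM_INFO, *msr_content) )
            goto gp_fault;
        *msr_content &= MSR_PLATFORM_INFO_CPUID_FAULTING;
        break;

As said above, unconditionally setting the bit works too, because CPUID always
VM-exits for HVM guests and Xen can deliver the #GP(0) itself.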

Thanks
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

