
Re: [Xen-devel] [PATCH RFC] x86/amd: Avoid cpu_has_hypervisor evaluating true on native hardware


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 28 Jan 2020 12:58:19 +0100
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
  • Delivery-date: Tue, 28 Jan 2020 11:58:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jan 28, 2020 at 11:21:14AM +0000, Andrew Cooper wrote:
> On 28/01/2020 10:39, Roger Pau Monné wrote:
> >
> >> This is one of two possible approaches, and both have their downsides.
> >> This one takes an extra hit on context switches between PV vcpus and
> >> idle/hvm, as they will usually differ in HYPERVISOR bit.
> >>
> >> The other approach is to order things more carefully so levelling is
> >> configured after scanning for cpuid bits, but that has the downside
> >> that it is very easy to regress.

Would it be enough to populate boot_cpu_data before setting
cpuidmask_defaults, so that at least the data in boot_cpu_data is not
tainted by the bits in cpuidmask_defaults?
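
Something like the following toy ordering is what I have in mind.  It's
only a stand-alone illustration, not actual Xen code, and all of the
names in it are made up:

#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for boot_cpu_data: just the leaf 1 feature words. */
static struct { uint32_t _1c, _1d; } boot_cpu_data_sketch;

/* Stand-in for cpuidmask_defaults._1cd (assuming ECX in the high half,
 * EDX in the low half). */
static uint64_t cpuidmask_defaults_1cd_sketch;

/* Step 1: record the raw, unlevelled host features. */
static void scan_host_features(void)
{
    unsigned int a, b, c, d;

    if ( __get_cpuid(1, &a, &b, &c, &d) )
    {
        boot_cpu_data_sketch._1c = c;
        boot_cpu_data_sketch._1d = d;
    }
}

/* Step 2: only now derive the levelling defaults, so the data recorded in
 * step 1 cannot be tainted by them. */
static void setup_levelling_defaults(void)
{
    cpuidmask_defaults_1cd_sketch =
        ((uint64_t)boot_cpu_data_sketch._1c << 32) | boot_cpu_data_sketch._1d;
}

int main(void)
{
    scan_host_features();
    setup_levelling_defaults();
    printf("defaults._1cd = %#llx\n",
           (unsigned long long)cpuidmask_defaults_1cd_sketch);
    return 0;
}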

I'm certainly not an expert in that area, so your judgment on the
least bad approach is likely more accurate than mine.

> >> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> >> index 5ed63ac10a..0627eb4e06 100644
> >> --- a/xen/arch/x86/domctl.c
> >> +++ b/xen/arch/x86/domctl.c
> >> @@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(domid_t domid, struct xen_domctl_gdbsx_memio *iop)
> >>  }
> >>  #endif
> >>  
> >> -static void domain_cpu_policy_changed(struct domain *d)
> >> +void domain_cpu_policy_changed(struct domain *d)
> >>  {
> >>      const struct cpuid_policy *p = d->arch.cpuid;
> >>      struct vcpu *v;
> >> @@ -106,6 +106,13 @@ static void domain_cpu_policy_changed(struct domain *d)
> >>                      ecx = 0;
> >>                  edx = cpufeat_mask(X86_FEATURE_APIC);
> >>  
> >> +                /*
> >> +                 * If the Hypervisor bit is set in the policy, we can also
> >> +                 * forward it into real CPUID.
> >> +                 */
> >> +                if ( p->basic.hypervisor )
> >> +                    ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
> > Since the hypervisor bit will be part of both the HVM and PV max
> > policies, why do you need to explicitly allow it here?
> >
> > Won't it be naturally added to the guest policy as the rest of
> > features?
> 
> cpuidmask_defaults serves as an and-mask over any content the guest
> policy selects.
>
> This is because, in the AMD case, these are actually override MSRs
> rather than mask MSRs.  Care has to be taken not to advertise any
> features the pipeline can't deliver on, but it is also the property
> which allows us to advertise the HYPERVISOR bit in general.

Oh, so on AMD cpuidmask_defaults is not a mask: it can also force
features to appear in CPUID even when they are not present in the
hardware CPUID.
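
Just to check I have the semantics right, this is the distinction as I
understand it, written as a stand-alone toy rather than Xen code (the
values are arbitrary, except that bit 31 of leaf 1 ECX is the HYPERVISOR
bit):

#include <stdint.h>
#include <stdio.h>

/* A mask MSR can only clear feature bits... */
static uint32_t apply_mask(uint32_t hw, uint32_t msr)
{
    return hw & msr;
}

/* ...while an override MSR replaces the leaf outright, so it can also
 * assert bits the hardware never reported. */
static uint32_t apply_override(uint32_t hw, uint32_t msr)
{
    (void)hw;
    return msr;
}

int main(void)
{
    uint32_t hw_1c  = 0x00000010; /* hardware leaf 1 ECX: only bit 4 set */
    uint32_t msr_1c = 0x80000010; /* levelling value also asserts bit 31 */

    printf("mask:     %#010x\n", apply_mask(hw_1c, msr_1c));     /* 0x00000010 */
    printf("override: %#010x\n", apply_override(hw_1c, msr_1c)); /* 0x80000010 */
    return 0;
}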

Would it make sense to AND cpuidmask_defaults with the real hardware
policy before writing it to the MSR, in order to prevent features not
present in the hardware policy from magically appearing?
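
As a rough sketch of what I mean, again as a stand-alone toy rather than
a patch (MSR_K8_FEATURE_MASK and the ECX-high/EDX-low layout are my
assumptions about the leaf 1 override, and the MSR write is simulated
with a printf):

#include <cpuid.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MSR_K8_FEATURE_MASK 0xc0011004u /* assumed leaf 1 ECX/EDX override MSR */

/* Stand-in for the privileged wrmsr Xen would perform. */
static void write_override_msr(uint32_t msr, uint64_t val)
{
    printf("wrmsr(%#" PRIx32 ") <- %#018" PRIx64 "\n", msr, val);
}

int main(void)
{
    unsigned int a, b, c, d;
    uint64_t defaults_1cd = ~0ULL; /* placeholder for cpuidmask_defaults._1cd */

    if ( !__get_cpuid(1, &a, &b, &c, &d) )
        return 1;

    /* Real hardware policy for leaf 1: ECX in the high half, EDX in the low. */
    uint64_t host_1cd = ((uint64_t)c << 32) | d;

    /* AND before writing, so the override can hide features but never invent
     * bits the hardware does not report. */
    write_override_msr(MSR_K8_FEATURE_MASK, defaults_1cd & host_1cd);

    return 0;
}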

Thanks, Roger.


 

