
Re: [Xen-devel] [PATCH 2/6] x86/hvm: Handle viridian MSRs via the new guest_{rd,wr}msr() infrastructure



> -----Original Message-----
> From: Andrew Cooper
> Sent: 28 February 2018 18:22
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Xen-devel <xen-
> devel@xxxxxxxxxxxxx>
> Cc: Jan Beulich <JBeulich@xxxxxxxx>; Jun Nakajima
> <jun.nakajima@xxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>; Boris
> Ostrovsky <boris.ostrovsky@xxxxxxxxxx>; Suravee Suthikulpanit
> <suravee.suthikulpanit@xxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Roger
> Pau Monne <roger.pau@xxxxxxxxxx>; Sergey Dyasli
> <sergey.dyasli@xxxxxxxxxx>
> Subject: Re: [PATCH 2/6] x86/hvm: Handle viridian MSRs via the new
> guest_{rd,wr}msr() infrastructure
> 
> On 27/02/18 14:38, Paul Durrant wrote:
> >> @@ -698,13 +697,11 @@ void viridian_time_ref_count_thaw(struct domain *d)
> >>          trc->off = (int64_t)trc->val - raw_trc_val(d);
> >>  }
> >>
> >> -int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
> >> +int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
> >>  {
> >> -    struct vcpu *v = current;
> >>      struct domain *d = v->domain;
> >> -
> >> -    if ( !is_viridian_domain(d) )
> >> -        return 0;
> >> +
> >> +    ASSERT(is_viridian_domain(d));
> >>
> >>      switch ( idx )
> >>      {
> >> @@ -725,7 +722,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
> >>
> >>      case HV_X64_MSR_TSC_FREQUENCY:
> >>          if ( viridian_feature_mask(d) & HVMPV_no_freq )
> >> -            return 0;
> >> +            goto gp_fault;
> >>
> >>          perfc_incr(mshv_rdmsr_tsc_frequency);
> >>          *val = (uint64_t)d->arch.tsc_khz * 1000ull;
> >> @@ -733,7 +730,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
> >>
> >>      case HV_X64_MSR_APIC_FREQUENCY:
> >>          if ( viridian_feature_mask(d) & HVMPV_no_freq )
> >> -            return 0;
> >> +            goto gp_fault;
> >>
> >>          perfc_incr(mshv_rdmsr_apic_frequency);
> >>          *val = 1000000000ull / APIC_BUS_CYCLE_NS;
> >> @@ -757,7 +754,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
> >>
> >>      case HV_X64_MSR_REFERENCE_TSC:
> >>          if ( !(viridian_feature_mask(d) & HVMPV_reference_tsc) )
> >> -            return 0;
> > I have a recollection that, for at least one version of Windows, enabling
> debug mode makes it read the reference TSC MSR regardless of whether the
> feature is enabled, so this change may well cause guest boot failures.
> > In general I would be wary of #GP faulting where the current code does
> not. I think the current code is almost certainly too liberal, even in the
> face of buggy versions of Windows, but the new code might be too
> conservative. It will need some testing.
> >
> > In principle though...
> >
> > Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> 
> The current code is absolutely wrong, because it falls back into the
> default path and continues looking for a result.  On the read side, that
> ends up leaking in L$N-1 hypervisor leaves if they are present, while
> the write side ends up discarding the result.
> 
> ISTR it was only a single pre-release build of Windows which failed to
> check for the TSC feature, so I'm not sure we need to worry.
> 

No, ISTR that turning debug mode on in recent Windows kernels makes them assume 
they have the timer MSR. (The bad breakage in pre-release Windows 8 was that it 
assumed it had a SynIC just because it saw the basic viridian CPUID leaves).

> If we do find that it is a problem in practice, then the correct course
> of action is to explicitly fill with 0 and return X86EMUL_OKAY, which at
> least means that we've dealt with the request.
> 
> I've booted Win7 and Win10 with the code in this state.  Are you happy
> for us to go with this provisionally, and revert back to an explicit
> discard if we encounter problems?
> 

Yes, that's fine. If 7 and 10 are happy (in debug mode) then I'm happy. As you 
say, we'll just deal with any fallout on a case-by-case basis.

Cheers,

  Paul

> ~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

