
Re: [Xen-devel] [PATCH for-4.11] x86/cacheattr: fix mtrr_pat_not_equal



On Thu, May 17, 2018 at 04:44:04AM -0600, Jan Beulich wrote:
> >>> On 17.05.18 at 11:48, <roger.pau@xxxxxxxxxx> wrote:
> > The function is supposed to return whether the MTRR and PAT state of
> > two CPUs match. Currently this is not properly done because the test
> > for the deftype and the enable bits required both the deftype and the
> > enable bits to be different, while just one of those fields being
> > different can already cause the MTRR states on the vCPU to not match.
> > 
> > Fix this by changing the AND into an OR instead, so that either the
> > deftype or the enabled bits being different will cause the function to
> > return mismatching state.
> 
> This is by far not enough, but I didn't view the function as critical
> enough to warrant sending out the patch I have right away.

I've also realized that the logic there is wonky and would return true
in cases where the states are effectively equal (e.g. if the fixed MTRR
contents differ but FE is disabled).

Just wanted to do a minimal change that prevents wrongly reporting
that the state is equal when it's not (I think the other way around is
not that critical).
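
For reference (copied from the hunk below), the check in question
currently reads:

    /* Test default type MSR. */
    if ( (md->def_type != ms->def_type)
            && (md->enabled != ms->enabled) )
        return 1;

and the minimal fix just turns that && into an ||, so that a difference
in either the deftype or the enable bits is enough to report the states
as not matching.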

Your change LGTM, and it fixes some obvious cases where the current
code would return true even though the cache state is the same.

> Jan
> x86/HVM: correct mtrr_pat_not_equal()
> 
> The two vCPU-s differing in MTRR-enabled state means MTRR settings are
> not equal. Both vCPU-s having MTRRs disabled means only PAT needs to be
> compared. Along those lines for fixed range MTRRs. Differing variable
> range counts likewise mean settings are different overall.
> 
> Constify types and convert bool_t to bool.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
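
One note for the archives: my understanding (worth double-checking
against mtrr_def_type_msr_set()) is that mtrr_state.enabled caches bits
10-11 of the MTRRdefType MSR, roughly:

    /* MTRRdefType MSR: bit 11 = E (MTRRs enabled), bit 10 = FE (fixed ranges enabled). */
    m->enabled = (msr_content >> 10) & 0x3;    /* so bit 1 = E, bit 0 = FE */

which is why the checks below test (md->enabled ^ ms->enabled) & 2 for
the global enable and & 1 for the fixed-range enable.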

> --- unstable.orig/xen/arch/x86/hvm/mtrr.c
> +++ unstable/xen/arch/x86/hvm/mtrr.c
> @@ -476,35 +476,40 @@ bool_t mtrr_var_range_msr_set(
>      return 1;
>  }
>  
> -bool_t mtrr_pat_not_equal(struct vcpu *vd, struct vcpu *vs)
> +bool mtrr_pat_not_equal(const struct vcpu *vd, const struct vcpu *vs)
>  {
> -    struct mtrr_state *md = &vd->arch.hvm_vcpu.mtrr;
> -    struct mtrr_state *ms = &vs->arch.hvm_vcpu.mtrr;
> -    int32_t res;
> -    uint8_t num_var_ranges = (uint8_t)md->mtrr_cap;
> -
> -    /* Test fixed ranges. */
> -    res = memcmp(md->fixed_ranges, ms->fixed_ranges,
> -            NUM_FIXED_RANGES*sizeof(mtrr_type));
> -    if ( res )
> -        return 1;
> -
> -    /* Test var ranges. */
> -    res = memcmp(md->var_ranges, ms->var_ranges,
> -            num_var_ranges*sizeof(struct mtrr_var_range));
> -    if ( res )
> -        return 1;
> -
> -    /* Test default type MSR. */
> -    if ( (md->def_type != ms->def_type)
> -            && (md->enabled != ms->enabled) )
> -        return 1;
> +    const struct mtrr_state *md = &vd->arch.hvm_vcpu.mtrr;
> +    const struct mtrr_state *ms = &vs->arch.hvm_vcpu.mtrr;
>  
> -    /* Test PAT. */
> -    if ( vd->arch.hvm_vcpu.pat_cr != vs->arch.hvm_vcpu.pat_cr )
> -        return 1;
> +    if ( (md->enabled ^ ms->enabled) & 2 )
> +        return true;
> +
> +    if ( md->enabled & 2 )
> +    {
> +        unsigned int num_var_ranges = (uint8_t)md->mtrr_cap;
> +
> +        /* Test default type MSR. */
> +        if ( md->def_type != ms->def_type )
> +            return true;
> +
> +        /* Test fixed ranges. */
> +        if ( (md->enabled ^ ms->enabled) & 1 )
> +            return true;
> +
> +        if ( (md->enabled & 1) &&
> +             memcmp(md->fixed_ranges, ms->fixed_ranges,
> +                    sizeof(md->fixed_ranges)) )
> +            return true;
> +
> +        /* Test variable ranges. */
> +        if ( num_var_ranges != (uint8_t)ms->mtrr_cap ||

Is it really possible to have two vCPUs on the same domain with a
different number of variable ranges?
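
(AFAICT num_var_ranges here is just the low byte of MTRRcap, i.e. VCNT:

    unsigned int num_var_ranges = (uint8_t)md->mtrr_cap;   /* MTRRcap.VCNT */

so unless MTRRcap can somehow differ between vCPUs of the same domain,
I would expect the counts to always match.)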

> +             memcmp(md->var_ranges, ms->var_ranges,
> +                    num_var_ranges * sizeof(*md->var_ranges)) )
> +            return true;
> +    }
>  
> -    return 0;
> +    /* Test PAT. */
> +    return vd->arch.hvm_vcpu.pat_cr != vs->arch.hvm_vcpu.pat_cr;
>  }
>  
>  struct hvm_mem_pinned_cacheattr_range {
> --- unstable.orig/xen/include/asm-x86/mtrr.h
> +++ unstable/xen/include/asm-x86/mtrr.h
> @@ -92,6 +92,6 @@ extern void memory_type_changed(struct d
>  extern bool_t pat_msr_set(uint64_t *pat, uint64_t msr);
>  
>  bool_t is_var_mtrr_overlapped(const struct mtrr_state *m);
> -bool_t mtrr_pat_not_equal(struct vcpu *vd, struct vcpu *vs);
> +bool mtrr_pat_not_equal(const struct vcpu *vd, const struct vcpu *vs);
>  
>  #endif /* __ASM_X86_MTRR_H__ */
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

