
Re: [Xen-devel] [PATCH RFC 26/31] xen/x86: Rework AMD masking MSR setup



On 22/01/16 11:13, Jan Beulich wrote:
>>>> On 22.01.16 at 12:01, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 22/01/16 09:27, Jan Beulich wrote:
>>>>>> On 16.12.15 at 22:24, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> +         expected_levelling_cap, levelling_caps,
>>>> +         (expected_levelling_cap ^ levelling_caps) & levelling_caps);
>>>> +  printk(XENLOG_WARNING "Fam %#x, model %#x level %#x\n",
>>>> +         c->x86, c->x86_model, c->cpuid_level);
>>>> +  printk(XENLOG_WARNING
>>>> +         "If not running virtualised, please report a bug\n");
>>> Well - you checked for running virtualized, so making it here when
>>> running virtualized is already a bug (albeit not in the code here)?
>> Why would it be a bug?  The virtualising environment might provide these
>> MSRs, in which case we should use them.
> Because you won't make it here when cpu_has_hypervisor.

Not all hypervisors advertise themselves with the hypervisor bit.  Some
versions of Hyper-V and ESXi will panic if they see the hypervisor bit
set, so virtualisation software needs to hide the bit to successfully
virtualise such guests.

The user reading Xen's error message is in a better position to judge
whether Xen is actually virtualised, because Xen itself cannot tell
with certainty.
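For illustration only (this is not code from the patch): the check being
discussed boils down to CPUID leaf 1, ECX bit 31, the "hypervisor
present" bit.  A minimal standalone sketch, assuming a GCC/Clang x86
toolchain with the standard <cpuid.h> helper:

```c
#include <cpuid.h>

/* Returns 1 if CPUID leaf 1 reports the "hypervisor present" bit
 * (ECX bit 31), 0 otherwise.  As noted above, a hypervisor that
 * hides this bit (to appease guests which panic on seeing it) makes
 * this return 0 even when actually running virtualised, which is
 * why the warning message defers the final judgement to the user. */
static int cpuid_hypervisor_bit(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* __get_cpuid() returns 0 if the leaf is unsupported. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;

    return (ecx >> 31) & 1;
}
```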

>
>>>> +void amd_ctxt_switch_levelling(const struct domain *nextd)
>>>> +{
>>>> +  struct cpumasks *these_masks = &this_cpu(cpumasks);
>>>> +  const struct cpumasks *masks = &cpumask_defaults;
>>>> +
>>>> +#define LAZY(cap, msr, field)                                     \
>>>> +  ({                                                              \
>>>> +          if ( ((levelling_caps & cap) == cap) &&                 \
>>>> +               (these_masks->field != masks->field) )             \
>>>> +          {                                                       \
>>>> +                  wrmsr_amd(msr, masks->field);                   \
>>>> +                  these_masks->field = masks->field;              \
>>>> +          }                                                       \
>>>> +  })
>>>> +
>>>> +  LAZY(LCAP_1cd,  MSR_K8_FEATURE_MASK,       _1cd);
>>>> +  LAZY(LCAP_e1cd, MSR_K8_EXT_FEATURE_MASK,   e1cd);
>>>> +  LAZY(LCAP_7ab0, MSR_AMD_L7S0_FEATURE_MASK, _7ab0);
>>>> +  LAZY(LCAP_6c,   MSR_AMD_THRM_FEATURE_MASK, _6c);
>>> So here we already have the first example where fully consistent
>>> naming would allow elimination of a macro parameter.
Token concatenation deliberately obscures code from tools like grep and
cscope.  There is already too much of the Xen source code obscured like
this; I'd prefer not to add to it.
> I'm not sure grep-abilty is more important than code readability.
> My personal opinion surely is that the latter is more important.

It doesn't matter how readable the code is if you can't find it again later.

And in this case, the difference between 2 and 3 macro parameters
doesn't affect the readability of the code.  All context is available in
the preceding 10 lines.
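To make the trade-off concrete, here is a self-contained sketch (made-up
names, not the patch's code) of the two-parameter variant Jan suggests,
where `LCAP_##field` is synthesised by token pasting.  Grepping for
`LCAP_f1cd` then finds the definition but not the use sites, which is
the objection above:

```c
#include <assert.h>

/* Toy stand-ins for the per-CPU and default mask state. */
struct masks { unsigned f1cd, fe1cd; };
static struct masks cur = { 1, 2 }, def = { 5, 6 };
static unsigned levelling_caps = 0x3;

#define LCAP_f1cd  0x1
#define LCAP_fe1cd 0x2

static unsigned last_written;
static void fake_wrmsr(unsigned val) { last_written = val; }

/* Two-parameter lazy-write macro: the capability constant is pasted
 * from the field name, so "LCAP_f1cd" never appears literally at the
 * use sites below. */
#define LAZY2(field, wr) do {                                       \
        if ( ((levelling_caps & LCAP_##field) == LCAP_##field) &&   \
             (cur.field != def.field) )                             \
        {                                                           \
            wr(def.field);                                          \
            cur.field = def.field;                                  \
        }                                                           \
    } while (0)

static void ctxt_switch_levelling(void)
{
    LAZY2(f1cd,  fake_wrmsr);
    LAZY2(fe1cd, fake_wrmsr);
}
```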

>
>>> And then, how is this supposed to work? You only restore defaults,
>>> but never write non-default values. Namely, nextd is an unused
>>> function parameter ...
>>>
>>> Also I guess my comment about adding unused code needs
>>> repeating here.
>> Future patches build on this, both using the parameter, and not using
>> the defaults.
>>
>> I am also certain that if I did two patches, the first putting in a void
>> function, and the second changing it to a pointer, your review would ask
>> me to turn it into this.
> Well, I realize things will all make sense by the end of the series, but
> honestly in some of the cases I'm not sure the split between patches
> was well done.

If you can suggest a better ordering then I am all ears.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

