
Re: [Xen-devel] [PATCH 07/27] x86/cpuid: Recalculate a domain's CPUID policy when appropriate



On 04/01/17 15:01, Jan Beulich wrote:
>>>> On 04.01.17 at 13:39, <andrew.cooper3@xxxxxxxxxx> wrote:
>> +void recalculate_cpuid_policy(struct domain *d)
>> +{
>> +    struct cpuid_policy *p = d->arch.cpuid;
>> +    const struct cpuid_policy *max =
>> +        is_pv_domain(d) ? &pv_max_policy : &hvm_max_policy;
>> +    uint32_t fs[FSCAPINTS], max_fs[FSCAPINTS];
>> +    unsigned int i;
>> +
>> +    cpuid_policy_to_featureset(p, fs);
>> +    memcpy(max_fs, max->fs, sizeof(max_fs));
>> +
>> +    /* Allow a toolstack to possibly select ITSC... */
>> +    if ( cpu_has_itsc )
>> +        __set_bit(X86_FEATURE_ITSC, max_fs);
> This special casing calls for some explanation in the commit message
> (or the comment here).

Ah - this logic is all copied from the dynamic adjustments we currently
make in {pv,hvm}_cpuid().  This ITSC one is expressed differently, but
it should hopefully be obvious in the context of patches 18 and 19.

I will adjust the commit message.
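
For reference, the per-query form of the same rule as {pv,hvm}_cpuid()
apply it today looks roughly like this (a sketch from memory; variable
names illustrative, not the verbatim code):

    case 0x80000007:
        /*
         * Only advertise ITSC when the guest's view of the TSC really
         * is invariant: either the domain cannot migrate, or its TSC
         * is fully emulated (vtsc).
         */
        if ( !d->disable_migrate && !d->arch.vtsc )
            edx &= ~cpufeat_mask(X86_FEATURE_ITSC);
        break;

recalculate_cpuid_policy() expresses the same thing once, at
policy-recalculation time, rather than on every CPUID query.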

>
>> +    for ( i = 0; i < ARRAY_SIZE(fs); i++ )
>> +        fs[i] &= max_fs[i];
>> +
>> +    if ( is_pv_32bit_domain(d) )
>> +    {
>> +        __clear_bit(X86_FEATURE_LM, fs);
>> +        if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD )
>> +            __clear_bit(X86_FEATURE_SYSCALL, fs);
>> +    }
>> +
>> +    if ( is_hvm_domain(d) && !hap_enabled(d) )
>> +    {
>> +        for ( i = 0; i < ARRAY_SIZE(fs); i++ )
>> +            fs[i] &= hvm_shadow_featuremask[i];
>> +    }
> Wouldn't this better go into the other loop ANDing fs[i]?

Maybe (although that would put a conditional in the middle of a tight loop).

I think the answer will depend on how much other content ends up here,
and I haven't finished replacing hvm_cpuid() yet.
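
If it does end up merged, hoisting the predicate keeps the
per-iteration cost down to a test of a loop-invariant pointer, which
the compiler can unswitch anyway.  Something like (sketch only):

    const uint32_t *shadow_fs =
        (is_hvm_domain(d) && !hap_enabled(d)) ? hvm_shadow_featuremask
                                              : NULL;

    for ( i = 0; i < ARRAY_SIZE(fs); i++ )
    {
        fs[i] &= max_fs[i];
        if ( shadow_fs )
            fs[i] &= shadow_fs[i];
    }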

>
>> +    /* ... but hide ITSC in the common case. */
>> +    if ( !d->disable_migrate && !d->arch.vtsc )
>> +        __clear_bit(X86_FEATURE_ITSC, fs);
> The 32-bit PV logic could easily move below here afaics, reducing
> the distance between the two parts of the comment.
>
> Also this requires adjustment of the policy by (the caller of)
> tsc_set_info().

And also XEN_DOMCTL_set_disable_migrate.

Currently the various toolstacks issue these hypercalls in the correct
order, so I was planning to ignore these edge cases until the
toolstack-side work happens (see below).
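
When those edge cases do get addressed, the fix is mechanical: each
hypercall which alters an input to the policy recalculates afterwards.
E.g. (sketch; hypercall name as cited above, field layout assumed):

    case XEN_DOMCTL_set_disable_migrate:
        d->disable_migrate = op->u.disable_migrate.disable;
        /* ITSC visibility depends on disable_migrate/vtsc. */
        recalculate_cpuid_policy(d);
        break;

with the caller of tsc_set_info() gaining an equivalent call.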

>
>>  static void update_domain_cpuid_info(struct domain *d,
>>                                       const xen_domctl_cpuid_t *ctl)
>>  {
>> +    struct cpuid_policy *p = d->arch.cpuid;
>> +    struct cpuid_leaf leaf = { ctl->eax, ctl->ebx, ctl->ecx, ctl->edx };
>> +
>> +    if ( ctl->input[0] < ARRAY_SIZE(p->basic.raw) )
>> +    {
>> +        if ( ctl->input[0] == 7 )
>> +        {
>> +            if ( ctl->input[1] < ARRAY_SIZE(p->feat.raw) )
>> +                p->feat.raw[ctl->input[1]] = leaf;
>> +        }
>> +        else if ( ctl->input[0] == 0xd )
>> +        {
>> +            if ( ctl->input[1] < ARRAY_SIZE(p->xstate.raw) )
>> +                p->xstate.raw[ctl->input[1]] = leaf;
>> +        }
>> +        else
>> +            p->basic.raw[ctl->input[0]] = leaf;
>> +    }
>> +    else if ( (ctl->input[0] - 0x80000000) < ARRAY_SIZE(p->extd.raw) )
>> +        p->extd.raw[ctl->input[0] - 0x80000000] = leaf;
> These checks against ARRAY_SIZE() worry me - wouldn't we better
> refuse any attempts to set values not representable in the policy?

We can't do that yet without toolstack-side changes.  Currently the
toolstack can lodge any values it wishes, including arbitrary
information from a cpuid= clause, and all we do is ignore the values
which don't fit.

The plan (which will fix construction of domains when dom0 isn't a PV
guest) is to introduce a new DOMCTL_get_cpuid_policy, whose result the
toolstack will modify to its taste and present back to Xen, at which
point I will tighten up the hypercall interfaces to require a sensible
policy from the toolstack.
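
At that point the tightening asked for above becomes a matter of
rejecting rather than ignoring.  Roughly (sketch, assuming the function
grows an error return and recalculates the policy on success):

    static int update_domain_cpuid_info(struct domain *d,
                                        const xen_domctl_cpuid_t *ctl)
    {
        struct cpuid_policy *p = d->arch.cpuid;
        struct cpuid_leaf leaf = { ctl->eax, ctl->ebx, ctl->ecx, ctl->edx };

        if ( ctl->input[0] < ARRAY_SIZE(p->basic.raw) )
        {
            if ( ctl->input[0] == 7 )
            {
                if ( ctl->input[1] >= ARRAY_SIZE(p->feat.raw) )
                    return -EINVAL;      /* Reject, don't ignore. */
                p->feat.raw[ctl->input[1]] = leaf;
            }
            else if ( ctl->input[0] == 0xd )
            {
                if ( ctl->input[1] >= ARRAY_SIZE(p->xstate.raw) )
                    return -EINVAL;
                p->xstate.raw[ctl->input[1]] = leaf;
            }
            else
                p->basic.raw[ctl->input[0]] = leaf;
        }
        else if ( (ctl->input[0] - 0x80000000) < ARRAY_SIZE(p->extd.raw) )
            p->extd.raw[ctl->input[0] - 0x80000000] = leaf;
        else
            return -EINVAL;          /* Not representable in the policy. */

        recalculate_cpuid_policy(d);
        return 0;
    }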

~Andrew
