
Re: [Xen-devel] [PATCH 06/13] libx86: Introduce a helper to serialise a cpuid_policy object



>>> On 04.07.18 at 18:46, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 04/07/18 10:01, Jan Beulich wrote:
>>>>> On 03.07.18 at 22:55, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> --- a/xen/common/libx86/cpuid.c
>>> +++ b/xen/common/libx86/cpuid.c
>>> @@ -34,6 +34,100 @@ const uint32_t *x86_cpuid_lookup_deep_deps(uint32_t feature)
>>>  }
>>>  
>>>  /*
>>> + * Copy a single cpuid_leaf into a provided xen_cpuid_leaf_t buffer,
>>> + * performing boundary checking against the buffer size.
>>> + */
>>> +static int copy_leaf_to_buffer(uint32_t leaf, uint32_t subleaf,
>>> +                               const struct cpuid_leaf *data,
>>> +                               cpuid_leaf_buffer_t leaves,
>>> +                               uint32_t *curr_entry, const uint32_t nr_entries)
>>> +{
>>> +    const xen_cpuid_leaf_t val = {
>>> +        leaf, subleaf, data->a, data->b, data->c, data->d,
>>> +    };
>>> +
>>> +    if ( *curr_entry == nr_entries )
>>> +        return -ENOBUFS;
>>> +
>>> +    if ( copy_to_buffer_offset(leaves, *curr_entry, &val, 1) )
>>> +        return -EFAULT;
>>> +
>>> +    ++*curr_entry;
>> Following on from what Wei has said - you don't mean to have a way
>> here then to indicate to a higher up caller how many slots would have
>> been needed?
> 
> I don't understand your query.  An individual build has a compile-time
> static maximum number of leaves, and this number can be obtained in the
> usual way by making a hypercall with a NULL guest handle.
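> 
> i.e. the usual pattern from a caller's point of view (sketch only - the
> xc_cpuid_policy_get() wrapper name and its signature here are
> illustrative, not a final libxc API):
> 
>     uint32_t nr = 0;
>     xen_cpuid_leaf_t *leaves;
>     int rc;
> 
>     /* First call with a NULL buffer: Xen reports the required count. */
>     rc = xc_cpuid_policy_get(xch, domid, NULL, &nr);
>     if ( rc )
>         return rc;
> 
>     leaves = calloc(nr, sizeof(*leaves));
>     if ( !leaves )
>         return -ENOMEM;
> 
>     /* Second call with a suitably sized buffer. */
>     rc = xc_cpuid_policy_get(xch, domid, leaves, &nr);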

My point is that this is generally a sub-optimal interface. Seeing how
closely libxc is tied to a specific hypervisor build (or at least version),
I don't see why the caller couldn't set up a suitably sized array without
first querying with a null handle, and only re-issue the call in the
unlikely event that a larger buffer is actually necessary.
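
Something along these lines (sketch only - the wrapper name and the
compile-time constant are illustrative):

    /* Allocate based on the build's static maximum up front. */
    uint32_t nr = MAX_SERIALISED_CPUID_LEAVES;
    xen_cpuid_leaf_t *leaves = calloc(nr, sizeof(*leaves));
    int rc = leaves ? xc_cpuid_policy_get(xch, domid, leaves, &nr)
                    : -ENOMEM;

    if ( rc == -ENOBUFS )
    {
        /* Unlikely: the hypervisor is newer and has more leaves.
         * Re-query the required count with a NULL buffer and retry. */
        free(leaves);
        rc = xc_cpuid_policy_get(xch, domid, NULL, &nr);
        if ( !rc && (leaves = calloc(nr, sizeof(*leaves))) != NULL )
            rc = xc_cpuid_policy_get(xch, domid, leaves, &nr);
    }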

> The external representation must not encode this number, as it will
> change from build to build and from hardware to hardware, and whenever
> we gain a load of new features via a microcode update.

Of course.

>>> +int x86_cpuid_copy_to_buffer(const struct cpuid_policy *p,
>>> +                             cpuid_leaf_buffer_t leaves,
>>> +                             uint32_t *nr_entries_p)
>>> +{
>>> +    const uint32_t nr_entries = *nr_entries_p;
>>> +    uint32_t curr_entry = 0, leaf, subleaf;
>>> +
>>> +#define COPY_LEAF(l, s, data)                                       \
>>> +    ({  int ret;                                                    \
>>> +        if ( (ret = copy_leaf_to_buffer(                            \
>>> +                  l, s, data, leaves, &curr_entry, nr_entries)) )   \
>>> +            return ret;                                             \
>>> +    })
>>> +
>>> +    /* Basic leaves. */
>>> +    for ( leaf = 0; leaf <= MIN(p->basic.max_leaf,
>>> +                                ARRAY_SIZE(p->basic.raw) - 1); ++leaf )
>>> +    {
>>> +        switch ( leaf )
>>> +        {
>>> +        case 0x4:
>>> +            for ( subleaf = 0; subleaf < ARRAY_SIZE(p->cache.raw); ++subleaf )
>>> +                COPY_LEAF(leaf, subleaf, &p->cache.raw[subleaf]);
>>> +            break;
>>> +
>>> +        case 0x7:
>>> +            for ( subleaf = 0;
>>> +                  subleaf <= MIN(p->feat.max_subleaf,
>>> +                                 ARRAY_SIZE(p->feat.raw) - 1); ++subleaf )
>>> +                COPY_LEAF(leaf, subleaf, &p->feat.raw[subleaf]);
>>> +            break;
>>> +
>>> +        case 0xb:
>>> +            for ( subleaf = 0; subleaf < ARRAY_SIZE(p->topo.raw); ++subleaf )
>>> +                COPY_LEAF(leaf, subleaf, &p->topo.raw[subleaf]);
>>> +            break;
>>> +
>>> +        case 0xd:
>>> +            for ( subleaf = 0; subleaf < ARRAY_SIZE(p->xstate.raw); ++subleaf )
>>> +                COPY_LEAF(leaf, subleaf, &p->xstate.raw[subleaf]);
>>> +            break;
>>> +
>>> +        default:
>>> +            COPY_LEAF(leaf, XEN_CPUID_NO_SUBLEAF, &p->basic.raw[leaf]);
>>> +            break;
>>> +        }
>>> +    }
>>> +
>>> +    COPY_LEAF(0x40000000, XEN_CPUID_NO_SUBLEAF,
>>> +              &(struct cpuid_leaf){ p->hv_limit });
>>> +    COPY_LEAF(0x40000100, XEN_CPUID_NO_SUBLEAF,
>>> +              &(struct cpuid_leaf){ p->hv2_limit });
>> Is it a good idea to produce wrong (zero) EBX, ECX, and EDX values here?
> 
> The handling of these leaves is currently problematic, and this patch is
> bug-compatible with how DOMCTL_set_cpuid currently behaves (see
> update_domain_cpuid_info()).
> 
> Annoyingly, I need this marshalling series implemented before I can fix
> the hypervisor leaves to use the "new" CPUID infrastructure; the main
> complication being the dynamic location of the Xen leaves.

Well, okay, but I'd prefer it if such restrictions and bug-compatibilities
were spelled out in the commit message.

> Eventually, the interface will be that Xen leaves live at 0x40000000 and
> the toolstack can manipulate a subset of the information by providing
> leaves in the usual manner.  To enable viridian, the toolstack writes
> HyperV's signature at 0x40000000, and Xen's at 0x40000100.  This also
> allows for a mechanism to hide the Xen CPUID leaves by writing a 0 max leaf.
> 
> Amongst other things, this will allow sensible control of the Viridian
> features without having to squeeze more bits into the HVMPARAM.
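> 
> In serialised form, that would look something like this (sketch only -
> the max-leaf values in eax are illustrative, and the exact contents are
> not part of this series):
> 
>     xen_cpuid_leaf_t hv_leaves[] = {
>         /* "Microsoft Hv" signature in ebx/ecx/edx. */
>         { 0x40000000, XEN_CPUID_NO_SUBLEAF,
>           0x40000005, 0x7263694d, 0x666f736f, 0x76482074 },
>         /* Xen's "XenVMMXenVMM" signature, relocated to 0x40000100.
>          * Writing a 0 max leaf in eax here would hide the Xen leaves. */
>         { 0x40000100, XEN_CPUID_NO_SUBLEAF,
>           0x40000102, 0x566e6558, 0x65584d4d, 0x4d4d566e },
>     };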

Ah, interesting - you basically mean to deprecate the current way of
configuring Viridian features then, if I get this right?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

