
Re: [Xen-devel] [PATCH RFC] x86/sysctl: Implement XEN_SYSCTL_get_cpuid_policy



On 03/08/17 16:51, Jan Beulich wrote:
>>>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 07/27/17 7:48 PM >>>
>> @@ -599,6 +600,93 @@ int init_domain_cpuid_policy(struct domain *d)
>>      return 0;
>>  }
>  >
>> +/*
>> + * Copy a single cpuid_leaf into a guest-provided xen_cpuid_leaf_t buffer,
>> + * performing boundary checking against the guests array size.
>> + */
>> +static int copy_leaf_to_guest(uint32_t leaf, uint32_t subleaf,
>> +                              const struct cpuid_leaf *data,
>> +                              XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) leaves,
>> +                              uint32_t *curr_leaf, uint32_t nr_leaves)
>> +{
>> +    const xen_cpuid_leaf_t val =
>> +        { leaf, subleaf, data->a, data->b, data->c, data->d };
>> +
>> +    if ( copy_to_guest_offset(leaves, *curr_leaf, &val, 1) )
>> +        return -EFAULT;
>> +
>> +    if ( ++(*curr_leaf) == nr_leaves )
>> +        return -ENOBUFS;
> Why? Either check before copying, or prevent returning an error when
> this last entry precisely fit into the last provided slot. In particular ...
>
>> +int copy_cpuid_policy_to_guest(const struct cpuid_policy *p,
>> +                               XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) leaves,
>> +                               uint32_t *nr_leaves_p)
>> +{
>> +    uint32_t nr_leaves = *nr_leaves_p, curr_leaf = 0, leaf, subleaf;
>> +
>> +    if ( nr_leaves == 0 )
>> +        return -ENOBUFS;
> ... such an explicit check should not be needed.

Ok.
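
For illustration, moving the bounds check ahead of the copy would give
something like the below (a sketch only, reusing the helper's name and
signature from the patch above, not necessarily the final form):

static int copy_leaf_to_guest(uint32_t leaf, uint32_t subleaf,
                              const struct cpuid_leaf *data,
                              XEN_GUEST_HANDLE_64(xen_cpuid_leaf_t) leaves,
                              uint32_t *curr_leaf, uint32_t nr_leaves)
{
    const xen_cpuid_leaf_t val =
        { leaf, subleaf, data->a, data->b, data->c, data->d };

    /* No slot left for this entry? */
    if ( *curr_leaf == nr_leaves )
        return -ENOBUFS;

    if ( copy_to_guest_offset(leaves, *curr_leaf, &val, 1) )
        return -EFAULT;

    /* Filling the final slot exactly is not an error. */
    ++*curr_leaf;

    return 0;
}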

>
>> +#define COPY_LEAF(l, s, data)                                   \
>> +    ({ int ret; /* Elide leaves which are fully empty. */       \
>> +        if ( (*(uint64_t *)(&(data)->a) |                       \
>> +              *(uint64_t *)(&(data)->c)) &&                     \
> This sort of casting looks rather fragile.

I've already factored it out into:

static bool is_empty_leaf(const struct cpuid_leaf *l)
{
    /*
     * Logically '!(l->a | l->b | l->c | l->d)', but the compiler needs some
     * help realising that it's actually looking for 16 bytes of adjacent
     * zeros, and can be far more efficient than using 32-bit operations.
     */
    return !(*(uint64_t *)&l->a | *(uint64_t *)&l->c);
}
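
If the casts turn out to be too fragile, a cast-free alternative (purely
illustrative, not what the code above does) would be to let memcmp() do the
16-byte comparison and rely on the compiler lowering it to wide loads:

static bool is_empty_leaf(const struct cpuid_leaf *l)
{
    static const struct cpuid_leaf empty;

    /* Compare all 16 bytes against an all-zeroes leaf, without type punning. */
    return memcmp(l, &empty, sizeof(*l)) == 0;
}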

>
>> --- a/xen/arch/x86/sysctl.c
>> +++ b/xen/arch/x86/sysctl.c
>> @@ -250,6 +250,42 @@ long arch_do_sysctl(
>>          break;
>>      }
>  >
>> +    case XEN_SYSCTL_get_cpuid_policy:
>> +    {
>> +        static const struct cpuid_policy *const policy_table[] = {
>> +            [XEN_SYSCTL_cpuid_policy_raw]  = &raw_cpuid_policy,
>> +            [XEN_SYSCTL_cpuid_policy_host] = &host_cpuid_policy,
>> +        };
>> +        const struct cpuid_policy *p = NULL;
>> +
>> +        /* Request for maximum number of leaves? */
>> +        if ( guest_handle_is_null(sysctl->u.cpuid_policy.policy) )
>> +        {
>> +            sysctl->u.cpuid_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
>> +            if ( __copy_field_to_guest(u_sysctl, sysctl,
>> +                                       u.cpuid_policy.nr_leaves) )
>> +                ret = -EFAULT;
> Couldn't this be folded with the other copy operation below, at once ...
>
>> +            break;
>> +        }
>> +
>> +        /* Look up requested policy. */
>> +        if ( sysctl->u.cpuid_policy.index < ARRAY_SIZE(policy_table) )
>> +            p = policy_table[sysctl->u.cpuid_policy.index];
>> +
>> +        /* Bad policy index? */
>> +        if ( !p )
>> +            ret = -EINVAL;
>> +        else
>> +            ret = copy_cpuid_policy_to_guest(p, sysctl->u.cpuid_policy.policy,
>> +                                             &sysctl->u.cpuid_policy.nr_leaves);
>> +
>> +        /* Inform the caller of how many leaves we wrote. */
>> +        if ( !ret )
>> +            ret = __copy_field_to_guest(u_sysctl, sysctl,
>> +                                        u.cpuid_policy.nr_leaves);
> ... avoiding returning anything other than -EFAULT if the copying fails?

Looks like it can.
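
One possible folded shape, with a single writeback of nr_leaves shared by
both paths (a sketch only, reusing the identifiers from the hunk above; the
next revision may well differ in detail):

    case XEN_SYSCTL_get_cpuid_policy:
    {
        static const struct cpuid_policy *const policy_table[] = {
            [XEN_SYSCTL_cpuid_policy_raw]  = &raw_cpuid_policy,
            [XEN_SYSCTL_cpuid_policy_host] = &host_cpuid_policy,
        };
        const struct cpuid_policy *p = NULL;

        if ( guest_handle_is_null(sysctl->u.cpuid_policy.policy) )
            /* Request for maximum number of leaves? */
            sysctl->u.cpuid_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
        else
        {
            /* Look up requested policy. */
            if ( sysctl->u.cpuid_policy.index < ARRAY_SIZE(policy_table) )
                p = policy_table[sysctl->u.cpuid_policy.index];

            /* Bad policy index? */
            if ( !p )
            {
                ret = -EINVAL;
                break;
            }

            ret = copy_cpuid_policy_to_guest(p, sysctl->u.cpuid_policy.policy,
                                             &sysctl->u.cpuid_policy.nr_leaves);
            if ( ret )
                break;
        }

        /* Inform the caller of how many leaves are available/written. */
        if ( __copy_field_to_guest(u_sysctl, sysctl,
                                   u.cpuid_policy.nr_leaves) )
            ret = -EFAULT;
        break;
    }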

>
>> --- a/xen/include/asm-x86/cpuid.h
>> +++ b/xen/include/asm-x86/cpuid.h
>> @@ -70,6 +70,18 @@ DECLARE_PER_CPU(bool, cpuid_faulting_enabled);
>> #define CPUID_GUEST_NR_EXTD       MAX(CPUID_GUEST_NR_EXTD_INTEL, \
>>                                       CPUID_GUEST_NR_EXTD_AMD)
>  >
>> +/*
>> + * Maximum number of leaves a struct cpuid_policy turns into when serialised
>> + * for interaction with the toolstack.  (Sum of all leaves in each union,
>> + * less the entries in basic which sub-unions hang off of.)
>> + */
>> +#define CPUID_MAX_SERIALISED_LEAVES                     \
>> +    (CPUID_GUEST_NR_BASIC +                             \
>> +     CPUID_GUEST_NR_FEAT - !!CPUID_GUEST_NR_FEAT +      \
>> +     CPUID_GUEST_NR_CACHE - !!CPUID_GUEST_NR_CACHE +    \
>> +     CPUID_GUEST_NR_XSTATE - !!CPUID_GUEST_NR_XSTATE +  \
>> +     CPUID_GUEST_NR_EXTD)
> Why these strange !! uses? Can't you just say "- 1", as these counts all are
> non-zero anyway?

That is probably cleaner.
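
i.e. something along these lines, assuming (as noted) that each of the
counts involved is guaranteed to be non-zero:

#define CPUID_MAX_SERIALISED_LEAVES                     \
    (CPUID_GUEST_NR_BASIC +                             \
     CPUID_GUEST_NR_FEAT - 1 +                          \
     CPUID_GUEST_NR_CACHE - 1 +                         \
     CPUID_GUEST_NR_XSTATE - 1 +                        \
     CPUID_GUEST_NR_EXTD)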

>
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -692,6 +692,14 @@ struct xen_domctl_cpuid {
>> };
>> typedef struct xen_domctl_cpuid xen_domctl_cpuid_t;
>> DEFINE_XEN_GUEST_HANDLE(xen_domctl_cpuid_t);
>> +
>> +#define XEN_CPUID_NO_SUBLEAF 0xffffffffu
> What if some leaf gains a subleaf with this index?

The only alternatives are to use a wider datatype, or to rely on out-of-band
knowledge of which leaves have subleaves.

There's already one problem with leaf 0xb, which we currently treat as having
no subleaves.

I'm not sure which of these options I dislike least.

>
>> +struct xen_cpuid_leaf {
>> +    uint32_t leaf, subleaf;
>> +    uint32_t a, b, c, d;
> In the public interface I'd prefer eax etc.
>
> Also I assume you place this in domctl.h because of anticipated use by
> a future domctl?

Yes.
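
For reference, picking up the register-name suggestion would make the public
structure look something like this (sketch only):

struct xen_cpuid_leaf {
    uint32_t leaf, subleaf;
    uint32_t eax, ebx, ecx, edx;
};
typedef struct xen_cpuid_leaf xen_cpuid_leaf_t;
DEFINE_XEN_GUEST_HANDLE(xen_cpuid_leaf_t);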

~Andrew


 

