
Re: [Xen-devel] [PATCH 03/27] x86/cpuid: Introduce struct cpuid_policy



>>> On 04.01.17 at 13:39, <andrew.cooper3@xxxxxxxxxx> wrote:
> struct cpuid_policy will eventually be a complete replacement for the cpuids[]
> array, with a fixed layout and named fields to allow O(1) access to specific
> information.
> 
> For now, the CPUID content is capped at the 0xd and 0x8000001c leaves, which
> matches the maximum policy that the toolstack will generate for a domain.

Especially (but not only) leaf 0x17 and extended leaf 0x8000001e, both of which
lie beyond the 0xd / 0x8000001c caps chosen here, make me wonder whether this
is a good starting point.
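For context, the lookup pattern this is heading towards is roughly the
following (illustrative only - the local variable naming isn't taken from this
patch):

    const struct cpuid_policy *p = ...;
    struct cpuid_leaf res;

    if ( p->basic.max_leaf >= 7 )
        res = p->feat.raw[0];    /* Leaf 7, sub-leaf 0. */

i.e. a direct, O(1) access, instead of a linear search through the existing
cpuids[] array for a matching (leaf, sub-leaf) pair.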

> @@ -67,6 +80,55 @@ static void __init sanitise_featureset(uint32_t *fs)
>                            (fs[FEATURESET_e1d] & ~CPUID_COMMON_1D_FEATURES));
>  }
>  
> +static void __init calculate_raw_policy(void)
> +{
> +    struct cpuid_policy *p = &raw_policy;
> +    unsigned int i;
> +
> +    cpuid_leaf(0, &p->basic.raw[0]);
> +    for ( i = 1; i < min(ARRAY_SIZE(p->basic.raw),
> +                         p->basic.max_leaf + 1ul); ++i )
> +    {
> +        /* Collected later. */
> +        if ( i == 0x7 || i == 0xd )
> +            continue;
> +
> +        cpuid_leaf(i, &p->basic.raw[i]);

Leaves 2, 4, 0xb, and 0xf are, iirc, multiple-invocation ones too.
There should at least be a comment here clarifying why they don't
need treatment similar to leaves 7 and 0xd.
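If the raw contents of those leaves are ever wanted, their sub-leaves will need
enumerating individually anyway. For leaf 4 that would look something like this
(a sketch only, assuming the a/b/c/d member naming of struct cpuid_leaf):

    unsigned int subleaf;

    for ( subleaf = 0; ; ++subleaf )
    {
        struct cpuid_leaf tmp;

        cpuid_count_leaf(4, subleaf, &tmp);
        if ( (tmp.a & 0x1f) == 0 )  /* Cache type 0: no further sub-leaves. */
            break;
        /* ... record tmp ... */
    }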

> +    }
> +
> +    if ( p->basic.max_leaf >= 7 )
> +    {
> +        cpuid_count_leaf(7, 0, &p->feat.raw[0]);
> +
> +        for ( i = 1; i < min(ARRAY_SIZE(p->feat.raw),
> +                             p->feat.max_subleaf + 1ul); ++i )
> +            cpuid_count_leaf(7, i, &p->feat.raw[i]);
> +    }
> +
> +    if ( p->basic.max_leaf >= 0xd )
> +    {
> +        uint64_t xstates;
> +
> +        cpuid_count_leaf(0xd, 0, &p->xstate.raw[0]);
> +        cpuid_count_leaf(0xd, 1, &p->xstate.raw[1]);
> +
> +        xstates = ((uint64_t)(p->xstate.xcr0_high | p->xstate.xss_high) << 32) |
> +            (p->xstate.xcr0_low | p->xstate.xss_low);
> +
> +        for ( i = 2; i < 63; ++i )
> +        {
> +            if ( xstates & (1u << i) )

1ull - i goes up to 62 here, so shifting the 32-bit constant is undefined
behaviour once i reaches 32.
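I.e.

            if ( xstates & (1ull << i) )

so that bits 32-62 can actually be tested.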

> @@ -65,6 +66,78 @@ extern struct cpuidmasks cpuidmask_defaults;
>  /* Whether or not cpuid faulting is available for the current domain. */
>  DECLARE_PER_CPU(bool, cpuid_faulting_enabled);
>  
> +#define CPUID_GUEST_NR_BASIC      (0xdu + 1)
> +#define CPUID_GUEST_NR_FEAT       (0u + 1)
> +#define CPUID_GUEST_NR_XSTATE     (62u + 1)
> +#define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
> +#define CPUID_GUEST_NR_EXTD_AMD   (0x1cu + 1)
> +#define CPUID_GUEST_NR_EXTD       MAX(CPUID_GUEST_NR_EXTD_INTEL, \
> +                                      CPUID_GUEST_NR_EXTD_AMD)
> +
> +struct cpuid_policy
> +{
> +    /*
> +     * WARNING: During the CPUID transition period, not all information here
> +     * is accurate.  The following items are accurate, and can be relied upon.
> +     *
> +     * Global *_policy objects:
> +     *
> +     * - Host accurate:
> +     *   - max_{,sub}leaf
> +     *   - {xcr0,xss}_{high,low}
> +     *
> +     * - Guest appropriate:
> +     *   - Nothing

I don't understand the meaning of the "accurate" above and
the "appropriate" here.

> +     *
> +     * Everything else should be considered inaccurate, and not necesserily 0.
> +     */
> +
> +    /* Basic leaves: 0x000000xx */
> +    union {
> +        struct cpuid_leaf raw[CPUID_GUEST_NR_BASIC];
> +        struct {
> +            /* Leaf 0x0 - Max and vendor. */
> +            struct {
> +                uint32_t max_leaf, :32, :32, :32;

These unnamed bitfields are here solely for the BUILD_BUG_ON()s?
Wouldn't it make sense to omit them here, and use > instead of !=
there? Also is there really value in nesting unnamed structures like
this?

Jan

