Re: [PATCH 6/6] xen/x86: Add topology generator
On Tue, Jan 09, 2024 at 03:38:34PM +0000, Alejandro Vallejo wrote:
> This allows toolstack to synthesise sensible topologies for guests. In
> particular, this patch causes x2APIC IDs to be packed according to the
> topology now exposed to the guests on leaf 0xb.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@xxxxxxxxx>
> ---
> tools/include/xenguest.h | 15 ++++
> tools/libs/guest/xg_cpuid_x86.c | 144 ++++++++++++++++++++------------
> xen/arch/x86/cpu-policy.c | 6 +-
> 3 files changed, 107 insertions(+), 58 deletions(-)
>
> diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
> index 4e9078fdee..f0043c559b 100644
> --- a/tools/include/xenguest.h
> +++ b/tools/include/xenguest.h
> @@ -843,5 +843,20 @@ enum xc_static_cpu_featuremask {
> XC_FEATUREMASK_HVM_HAP_DEF,
> };
> const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
> +
> +/**
> + * Synthesise topology information in `p` given high-level constraints
> + *
> + * Topology is given in various fields across several leaves, some of
> + * which are vendor-specific. This function uses the policy itself to
> + * derive such leaves from threads/core and cores/package.
> + *
> + * @param p CPU policy of the domain.
> + * @param threads_per_core threads/core. Doesn't need to be a power of 2.
> + * @param cores_per_pkg cores/package. Doesn't need to be a power of 2.
> + */
> +void xc_topo_from_parts(struct cpu_policy *p,
> + uint32_t threads_per_core, uint32_t cores_per_pkg);
I think you can use plain unsigned int here.
> +
> #endif /* __i386__ || __x86_64__ */
> #endif /* XENGUEST_H */
> diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> index 4453178100..7a622721be 100644
> --- a/tools/libs/guest/xg_cpuid_x86.c
> +++ b/tools/libs/guest/xg_cpuid_x86.c
> @@ -584,7 +584,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
> bool hvm;
> xc_domaininfo_t di;
> struct xc_cpu_policy *p = xc_cpu_policy_init();
> - unsigned int i, nr_leaves = ARRAY_SIZE(p->leaves), nr_msrs = 0;
> + unsigned int nr_leaves = ARRAY_SIZE(p->leaves), nr_msrs = 0;
> uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
> uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
> uint32_t len = ARRAY_SIZE(host_featureset);
> @@ -727,60 +727,8 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
> }
> else
> {
> - /*
> - * Topology for HVM guests is entirely controlled by Xen. For now, we
> - * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
> - */
> - p->policy.basic.htt = true;
> - p->policy.extd.cmp_legacy = false;
> -
> - /*
> - * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.
> - * Update to reflect vLAPIC_ID = vCPU_ID * 2, but make sure to avoid
> - * overflow.
> - */
> - if ( !p->policy.basic.lppp )
> - p->policy.basic.lppp = 2;
> - else if ( !(p->policy.basic.lppp & 0x80) )
> - p->policy.basic.lppp *= 2;
> -
> - switch ( p->policy.x86_vendor )
> - {
> - case X86_VENDOR_INTEL:
> - for ( i = 0; (p->policy.cache.subleaf[i].type &&
> - i < ARRAY_SIZE(p->policy.cache.raw)); ++i )
> - {
> - p->policy.cache.subleaf[i].cores_per_package =
> - (p->policy.cache.subleaf[i].cores_per_package << 1) | 1;
> - p->policy.cache.subleaf[i].threads_per_cache = 0;
> - }
> - break;
> -
> - case X86_VENDOR_AMD:
> - case X86_VENDOR_HYGON:
> - /*
> - * Leaf 0x80000008 ECX[15:12] is ApicIdCoreSize.
> - * Leaf 0x80000008 ECX[7:0] is NumberOfCores (minus one).
> - * Update to reflect vLAPIC_ID = vCPU_ID * 2. But avoid
> - * - overflow,
> - * - going out of sync with leaf 1 EBX[23:16],
> - * - incrementing ApicIdCoreSize when it's zero (which changes the
> - * meaning of bits 7:0).
> - *
> - * UPDATE: In addition to avoiding overflow, some
> - * proprietary operating systems have trouble with
> - * apic_id_size values greater than 7. Limit the value to
> - * 7 for now.
> - */
> - if ( p->policy.extd.nc < 0x7f )
> - {
> - if ( p->policy.extd.apic_id_size != 0 && p->policy.extd.apic_id_size < 0x7 )
> - p->policy.extd.apic_id_size++;
> -
> - p->policy.extd.nc = (p->policy.extd.nc << 1) | 1;
> - }
> - break;
> - }
> + /* TODO: Expose the ability to choose a custom topology for HVM/PVH */
> + xc_topo_from_parts(&p->policy, 1, di.max_vcpu_id + 1);
It would be better if this was split into two commits. The first commit
would move the code as-is into xc_topo_from_parts() without any
changes, and the second patch would make the functional changes. That
way it's far easier to spot the relevant changes versus pure code
movement.
> }
>
> nr_leaves = ARRAY_SIZE(p->leaves);
> @@ -1028,3 +976,89 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
>
> return false;
> }
> +
> +static uint32_t order(uint32_t n)
> +{
> + return 32 - __builtin_clz(n);
> +}
> +
> +void xc_topo_from_parts(struct cpu_policy *p,
> + uint32_t threads_per_core, uint32_t cores_per_pkg)
> +{
> + uint32_t threads_per_pkg = threads_per_core * cores_per_pkg;
> + uint32_t apic_id_size;
> +
> + if ( p->basic.max_leaf < 0xb )
> + p->basic.max_leaf = 0xb;
> +
> + memset(p->topo.raw, 0, sizeof(p->topo.raw));
> +
> + /* thread level */
> + p->topo.subleaf[0].nr_logical = threads_per_core;
> + p->topo.subleaf[0].id_shift = 0;
> + p->topo.subleaf[0].level = 0;
> + p->topo.subleaf[0].type = 1;
> + if ( threads_per_core > 1 )
> + p->topo.subleaf[0].id_shift = order(threads_per_core - 1);
> +
> + /* core level */
> + p->topo.subleaf[1].nr_logical = cores_per_pkg;
> + if ( p->x86_vendor == X86_VENDOR_INTEL )
> + p->topo.subleaf[1].nr_logical = threads_per_pkg;
> + p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift;
> + p->topo.subleaf[1].level = 1;
> + p->topo.subleaf[1].type = 2;
> + if ( cores_per_pkg > 1 )
> + p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1);
I was kind of expecting this to be part of cpu-policy rather than xc,
as we could then also test it the way you test
x86_x2apic_id_from_vcpu_id().
It could also be used to populate the topologies in the tests
themselves.
Thanks, Roger.