 
	
Re: [PATCH] x86/cpuid: prevent decreasing of hypervisor max leaf on migration
 On 27.10.2021 16:00, Roger Pau Monne wrote:
> In order to be compatible with previous Xen versions, and to avoid
> changing the maximum hypervisor leaf as a result of a migration, keep
> clamping the toolstack-provided maximum leaf value to
> XEN_CPUID_MAX_NUM_LEAVES, instead of doing it based on the domain
> type. Also set the default maximum leaf without taking the domain
> type into account. The maximum hypervisor leaf is not migrated, so
> the default must not regress below what existing Xen versions may
> already have reported to a guest.
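
(Purely for context: what this amounts to, in a minimal standalone
sketch with a made-up helper name rather than the actual code in
cpuid_hypervisor_leaves(), is that the default and the clamp bound
are now the same value for PV and HVM alike.)

    /* Sketch only - not the real implementation; mirrors the hunk
     * further down. Effective hypervisor-leaf limit, identical for
     * PV and HVM guests. */
    #define XEN_CPUID_MAX_NUM_LEAVES 5  /* as in the public header below */

    unsigned int effective_hv_limit(unsigned int toolstack_limit)
    {
        if ( toolstack_limit == 0 )
            return XEN_CPUID_MAX_NUM_LEAVES; /* default: 5, as older Xen reported */
        if ( toolstack_limit < 2 )
            return 2;                        /* lower clamp */
        if ( toolstack_limit > XEN_CPUID_MAX_NUM_LEAVES )
            return XEN_CPUID_MAX_NUM_LEAVES; /* upper clamp */
        return toolstack_limit;
    }
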
While the above is the missing description for the patch I had
submitted back in May at Andrew's request, I have to admit that I
don't consider it a satisfactory explanation. Shouldn't hypervisor
leaves undergo similar leveling to other leaves, i.e. upon migration
leaves or individual bits should neither disappear nor appear?
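That is (purely hypothetical, with made-up names - this is not
existing code), I would have expected the limit to travel with the
migrated policy and merely be bounded by what the destination host
can offer, rather than being recomputed from the destination's
default:

    /* Hypothetical sketch only: treat the hypervisor max-leaf limit
     * like other leveled CPUID state - keep the incoming
     * (source-side) value, bounding it by what this host provides. */
    unsigned int leveled_hv_limit(unsigned int incoming_limit,
                                  unsigned int host_max_leaves)
    {
        return incoming_limit < host_max_leaves ? incoming_limit
                                                : host_max_leaves;
    }
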
I continue to consider it at least suspicious that HVM guests get
5 leaves reported when only 4 are really meaningful to them. I see
this has gone in, so I'm likely to trip up on this again in the
future. That might result in the same patch again then, if I don't
end up doing archeology at that point ...
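
As an aside regarding the first hunk further down: the "+ 0u" is
presumably only there to keep min()'s operands the same type - the
type-checking min()/max() constructs would otherwise be handed an
unsigned int on one side and the plain int the bare macro now expands
to on the other. Roughly the idiom involved (a sketch, not quoted
from the tree):

    /* Sketch of a type-strict min(): the (void)(&_x == &_y)
     * comparison makes the compiler warn when the two operands'
     * types differ. */
    #define MIN_SAME_TYPE(x, y) ({             \
        typeof(x) _x = (x);                    \
        typeof(y) _y = (y);                    \
        (void)(&_x == &_y);  /* type check */  \
        _x < _y ? _x : _y;                     \
    })
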
Jan
> This is a partial revert of 540d911c28; it restores the previous
> behaviour and ensures that HVM guests won't see their maximum
> hypervisor leaf reduced from 5 to 4 as a result of a migration.
> 
> Fixes: 540d911c28 ('x86/CPUID: shrink max_{,sub}leaf fields according to actual leaf contents')
> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Cc: Ian Jackson <iwj@xxxxxxxxxxxxxx>
> ---
> Regarding release risks:
> 
> This is a partial revert of a commit.  The main risks are that a
> partial revert could break the build or leave the remaining code in
> a non-working state.
> 
> Breaking the build will be easily discovered by our automated testing,
> while leaving the remaining code in a broken state is unlikely, as the
> chunks reverted are isolated from the rest of the change in
> 540d911c28.
> ---
>  xen/arch/x86/traps.c                | 6 ++----
>  xen/include/public/arch-x86/cpuid.h | 6 +-----
>  2 files changed, 3 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index a1c2adb7ad..79fd276a41 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1086,15 +1086,13 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>      uint32_t base = is_viridian_domain(d) ? 0x40000100 : 0x40000000;
>      uint32_t idx  = leaf - base;
>      unsigned int limit = is_viridian_domain(d) ? p->hv2_limit : p->hv_limit;
> -    unsigned int dflt = is_pv_domain(d) ? XEN_CPUID_MAX_PV_NUM_LEAVES
> -                                        : XEN_CPUID_MAX_HVM_NUM_LEAVES;
>  
>      if ( limit == 0 )
>          /* Default number of leaves */
> -        limit = dflt;
> +        limit = XEN_CPUID_MAX_NUM_LEAVES;
>      else
>          /* Clamp toolstack value between 2 and MAX_NUM_LEAVES. */
> -        limit = min(max(limit, 2u), dflt);
> +        limit = min(max(limit, 2u), XEN_CPUID_MAX_NUM_LEAVES + 0u);
>  
>      if ( idx > limit )
>          return;
> diff --git a/xen/include/public/arch-x86/cpuid.h b/xen/include/public/arch-x86/cpuid.h
> index 00926b1fef..ce46305bee 100644
> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -113,10 +113,6 @@
>  /* Max. address width in bits taking memory hotplug into account. */
>  #define XEN_CPUID_MACHINE_ADDRESS_WIDTH_MASK (0xffu << 0)
>  
> -#define XEN_CPUID_MAX_PV_NUM_LEAVES 5
> -#define XEN_CPUID_MAX_HVM_NUM_LEAVES 4
> -#define XEN_CPUID_MAX_NUM_LEAVES \
> -    (XEN_CPUID_MAX_PV_NUM_LEAVES > XEN_CPUID_MAX_HVM_NUM_LEAVES ? \
> -     XEN_CPUID_MAX_PV_NUM_LEAVES : XEN_CPUID_MAX_HVM_NUM_LEAVES)
> +#define XEN_CPUID_MAX_NUM_LEAVES 5
>  
>  #endif /* __XEN_PUBLIC_ARCH_X86_CPUID_H__ */
> 
 