
RE: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition



(Sorry for webmail).

The forthcoming hotfix on Win10/Server2019 (Build 20270) runs into serious
problems without these two fixes, and never starts secondary processors.

~Andrew

-----Original Message-----
From: Paul Durrant <xadimgnik@xxxxxxxxx> 
Sent: Friday, January 8, 2021 8:32 AM
To: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
Cc: wl@xxxxxxx; iwj@xxxxxxxxxxxxxx; Anthony Perard <anthony.perard@xxxxxxxxxx>; 
Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap 
<George.Dunlap@xxxxxxxxxx>; jbeulich@xxxxxxxx; julien@xxxxxxx; 
sstabellini@xxxxxxxxxx; Roger Pau Monne <roger.pau@xxxxxxxxxx>
Subject: RE: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

> -----Original Message-----
> From: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
> Sent: 08 January 2021 00:47
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: paul@xxxxxxx; wl@xxxxxxx; iwj@xxxxxxxxxxxxxx; 
> anthony.perard@xxxxxxxxxx; andrew.cooper3@xxxxxxxxxx; 
> george.dunlap@xxxxxxxxxx; jbeulich@xxxxxxxx; julien@xxxxxxx; 
> sstabellini@xxxxxxxxxx; roger.pau@xxxxxxxxxx; Igor Druzhinin 
> <igor.druzhinin@xxxxxxxxxx>
> Subject: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per 
> partition
> 
> TLFS 7.8.1 stipulates that "a virtual processor index must be less
> than the maximum number of virtual processors per partition" that "can
> be obtained through CPUID leaf 0x40000005". Furthermore, "Requirements
> for Implementing the Microsoft Hypervisor Interface" specifies that,
> starting from Windows Server 2012 (which allowed more than 64 CPUs to
> be brought up), this leaf may contain a value of -1, meaning the
> hypervisor imposes no restriction, while 0 (what we currently expose)
> means the default restriction is still in place.
> 
> Along with the previous changes exposing ExProcessorMasks, this allows a
> recent Windows VM with the Viridian extension enabled to boot with more
> than 64 vCPUs without an immediate BSOD.
> 
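The 0 / -1 semantics described above can be sketched as a small guest-side helper. This is an illustrative interpretation only (the function and constant names are hypothetical, not code from Xen or Windows):

```c
#include <stdint.h>

#define HV_DEFAULT_VP_LIMIT 64u        /* pre-2012 Windows cap */
#define HV_NO_VP_LIMIT      UINT32_MAX /* -1 in CPUID.40000005.EAX */

/*
 * Hypothetical helper: map the raw CPUID.40000005.EAX value to an
 * effective VP limit, per the TLFS semantics quoted above:
 *  - 0  -> the leaf exposes no limit information, so the default
 *          64-VP restriction applies;
 *  - -1 -> the hypervisor imposes no specific limit;
 *  - anything else is the advertised maximum VP count.
 */
static uint32_t effective_vp_limit(uint32_t leaf_40000005_eax)
{
    if (leaf_40000005_eax == 0)
        return HV_DEFAULT_VP_LIMIT;
    if (leaf_40000005_eax == HV_NO_VP_LIMIT)
        return UINT32_MAX; /* effectively unbounded */
    return leaf_40000005_eax;
}
```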

This is very odd as I was happily testing with a 128 vCPU VM once
ExProcessorMasks was implemented... no need for the extra leaf.
The documentation for 0x40000005 states "Describes the scale limits supported
in the current hypervisor implementation. If any value is zero, the hypervisor
does not expose the corresponding information". It does not say (in section
7.8.1 or elsewhere, AFAICT) what implications that has for VP_INDEX.

In "Requirements for Implementing the Microsoft Hypervisor Interface" I don't
see anything saying what the semantics of not implementing leaf 0x40000005 are,
only that, if implementing it, -1 must be used to break the 64 VP limit. It
also says that reporting -1 in 0x40000005 means that 0x40000004.EAX bits 1 and 2
*must* be clear, which is clearly not true if ExProcessorMasks is enabled. That
document is dated June 13th 2012, so I think it should be taken with a pinch of
salt.

Have you actually observed a BSOD with >64 vCPUs when ExProcessorMasks is 
enabled? If so, which version of Windows? I'd like to get a repro myself.

  Paul

> Since we didn't expose the leaf before, and to keep CPUID data
> consistent for incoming streams from previous Xen versions, let's keep it
> behind an option.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
> ---
>  tools/libs/light/libxl_x86.c         |  2 +-
>  xen/arch/x86/hvm/viridian/viridian.c | 23 +++++++++++++++++++++++
>  xen/include/public/hvm/params.h      |  7 ++++++-
>  3 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
> index 86d2729..96c8bf1 100644
> --- a/tools/libs/light/libxl_x86.c
> +++ b/tools/libs/light/libxl_x86.c
> @@ -336,7 +336,7 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
>          LOG(DETAIL, "%s group enabled", libxl_viridian_enlightenment_to_string(v));
> 
>      if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_BASE)) {
> -        mask |= HVMPV_base_freq;
> +        mask |= HVMPV_base_freq | HVMPV_no_vp_limit;
> 
>          if (!libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_FREQ))
>              mask |= HVMPV_no_freq;
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index ed97804..ae1ea86 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -209,6 +209,29 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
>          res->b = viridian_spinlock_retry_count;
>          break;
> 
> +    case 5:
> +        /*
> +         * From "Requirements for Implementing the Microsoft Hypervisor
> +         *  Interface":
> +         *
> +         * "On Windows operating systems versions through Windows Server
> +         * 2008 R2, reporting the HV#1 hypervisor interface limits
> +         * the Windows virtual machine to a maximum of 64 VPs, regardless of
> +         * what is reported via CPUID.40000005.EAX.
> +         *
> +         * Starting with Windows Server 2012 and Windows 8, if
> +         * CPUID.40000005.EAX contains a value of -1, Windows assumes that
> +         * the hypervisor imposes no specific limit to the number of VPs.
> +         * In this case, Windows Server 2012 guest VMs may use more than 64
> +         * VPs, up to the maximum supported number of processors applicable
> +         * to the specific Windows version being used."
> +         *
> +         * For compatibility we hide it behind an option.
> +         */
> +        if ( viridian_feature_mask(d) & HVMPV_no_vp_limit )
> +            res->a = -1;
> +        break;
> +
>      case 6:
>          /* Detected and in use hardware features. */
>          if ( cpu_has_vmx_virtualize_apic_accesses )
> diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> index 3b0a0f4..805f4ca 100644
> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -168,6 +168,10 @@
>  #define _HVMPV_ex_processor_masks 10
>  #define HVMPV_ex_processor_masks (1 << _HVMPV_ex_processor_masks)
> 
> +/* Allow more than 64 VPs */
> +#define _HVMPV_no_vp_limit 11
> +#define HVMPV_no_vp_limit (1 << _HVMPV_no_vp_limit)
> +
>  #define HVMPV_feature_mask \
>          (HVMPV_base_freq | \
>           HVMPV_no_freq | \
> @@ -179,7 +183,8 @@
>           HVMPV_synic | \
>           HVMPV_stimer | \
>           HVMPV_hcall_ipi | \
> -         HVMPV_ex_processor_masks)
> +         HVMPV_ex_processor_masks | \
> +         HVMPV_no_vp_limit)
> 
>  #endif
> 
> --
> 2.7.4





 

