>From f26c03ebc1b8ad91a61ce07fd5632ea63f158120 Mon Sep 17 00:00:00 2001
From: George Dunlap
Date: Thu, 14 Nov 2019 16:58:34 +0000
Subject: [PATCH] x86: Add hack to disable "Fake HT" mode

Changeset ca2eee92df44 ("x86, hvm: Expose host core/HT topology to HVM
guests") attempted to "fake up" a topology which would induce guest
operating systems to not treat vcpus as sibling hyperthreads.  This
involved (among other things) actually reporting hyperthreading as
available, but giving vcpus every other APICID.  The resulting cpu
featureset is invalid, but most operating systems on most hardware
managed to cope with it.

Unfortunately, Windows running on modern AMD hardware -- including
Ryzen 3xxx series processors, and reportedly EPYC "Rome" cpus -- gets
confused by the resulting contradictory feature bits and crashes
during installation.  (Linux guests have so far continued to cope.)

A "proper" fix is complicated, and it's too late to fix it either for
4.13 or to backport to supported branches.  As a short-term fix,
implement an option to disable this "Fake HT" mode.  The resulting
topology reported will not be canonical, but experimentally continues
to work with Windows guests.

However, disabling this "Fake HT" mode has not been widely tested, and
will almost certainly break migration if applied inconsistently.  To
minimize impact while allowing administrators to disable "Fake HT"
only on guests which are known not to work without it (i.e., Windows
guests) on affected hardware, add an environment variable which can be
set to disable the "Fake HT" mode on such hardware.

Reported-by: Steven Haigh
Reported-by: Andreas Kinzler
Signed-off-by: George Dunlap
---
This has been compile-tested only; I'm posting it early to get
feedback on the approach.

TODO:
- Prevent such guests from being migrated

Open questions:
- Is this the right place to put the `getenv` check?
- Is there any way we can make migration work, at least in some cases?
- Can we check for known-problematic models, and at least report a
  more useful error?

CC: Andrew Cooper
CC: Jan Beulich
CC: Ian Jackson
CC: Anthony Perard
---
 tools/libxc/xc_cpuid_x86.c | 97 +++++++++++++++++++++++---------------
 1 file changed, 58 insertions(+), 39 deletions(-)

diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index 312c481f1e..bc088e45f0 100644
--- a/tools/libxc/xc_cpuid_x86.c
+++ b/tools/libxc/xc_cpuid_x86.c
@@ -579,52 +579,71 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid,
     }
     else
     {
-        /*
-         * Topology for HVM guests is entirely controlled by Xen.  For now, we
-         * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
-         */
-        p->basic.htt = true;
-        p->extd.cmp_legacy = false;
+        if ( !getenv("XEN_LIBXC_DISABLE_FAKEHT") ) {
+            /*
+             * Topology for HVM guests is entirely controlled by Xen.  For now, we
+             * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
+             */
+            p->basic.htt = true;
+            p->extd.cmp_legacy = false;
 
-        /*
-         * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.
-         * Update to reflect vLAPIC_ID = vCPU_ID * 2, but make sure to avoid
-         * overflow.
-         */
-        if ( !(p->basic.lppp & 0x80) )
-            p->basic.lppp *= 2;
+            /*
+             * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.
+             * Update to reflect vLAPIC_ID = vCPU_ID * 2, but make sure to avoid
+             * overflow.
+             */
+            if ( !(p->basic.lppp & 0x80) )
+                p->basic.lppp *= 2;
 
-        switch ( p->x86_vendor )
-        {
-        case X86_VENDOR_INTEL:
-            for ( i = 0; (p->cache.subleaf[i].type &&
-                          i < ARRAY_SIZE(p->cache.raw)); ++i )
+            switch ( p->x86_vendor )
             {
-                p->cache.subleaf[i].cores_per_package =
-                    (p->cache.subleaf[i].cores_per_package << 1) | 1;
-                p->cache.subleaf[i].threads_per_cache = 0;
+            case X86_VENDOR_INTEL:
+                for ( i = 0; (p->cache.subleaf[i].type &&
+                              i < ARRAY_SIZE(p->cache.raw)); ++i )
+                {
+                    p->cache.subleaf[i].cores_per_package =
+                        (p->cache.subleaf[i].cores_per_package << 1) | 1;
+                    p->cache.subleaf[i].threads_per_cache = 0;
+                }
+
+            case X86_VENDOR_AMD:
+            case X86_VENDOR_HYGON:
+                /*
+                 * Leaf 0x80000008 ECX[15:12] is ApicIdCoreSize.
+                 * Leaf 0x80000008 ECX[7:0] is NumberOfCores (minus one).
+                 * Update to reflect vLAPIC_ID = vCPU_ID * 2.  But avoid
+                 * - overflow,
+                 * - going out of sync with leaf 1 EBX[23:16],
+                 * - incrementing ApicIdCoreSize when it's zero (which changes the
+                 *   meaning of bits 7:0).
+                 */
+                if ( p->extd.nc < 0x7f )
+                {
+                    if ( p->extd.apic_id_size != 0 && p->extd.apic_id_size != 0xf )
+                        p->extd.apic_id_size++;
+
+                    p->extd.nc = (p->extd.nc << 1) | 1;
+                }
+                break;
+            }
-            break;
+        }
+        else
+        {
+            p->basic.htt = false;
+            p->extd.cmp_legacy = false;
 
-        case X86_VENDOR_AMD:
-        case X86_VENDOR_HYGON:
-            /*
-             * Leaf 0x80000008 ECX[15:12] is ApicIdCoreSize.
-             * Leaf 0x80000008 ECX[7:0] is NumberOfCores (minus one).
-             * Update to reflect vLAPIC_ID = vCPU_ID * 2.  But avoid
-             * - overflow,
-             * - going out of sync with leaf 1 EBX[23:16],
-             * - incrementing ApicIdCoreSize when it's zero (which changes the
-             *   meaning of bits 7:0).
-             */
-            if ( p->extd.nc < 0x7f )
+            switch ( p->x86_vendor )
             {
-                if ( p->extd.apic_id_size != 0 && p->extd.apic_id_size != 0xf )
-                    p->extd.apic_id_size++;
-
-                p->extd.nc = (p->extd.nc << 1) | 1;
+            case X86_VENDOR_INTEL:
+                for ( i = 0; (p->cache.subleaf[i].type &&
+                              i < ARRAY_SIZE(p->cache.raw)); ++i )
+                {
+                    p->cache.subleaf[i].cores_per_package = 0;
+                    p->cache.subleaf[i].threads_per_cache = 0;
+                }
+                break;
+            }
-            break;
         }
 
         /*
-- 
2.24.0