[Xen-devel] [PATCH v2] x86/amd: Avoid cpu_has_hypervisor evaluating true on native hardware
Currently when booting native on AMD hardware, cpuidmask_defaults._1cd gets
configured with the HYPERVISOR bit before native CPUID is scanned for feature
bits.

This results in cpu_has_hypervisor becoming set as part of identify_cpu(), and
ends up appearing in the raw and host CPU policies.

A combination of this bug, and c/s bb502a8ca59 "x86: check feature flags after
resume" which checks that feature bits don't go missing, results in broken S3
on AMD hardware.

Alter amd_init_levelling() to exclude the HYPERVISOR bit from
cpuidmask_defaults, and update domain_cpu_policy_changed() to allow it to be
explicitly forwarded.

This also fixes a bug on kexec, where the hypervisor bit is left enabled for
the new kernel to find.

These changes highlight a further bug - dom0 construction is asymmetric with
domU construction, by not having any calls to domain_cpu_policy_changed().
Extend arch_domain_create() to always call domain_cpu_policy_changed().

Reported-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wl@xxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
CC: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
CC: Claudia <claudia1@xxxxxxxxxxx>

v2:
 * Rewrite the commit message.  No change to the patch content.

Marek/Claudia: Do either of you want a Reported-by tag, seeing as you found a
brand new way that this was broken?
---
 xen/arch/x86/cpu/amd.c       | 3 ---
 xen/arch/x86/domain.c        | 2 ++
 xen/arch/x86/domctl.c        | 9 ++++++++-
 xen/include/asm-x86/domain.h | 2 ++
 4 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index e351dd227f..f95a8e0fd3 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -298,9 +298,6 @@ static void __init noinline amd_init_levelling(void)
 		ecx |= cpufeat_mask(X86_FEATURE_OSXSAVE);
 		edx |= cpufeat_mask(X86_FEATURE_APIC);
 
-		/* Allow the HYPERVISOR bit to be set via guest policy. */
-		ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
-
 		cpuidmask_defaults._1cd = ((uint64_t)ecx << 32) | edx;
 	}
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index f53ae5ff86..12bd554391 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -656,6 +656,8 @@ int arch_domain_create(struct domain *d,
      */
     d->arch.x87_fip_width = cpu_has_fpu_sel ? 0 : 8;
 
+    domain_cpu_policy_changed(d);
+
     return 0;
 
  fail:
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 4fa9c91140..ce76d6d776 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -48,7 +48,7 @@ static int gdbsx_guest_mem_io(domid_t domid, struct xen_domctl_gdbsx_memio *iop)
     }
 #endif
 
-static void domain_cpu_policy_changed(struct domain *d)
+void domain_cpu_policy_changed(struct domain *d)
 {
     const struct cpuid_policy *p = d->arch.cpuid;
     struct vcpu *v;
@@ -106,6 +106,13 @@ static void domain_cpu_policy_changed(struct domain *d)
             ecx = 0;
             edx = cpufeat_mask(X86_FEATURE_APIC);
 
+            /*
+             * If the Hypervisor bit is set in the policy, we can also
+             * forward it into real CPUID.
+             */
+            if ( p->basic.hypervisor )
+                ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
+
             mask |= ((uint64_t)ecx << 32) | edx;
             break;
         }
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index f0c25ffec0..1843c76d1a 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -624,6 +624,8 @@ struct guest_memory_policy
 void update_guest_memory_policy(struct vcpu *v,
                                 struct guest_memory_policy *policy);
 
+void domain_cpu_policy_changed(struct domain *d);
+
 bool update_runstate_area(struct vcpu *);
 bool update_secondary_system_time(struct vcpu *,
                                   struct vcpu_time_info *);
-- 
2.11.0
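[Editor's note: the fix hinges on how Xen packs the leaf-1 ECX/EDX feature
masks into the 64-bit cpuidmask_defaults._1cd value, and on forwarding the
HYPERVISOR bit only when the guest's policy requests it.  The following
minimal, standalone C sketch mirrors that packing.  It is not Xen code:
cpufeat_mask() here takes a raw bit position, and struct cpuid_policy is a
hypothetical stand-in for the relevant field of Xen's policy structure.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Architectural CPUID leaf 1 feature bit positions. */
#define X86_FEATURE_APIC        9   /* EDX bit 9 */
#define X86_FEATURE_HYPERVISOR 31   /* ECX bit 31 */

/* Local stand-in for Xen's cpufeat_mask(); takes a raw bit position. */
#define cpufeat_mask(bit) (1u << (bit))

/* Hypothetical stand-in for the relevant part of struct cpuid_policy. */
struct cpuid_policy {
    bool hypervisor;
};

/*
 * Compose a leaf-1 levelling value in the same ECX:EDX packing as
 * cpuidmask_defaults._1cd.  After the patch, HYPERVISOR is absent from
 * the defaults and only appears when the domain's policy sets it.
 */
static uint64_t leaf1_mask(const struct cpuid_policy *p)
{
    uint32_t ecx = 0;
    uint32_t edx = cpufeat_mask(X86_FEATURE_APIC);

    if ( p->hypervisor )
        ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);

    return ((uint64_t)ecx << 32) | edx;
}

int main(void)
{
    struct cpuid_policy native = { .hypervisor = false };
    struct cpuid_policy guest  = { .hypervisor = true  };

    /* Native boot: bit 63 (HYPERVISOR in the ECX half) stays clear. */
    printf("native: %#018llx\n", (unsigned long long)leaf1_mask(&native));

    /* Guest whose policy sets the bit: bit 63 becomes set. */
    printf("guest:  %#018llx\n", (unsigned long long)leaf1_mask(&guest));

    return 0;
}

The reason excluding the bit from the defaults matters, per the commit
message above, is that the levelled value is in effect when native CPUID is
scanned during identify_cpu(), so a HYPERVISOR bit in the defaults leaks
into the raw and host policies on native boot.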