
[Xen-changelog] [xen stable-4.11] x86/hvm: Fix bit checking for CR4 and MSR_EFER



commit 850ca94004e375b4f9e04a0a148ebad7685effcc
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Fri Feb 1 11:36:17 2019 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Feb 1 11:36:17 2019 +0100

    x86/hvm: Fix bit checking for CR4 and MSR_EFER
    
    Before the cpuid_policy logic came along, %cr4/EFER auditing on migrate-in was
    complicated, because at that point no CPUID information had been set for the
    guest.  Auditing against the host CPUID was better than nothing, but not
    ideal.
    
    Similarly, at the time PVHv1 lacked the "CPUID passed through from hardware"
    behaviour that PV guests had, and PVH dom0 had to be special-cased to be able
    to boot.
    
    Order of information in the migration stream is still an issue (hence we still
    need to keep the restore parameter to cope with a nested virt corner case for
    %cr4), but since Xen 4.9, all domains start with a suitable CPUID policy,
    which is a more appropriate upper bound than host_cpuid_policy.
    
    Finally, reposition the UMIP logic as it is the only row out of order.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    master commit: 9d8c1d1814b744d0fb41085463db5d8ae025607e
    master date: 2019-01-29 11:28:11 +0000
---
 xen/arch/x86/hvm/hvm.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2a34747a09..7a64368085 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -900,12 +900,7 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
                            signed int cr0_pg)
 {
     const struct domain *d = v->domain;
-    const struct cpuid_policy *p;
-
-    if ( cr0_pg < 0 && !is_hardware_domain(d) )
-        p = d->arch.cpuid;
-    else
-        p = &host_cpuid_policy;
+    const struct cpuid_policy *p = d->arch.cpuid;
 
     if ( value & ~EFER_KNOWN_MASK )
         return "Unknown bits set";
@@ -945,14 +940,9 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
 /* These bits in CR4 can be set by the guest. */
 unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore)
 {
-    const struct cpuid_policy *p;
+    const struct cpuid_policy *p = d->arch.cpuid;
     bool mce, vmxe;
 
-    if ( !restore && !is_hardware_domain(d) )
-        p = d->arch.cpuid;
-    else
-        p = &host_cpuid_policy;
-
     /* Logic broken out simply to aid readability below. */
     mce  = p->basic.mce || p->basic.mca;
     vmxe = p->basic.vmx && (restore || nestedhvm_enabled(d));
@@ -967,13 +957,13 @@ unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore)
                                 X86_CR4_PCE                    |
             (p->basic.fxsr    ? X86_CR4_OSFXSR            : 0) |
             (p->basic.sse     ? X86_CR4_OSXMMEXCPT        : 0) |
+            (p->feat.umip     ? X86_CR4_UMIP              : 0) |
             (vmxe             ? X86_CR4_VMXE              : 0) |
             (p->feat.fsgsbase ? X86_CR4_FSGSBASE          : 0) |
             (p->basic.pcid    ? X86_CR4_PCIDE             : 0) |
             (p->basic.xsave   ? X86_CR4_OSXSAVE           : 0) |
             (p->feat.smep     ? X86_CR4_SMEP              : 0) |
             (p->feat.smap     ? X86_CR4_SMAP              : 0) |
-            (p->feat.umip     ? X86_CR4_UMIP              : 0) |
             (p->feat.pku      ? X86_CR4_PKE               : 0));
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.11
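
[Editor's note] A standalone C sketch (not part of the patch, and not Xen's real
definitions) of the pattern the change relies on: build an upper bound of
permissible CR4 bits from the domain's own feature policy, then reject any
incoming value with bits outside that bound.  The toy_policy struct, the CR4_*
constants and cr4_valid_bits() below are simplified, hypothetical stand-ins for
Xen's cpuid_policy, X86_CR4_* and hvm_cr4_guest_valid_bits().

/*
 * Illustrative only: derive a CR4 validity mask from a per-domain feature
 * policy, then audit an incoming value (e.g. from a migration stream).
 */
#include <stdbool.h>
#include <stdio.h>

#define CR4_VME        (1ul << 0)
#define CR4_OSFXSR     (1ul << 9)
#define CR4_UMIP       (1ul << 11)
#define CR4_SMEP       (1ul << 20)

struct toy_policy {
    bool vme, fxsr, umip, smep;   /* features the domain is allowed to see */
};

/* Compose the set of CR4 bits this domain's guest may set. */
static unsigned long cr4_valid_bits(const struct toy_policy *p)
{
    return (p->vme  ? CR4_VME    : 0) |
           (p->fxsr ? CR4_OSFXSR : 0) |
           (p->umip ? CR4_UMIP   : 0) |
           (p->smep ? CR4_SMEP   : 0);
}

int main(void)
{
    /* A policy without SMEP; an incoming CR4 value that tries to set it. */
    struct toy_policy guest = { .vme = true, .fxsr = true };
    unsigned long incoming = CR4_OSFXSR | CR4_SMEP;

    if ( incoming & ~cr4_valid_bits(&guest) )
        puts("Rejected: CR4 bits not permitted by the domain's policy");
    else
        puts("Accepted");

    return 0;
}

The same shape applies to EFER in the patch: hvm_efer_valid() now audits the
incoming value against a mask derived from the domain's own cpuid_policy
(d->arch.cpuid) rather than host_cpuid_policy.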

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog