
[Xen-changelog] [xen staging-4.12] x86/msr: Fix handling of MSR_AMD_PATCHLEVEL/MSR_IA32_UCODE_REV



commit 3be3d9da4024c5155b293369d52e011efec931cb
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Fri Jul 19 16:09:30 2019 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Jul 19 16:09:30 2019 +0200

    x86/msr: Fix handling of MSR_AMD_PATCHLEVEL/MSR_IA32_UCODE_REV
    
    There are a number of bugs.  There are no read/write hooks on the HVM
    side, so guest accesses fall into the "read/write-discard" defaults,
    which bypass the correct faulting behaviour and the Intel special case.
    
    For the PV side, writes are discarded (again, bypassing proper faulting),
    except for a pinned dom0, which is permitted to actually write values
    other than 0.  This is pointless with the read hook implementing the
    Intel special case.
    
    However, implementing the Intel special case is itself pointless.  First
    of all, OS software can't guarantee to read back 0 in the first place,
    because a) this behaviour isn't guaranteed in the SDM, and b) there are
    SMM handlers which use the CPUID instruction.  Secondly, when a guest
    executes CPUID, this doesn't typically result in Xen executing a CPUID
    instruction in practice.
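
    For reference, a minimal sketch (not part of the patch) of the Intel
    sequence referred to above, as the removed PV read path below implemented
    it: write 0 to the MSR, execute CPUID leaf 1, then read the revision from
    the MSR's upper 32 bits.  The wrmsr/rdmsr helpers are illustrative
    inline-asm stand-ins rather than Xen's, and the code must run at CPL0:

        #include <stdint.h>

        #define MSR_IA32_UCODE_REV 0x0000008b   /* aka MSR_AMD_PATCHLEVEL */

        static inline void wrmsr(uint32_t msr, uint64_t val)
        {
            asm volatile ( "wrmsr" :: "c" (msr), "a" ((uint32_t)val),
                           "d" ((uint32_t)(val >> 32)) );
        }

        static inline uint64_t rdmsr(uint32_t msr)
        {
            uint32_t lo, hi;

            asm volatile ( "rdmsr" : "=a" (lo), "=d" (hi) : "c" (msr) );
            return ((uint64_t)hi << 32) | lo;
        }

        static inline uint32_t intel_ucode_rev(void)
        {
            uint32_t eax = 1, ebx, ecx = 0, edx;

            wrmsr(MSR_IA32_UCODE_REV, 0);           /* 1. Write 0 to the MSR.   */
            asm volatile ( "cpuid"                  /* 2. Execute CPUID leaf 1. */
                           : "+a" (eax), "=b" (ebx), "+c" (ecx), "=d" (edx) );
            return rdmsr(MSR_IA32_UCODE_REV) >> 32; /* 3. Revision in bits 63:32. */
        }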
    
    With the dom0 special case removed, there are now no writes to this MSR
    other than from Xen's microcode loading facilities, which means that the
    value held in the MSR will be properly up-to-date.  Forward it directly,
    without jumping through any hoops.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 013896cb8b2f070dc452bd1b91fc5b842a538367
    master date: 2019-04-05 11:09:08 +0100
---
 xen/arch/x86/msr.c             | 35 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/pv/emul-priv-op.c | 22 ----------------------
 2 files changed, 35 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index a7f67d97e6..56de0fe9e1 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -135,6 +135,28 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
         /* Not offered to guests. */
         goto gp_fault;
 
+    case MSR_AMD_PATCHLEVEL:
+        BUILD_BUG_ON(MSR_IA32_UCODE_REV != MSR_AMD_PATCHLEVEL);
+        /*
+         * AMD and Intel use the same MSR for the current microcode version.
+         *
+         * There is no need to jump through the SDM-provided hoops for Intel.
+         * A guest might itself perform the "write 0, CPUID, read" sequence,
+         * but servicing the CPUID for the guest typically won't result in
+         * actually executing a CPUID instruction.
+         *
+         * As a guest can't influence the value of this MSR, the value will be
+         * from Xen's last microcode load, which can be forwarded straight to
+         * the guest.
+         */
+        if ( (cp->x86_vendor != X86_VENDOR_INTEL &&
+              cp->x86_vendor != X86_VENDOR_AMD) ||
+             (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL &&
+              boot_cpu_data.x86_vendor != X86_VENDOR_AMD) ||
+             rdmsr_safe(MSR_AMD_PATCHLEVEL, *val) )
+            goto gp_fault;
+        break;
+
     case MSR_SPEC_CTRL:
         if ( !cp->feat.ibrsb )
             goto gp_fault;
@@ -241,6 +263,19 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         /* Not offered to guests. */
         goto gp_fault;
 
+    case MSR_AMD_PATCHLEVEL:
+        BUILD_BUG_ON(MSR_IA32_UCODE_REV != MSR_AMD_PATCHLEVEL);
+        /*
+         * AMD and Intel use the same MSR for the current microcode version.
+         *
+         * Both document it as read-only.  However Intel also document that,
+         * for backwards compatibility, the OS should write 0 to it before
+         * trying to access the current microcode version.
+         */
+        if ( d->arch.cpuid->x86_vendor != X86_VENDOR_INTEL || val != 0 )
+            goto gp_fault;
+        break;
+
     case MSR_AMD_PATCHLOADER:
         /*
          * See note on MSR_IA32_UCODE_WRITE below, which may or may not apply
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 78b42340c5..8f93b4e9df 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -893,17 +893,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
         *val = 0;
         return X86EMUL_OKAY;
 
-    case MSR_IA32_UCODE_REV:
-        BUILD_BUG_ON(MSR_IA32_UCODE_REV != MSR_AMD_PATCHLEVEL);
-        if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
-        {
-            if ( wrmsr_safe(MSR_IA32_UCODE_REV, 0) )
-                break;
-            /* As documented in the SDM: Do a CPUID 1 here */
-            cpuid_eax(1);
-        }
-        goto normal;
-
     case MSR_FAM10H_MMIO_CONF_BASE:
         if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD ||
              boot_cpu_data.x86 < 0x10 || boot_cpu_data.x86 >= 0x17 )
@@ -1055,17 +1044,6 @@ static int write_msr(unsigned int reg, uint64_t val,
             return X86EMUL_OKAY;
         break;
 
-    case MSR_IA32_UCODE_REV:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
-            break;
-        if ( !is_hardware_domain(currd) || !is_pinned_vcpu(curr) )
-            return X86EMUL_OKAY;
-        if ( rdmsr_safe(reg, temp) )
-            break;
-        if ( val )
-            goto invalid;
-        return X86EMUL_OKAY;
-
     case MSR_IA32_MISC_ENABLE:
         rdmsrl(reg, temp);
         if ( val != guest_misc_enable(temp) )
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.12
