
[PATCH v2 1/3] x86/svm: split svm_intercept_msr() into svm_{set,clear}_msr_intercept()


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
  • Date: Mon, 24 Apr 2023 11:20:36 +0300
  • Cc: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Mon, 24 Apr 2023 08:21:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

This change aims to make the interface for controlling MSR intercepts
identical between the SVM and VMX code, so that MSR intercepts can be
controlled from common code through an hvm_funcs callback.

Create two new functions:
- svm_set_msr_intercept(), which enables interception of read/write accesses
  to the given MSR by setting the corresponding read/write bits in the MSRPM,
  based on the flags
- svm_clear_msr_intercept(), which disables interception of read/write
  accesses to the given MSR by clearing the corresponding read/write bits in
  the MSRPM, based on the flags

More specifically (a standalone sketch of the mapping follows the list):
- if the MSR_R flag is set, the functions {set,clear} the MSRPM bit that
  controls read access to the MSR
- if the MSR_W flag is set, the functions {set,clear} the MSRPM bit that
  controls write access to the MSR
- if the flag is MSR_RW, the functions {set,clear} both MSRPM bits
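
For reference, here is a minimal, self-contained sketch of that mapping
(illustrative only, not Xen code: it models the MSRPM as a flat byte array,
whereas the real functions use svm_msrbit() to locate the MSR's range within
the per-vCPU MSRPM and operate on the returned pointer with
__set_bit()/__clear_bit()):

    /*
     * Toy model: each MSR owns two adjacent bits in the permission map,
     * the even bit intercepts reads and the odd bit intercepts writes.
     * MSR_R/MSR_W/MSR_RW select which of the two bits to touch.
     */
    #include <stdio.h>

    #define MSR_R   (1u << 0)
    #define MSR_W   (1u << 1)
    #define MSR_RW  (MSR_R | MSR_W)

    static unsigned char msrpm[2048];           /* toy permission bitmap */

    static void set_msr_intercept(unsigned int msr, unsigned int flags)
    {
        unsigned int bit = (msr & 0x1fff) * 2;  /* even: read, odd: write */

        if ( flags & MSR_R )
            msrpm[bit / 8] |= 1u << (bit % 8);
        if ( flags & MSR_W )
            msrpm[(bit + 1) / 8] |= 1u << ((bit + 1) % 8);
    }

    static void clear_msr_intercept(unsigned int msr, unsigned int flags)
    {
        unsigned int bit = (msr & 0x1fff) * 2;

        if ( flags & MSR_R )
            msrpm[bit / 8] &= ~(1u << (bit % 8));
        if ( flags & MSR_W )
            msrpm[(bit + 1) / 8] &= ~(1u << ((bit + 1) % 8));
    }

    int main(void)
    {
        set_msr_intercept(0x10, MSR_RW);     /* intercept reads and writes */
        clear_msr_intercept(0x10, MSR_R);    /* ... then let reads through */
        printf("byte 4 = %#x\n", msrpm[4]);  /* only write bit (33) left: 0x2 */
        return 0;
    }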

Place the flag definitions in asm/hvm/hvm.h because they are intended to be
used by the VMX code as well.

Remove svm_intercept_msr() and MSR_INTERCEPT_* definitions, and use the new
functions and flags instead.

The macros svm_{en,dis}able_intercept_for_msr() are retained for now, but
will eventually be open-coded by a follow-up patch, because only one of them
is actually used, and because the meaning of "enabling/disabling" MSR
intercepts is not consistent throughout the code (for instance, the hvm_funcs
callback enable_msr_interception() sets only the write MSRPM bit, not both).
In the meantime, take the opportunity to remove excess parentheses.
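
As an illustration of the asymmetry mentioned above, the write-only
interception that enable_msr_interception() requests maps onto the new
interface as an explicit pair of calls, mirroring the
svm_enable_msr_interception() hunk further down:

    /* Old: svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE); */
    svm_set_msr_intercept(v, msr, MSR_W);     /* intercept guest writes */
    svm_clear_msr_intercept(v, msr, MSR_R);   /* leave guest reads alone */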

No functional change intended.

Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
---

Changes in v2:
  - restore BUG_ON(), reported by Jan
  - coding style fixes, reported by Jan
  - remove excess parentheses from macros, suggested by Jan
  - change from int to unsigned int the type of param flags, reported by Jan
  - change from uint32_t to unsigned int the type of param msr, reported by Jan

 xen/arch/x86/cpu/vpmu_amd.c             |  9 +--
 xen/arch/x86/hvm/svm/svm.c              | 76 ++++++++++++++++---------
 xen/arch/x86/include/asm/hvm/hvm.h      |  4 ++
 xen/arch/x86/include/asm/hvm/svm/vmcb.h | 15 ++---
 4 files changed, 66 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 18266b9521..da8e906972 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -154,8 +154,9 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+        svm_clear_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_W);
+        svm_clear_msr_intercept(v, ctrls[i], MSR_R);
     }
 
     msr_bitmap_on(vpmu);
@@ -168,8 +169,8 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 
     for ( i = 0; i < num_counters; i++ )
     {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, counters[i], MSR_RW);
+        svm_set_msr_intercept(v, ctrls[i], MSR_RW);
     }
 
     msr_bitmap_off(vpmu);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 59a6e88dff..3ee0805ff3 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -277,23 +277,35 @@ svm_msrbit(unsigned long *msr_bitmap, uint32_t msr)
     return msr_bit;
 }
 
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr, unsigned int flags)
 {
-    unsigned long *msr_bit;
-    const struct domain *d = v->domain;
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
 
-    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
     BUG_ON(msr_bit == NULL);
+
     msr &= 0x1fff;
 
-    if ( flags & MSR_INTERCEPT_READ )
+    if ( flags & MSR_R )
          __set_bit(msr * 2, msr_bit);
-    else if ( !monitored_msr(d, msr) )
-         __clear_bit(msr * 2, msr_bit);
-
-    if ( flags & MSR_INTERCEPT_WRITE )
+    if ( flags & MSR_W )
         __set_bit(msr * 2 + 1, msr_bit);
-    else if ( !monitored_msr(d, msr) )
+}
+
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags)
+{
+    unsigned long *msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
+
+    BUG_ON(msr_bit == NULL);
+
+    if ( monitored_msr(v->domain, msr) )
+        return;
+
+    msr &= 0x1fff;
+
+    if ( flags & MSR_R )
+        __clear_bit(msr * 2, msr_bit);
+    if ( flags & MSR_W )
         __clear_bit(msr * 2 + 1, msr_bit);
 }
 
@@ -302,7 +312,10 @@ static void cf_check svm_enable_msr_interception(struct domain *d, uint32_t msr)
     struct vcpu *v;
 
     for_each_vcpu ( d, v )
-        svm_intercept_msr(v, msr, MSR_INTERCEPT_WRITE);
+    {
+        svm_set_msr_intercept(v, msr, MSR_W);
+        svm_clear_msr_intercept(v, msr, MSR_R);
+    }
 }
 
 static void svm_save_dr(struct vcpu *v)
@@ -319,10 +332,10 @@ static void svm_save_dr(struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_set_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -350,10 +363,10 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
 
     if ( v->domain->arch.cpuid->extd.dbext )
     {
-        svm_intercept_msr(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR0_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR1_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_RW);
+        svm_clear_msr_intercept(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_RW);
 
         wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.msrs->dr_mask[0]);
         wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.msrs->dr_mask[1]);
@@ -584,22 +597,29 @@ static void cf_check svm_cpuid_policy_changed(struct vcpu *v)
     vmcb_set_exception_intercepts(vmcb, bitmap);
 
     /* Give access to MSR_SPEC_CTRL if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_SPEC_CTRL,
-                      cp->extd.ibrs ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibrs )
+        svm_clear_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_SPEC_CTRL, MSR_RW);
 
     /*
      * Always trap write accesses to VIRT_SPEC_CTRL in order to cache the guest
      * setting and avoid having to perform a rdmsr on vmexit to get the guest
      * setting even if VIRT_SSBD is offered to Xen itself.
      */
-    svm_intercept_msr(v, MSR_VIRT_SPEC_CTRL,
-                      cp->extd.virt_ssbd && cpu_has_virt_ssbd &&
-                      !cpu_has_amd_ssbd ?
-                      MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW);
+    if ( cp->extd.virt_ssbd && cpu_has_virt_ssbd && !cpu_has_amd_ssbd )
+    {
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_W);
+        svm_clear_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_R);
+    }
+    else
+        svm_set_msr_intercept(v, MSR_VIRT_SPEC_CTRL, MSR_RW);
 
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
-    svm_intercept_msr(v, MSR_PRED_CMD,
-                      cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+    if ( cp->extd.ibpb )
+        svm_clear_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
+    else
+        svm_set_msr_intercept(v, MSR_PRED_CMD, MSR_RW);
 }
 
 void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 04cbd4ff24..5740a64281 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -250,6 +250,10 @@ extern struct hvm_function_table hvm_funcs;
 extern bool_t hvm_enabled;
 extern s8 hvm_port80_allowed;
 
+#define MSR_R       BIT(0, U)
+#define MSR_W       BIT(1, U)
+#define MSR_RW      (MSR_W | MSR_R)
+
 extern const struct hvm_function_table *start_svm(void);
 extern const struct hvm_function_table *start_vmx(void);
 
diff --git a/xen/arch/x86/include/asm/hvm/svm/vmcb.h b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
index a1a8a7fd25..94deb0a236 100644
--- a/xen/arch/x86/include/asm/hvm/svm/vmcb.h
+++ b/xen/arch/x86/include/asm/hvm/svm/vmcb.h
@@ -603,13 +603,14 @@ void svm_destroy_vmcb(struct vcpu *v);
 
 void setup_vmcb_dump(void);
 
-#define MSR_INTERCEPT_NONE    0
-#define MSR_INTERCEPT_READ    1
-#define MSR_INTERCEPT_WRITE   2
-#define MSR_INTERCEPT_RW      (MSR_INTERCEPT_WRITE | MSR_INTERCEPT_READ)
-void svm_intercept_msr(struct vcpu *v, uint32_t msr, int enable);
-#define svm_disable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_NONE)
-#define svm_enable_intercept_for_msr(v, msr) svm_intercept_msr((v), (msr), MSR_INTERCEPT_RW)
+void svm_set_msr_intercept(struct vcpu *v, unsigned int msr,
+                           unsigned int flags);
+void svm_clear_msr_intercept(struct vcpu *v, unsigned int msr,
+                             unsigned int flags);
+#define svm_disable_intercept_for_msr(v, msr) \
+    svm_clear_msr_intercept(v, msr, MSR_RW)
+#define svm_enable_intercept_for_msr(v, msr) \
+    svm_set_msr_intercept(v, msr, MSR_RW)
 
 /*
  * VMCB accessor functions.
-- 
2.34.1




 

