
Re: [XEN][PATCH] x86/hvm: move hvm_shadow_handle_cd() under CONFIG_INTEL_VMX ifdef


  • To: Teddy Astie <teddy.astie@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
  • Date: Tue, 28 Oct 2025 14:43:24 +0200
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Jason Andryuk <Jason.Andryuk@xxxxxxx>
  • Delivery-date: Tue, 28 Oct 2025 12:46:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>



On 25.10.25 21:10, Teddy Astie wrote:
On 23/10/2025 at 17:22, Grygorii Strashko wrote:
From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>

Functions:
   hvm_shadow_handle_cd()
   hvm_set_uc_mode()
   domain_exit_uc_mode()
are used only by Intel VMX code, so move them under CONFIG_INTEL_VMX ifdef.


If they are actually Intel VMX specific, they should rather be moved to
VMX code (and named appropriately) rather than if-defed in shared hvm
code. If AMD code happens to need these functions in the future, it
would make things break pretty confusingly (as headers are not updated
consistently).

I agree, and I like that approach even better. I can try it, if there are no objections?

There is also hvm_prepare_vm86_tss(), which is likewise used only by VMX code.


Signed-off-by: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
---
   xen/arch/x86/hvm/hvm.c | 50 ++++++++++++++++++++++--------------------
   1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f1035fc9f645..3a30404d9940 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2168,30 +2168,6 @@ int hvm_set_efer(uint64_t value)
       return X86EMUL_OKAY;
   }
-/* Exit UC mode only if all VCPUs agree on MTRR/PAT and are not in no_fill. */
-static bool domain_exit_uc_mode(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    struct vcpu *vs;
-
-    for_each_vcpu ( d, vs )
-    {
-        if ( (vs == v) || !vs->is_initialised )
-            continue;
-        if ( (vs->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) ||
-             mtrr_pat_not_equal(vs, v) )
-            return 0;
-    }
-
-    return 1;
-}
-
-static void hvm_set_uc_mode(struct vcpu *v, bool is_in_uc_mode)
-{
-    v->domain->arch.hvm.is_in_uc_mode = is_in_uc_mode;
-    shadow_blow_tables_per_domain(v->domain);
-}
-
   int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
   {
       struct vcpu *curr = current;
@@ -2273,6 +2249,31 @@ int hvm_mov_from_cr(unsigned int cr, unsigned int gpr)
       return X86EMUL_UNHANDLEABLE;
   }
+#ifdef CONFIG_INTEL_VMX
+/* Exit UC mode only if all VCPUs agree on MTRR/PAT and are not in no_fill. */
+static bool domain_exit_uc_mode(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct vcpu *vs;
+
+    for_each_vcpu ( d, vs )
+    {
+        if ( (vs == v) || !vs->is_initialised )
+            continue;
+        if ( (vs->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) ||
+             mtrr_pat_not_equal(vs, v) )
+            return 0;
+    }
+
+    return 1;
+}
+
+static void hvm_set_uc_mode(struct vcpu *v, bool is_in_uc_mode)
+{
+    v->domain->arch.hvm.is_in_uc_mode = is_in_uc_mode;
+    shadow_blow_tables_per_domain(v->domain);
+}
+
   void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
   {
       if ( value & X86_CR0_CD )
@@ -2306,6 +2307,7 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
           spin_unlock(&v->domain->arch.hvm.uc_lock);
       }
   }
+#endif
static void hvm_update_cr(struct vcpu *v, unsigned int cr, unsigned long value)
   {

--
Best regards,
-grygorii




 

