
Re: [XEN][PATCH] xen/x86: guest_access: optimize raw_x_guest() for PV and HVM combinations


  • To: Teddy Astie <teddy.astie@xxxxxxxxxx>, Jason Andryuk <jason.andryuk@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
  • Date: Thu, 6 Nov 2025 19:40:03 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Alejandro Vallejo <alejandro.garciavallejo@xxxxxxx>
  • Delivery-date: Thu, 06 Nov 2025 17:40:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>



On 06.11.25 19:27, Teddy Astie wrote:
On 06/11/2025 at 18:00, Jason Andryuk wrote:
On 2025-11-06 11:33, Grygorii Strashko wrote:
Hi Teddy, Jan,

On 06.11.25 17:57, Teddy Astie wrote:
On 31/10/2025 at 22:25, Grygorii Strashko wrote:
Can try.

Yes, I was thinking something like Teddy suggested:

#define raw_copy_to_guest(dst, src, len)                            \
          (is_hvm_vcpu(current) ? copy_to_user_hvm(dst, src, len) : \
           is_pv_vcpu(current) ? copy_to_guest_pv(dst, src, len) :  \
           fail_copy(dst, src, len))

But that made me think that is_{hvm,pv}_{vcpu,domain}() could be
optimized for when only one of HVM or PV is enabled.

Regards,
Jason

xen: Optimize is_hvm/pv_domain() for single domain type

is_hvm_domain() and is_pv_domain() hardcode the false result for
HVM=n and PV=n respectively, but they still check the
XEN_DOMCTL_CDF_hvm flag.  When only one of PV or HVM is enabled, the
result can be hardcoded, since the other type is impossible.  Notably,
this removes the evaluate_nospec() lfences.

Signed-off-by: Jason Andryuk <jason.andryuk@xxxxxxx>
---
Untested.

HVM=y PV=n bloat-o-meter:

add/remove: 3/6 grow/shrink: 19/212 up/down: 3060/-60349 (-57289)

Full bloat-o-meter below.
---
   xen/include/xen/sched.h | 18 ++++++++++++++----
   1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index f680fb4fa1..12f10d9cc8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1176,8 +1176,13 @@ static always_inline bool is_hypercall_target(const struct domain *d)

   static always_inline bool is_pv_domain(const struct domain *d)
   {
-    return IS_ENABLED(CONFIG_PV) &&
-        evaluate_nospec(!(d->options & XEN_DOMCTL_CDF_hvm));
+    if ( !IS_ENABLED(CONFIG_PV) )
+        return false;
+
+    if ( IS_ENABLED(CONFIG_HVM) )
+        return evaluate_nospec(!(d->options & XEN_DOMCTL_CDF_hvm));
+
+    return true;
   }

   static always_inline bool is_pv_vcpu(const struct vcpu *v)
@@ -1218,8 +1223,13 @@ static always_inline bool is_pv_64bit_vcpu(const struct vcpu *v)

   static always_inline bool is_hvm_domain(const struct domain *d)
   {
-    return IS_ENABLED(CONFIG_HVM) &&
-        evaluate_nospec(d->options & XEN_DOMCTL_CDF_hvm);
+    if ( !IS_ENABLED(CONFIG_HVM) )
+        return false;
+
+    if ( IS_ENABLED(CONFIG_PV) )
+        return evaluate_nospec(d->options & XEN_DOMCTL_CDF_hvm);
+
+    return true;
   }

   static always_inline bool is_hvm_vcpu(const struct vcpu *v)

While I like the idea, it may slightly impact some logic, as the special
domains (dom_xen and dom_io) are now considered HVM domains (when !PV &&
HVM) instead of "neither PV nor HVM".
We should at least make sure we're not silently breaking something
elsewhere.

First of all, there is the idle domain. I've tried to constify
is_hvm_domain() and even made it work, but the diff is very fragile.

Diff below - just FYI.

--
Best regards,
-grygorii

Author: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
Date:   Fri Oct 17 17:21:29 2025 +0300

    HACK: hvm only

    Signed-off-by: Grygorii Strashko <grygorii_strashko@xxxxxxxx>

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index d65c2bd3661f..2ea3d81670de 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -567,17 +567,17 @@ int arch_vcpu_create(struct vcpu *v)
 
     spin_lock_init(&v->arch.vpmu.vpmu_lock);
 
-    if ( is_hvm_domain(d) )
-        rc = hvm_vcpu_initialise(v);
-    else if ( !is_idle_domain(d) )
-        rc = pv_vcpu_initialise(v);
-    else
+    if ( is_idle_domain(d) )
     {
         /* Idle domain */
         v->arch.cr3 = __pa(idle_pg_table);
         rc = 0;
         v->arch.msrs = ZERO_BLOCK_PTR; /* Catch stray misuses */
     }
+    else if ( is_hvm_domain(d) )
+        rc = hvm_vcpu_initialise(v);
+    else
+        rc = pv_vcpu_initialise(v);
 
     if ( rc )
         goto fail;
@@ -2123,7 +2123,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     vpmu_switch_from(prev);
     np2m_schedule(NP2M_SCHEDLE_OUT);
 
-    if ( is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm.tm_list) )
+    if ( !is_idle_domain(prevd) && is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm.tm_list) )
         pt_save_timer(prev);
 
     local_irq_disable();
diff --git a/xen/arch/x86/hvm/Kconfig b/xen/arch/x86/hvm/Kconfig
index 79c5bcbb3a24..533ad71d1018 100644
--- a/xen/arch/x86/hvm/Kconfig
+++ b/xen/arch/x86/hvm/Kconfig
@@ -126,4 +126,8 @@ config VHPET
 
 	  If unsure, say Y.
 
+config HVM_ONLY
+	bool "Only HVM/PVH"
+	default y
+
 endif
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 839d3ff91b5a..e3c9b4ffba52 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -236,7 +236,7 @@ static void cf_check vmcb_dump(unsigned char ch)
 
     for_each_domain ( d )
     {
-        if ( !is_hvm_domain(d) )
+        if ( is_idle_domain(d) || !is_hvm_domain(d) )
             continue;
         printk("\n>>> Domain %d <<<\n", d->domain_id);
         for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index e126fda26760..c53269b3c06d 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -34,7 +34,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->default_access = p2m_access_rwx;
     p2m->p2m_class = p2m_host;
 
-    if ( !is_hvm_domain(d) )
+    if ( is_idle_domain(d) || !is_hvm_domain(d) )
         return 0;
 
     p2m_pod_init(p2m);
@@ -113,7 +113,7 @@ int p2m_init(struct domain *d)
     int rc;
 
     rc = p2m_init_hostp2m(d);
-    if ( rc || !is_hvm_domain(d) )
+    if ( rc || is_idle_domain(d) || !is_hvm_domain(d) )
         return rc;
 
     /*
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 05633fe2ac88..4e62d98861fe 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1425,7 +1425,7 @@ bool p2m_pod_active(const struct domain *d)
     struct p2m_domain *p2m;
     bool res;
 
-    if ( !is_hvm_domain(d) )
+    if ( is_idle_domain(d) || !is_hvm_domain(d) )
         return false;
 
     p2m = p2m_get_hostp2m(d);
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index ccf8563e5a64..e1862c5085f5 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -2158,7 +2158,7 @@ static int __hwdom_init cf_check io_bitmap_cb(
 
 void __hwdom_init setup_io_bitmap(struct domain *d)
 {
-    if ( !is_hvm_domain(d) )
+    if ( is_idle_domain(d) || !is_hvm_domain(d) )
         return;
 
     bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3764b58c9ccf..b1fb67b35d0f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1214,8 +1214,8 @@ static always_inline bool is_pv_64bit_vcpu(const struct vcpu *v)
 
 static always_inline bool is_hvm_domain(const struct domain *d)
 {
-    return IS_ENABLED(CONFIG_HVM) &&
-        evaluate_nospec(d->options & XEN_DOMCTL_CDF_hvm);
+    return IS_ENABLED(CONFIG_HVM_ONLY) || (IS_ENABLED(CONFIG_HVM) &&
+        evaluate_nospec(d->options & XEN_DOMCTL_CDF_hvm));
 }
 
 static always_inline bool is_hvm_vcpu(const struct vcpu *v)


