
[PATCH v2] xen/x86: guest_access: optimize raw_x_guest() for PV and HVM combinations


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Jason Andryuk <jason.andryuk@xxxxxxx>, Teddy Astie <teddy.astie@xxxxxxxxxx>
  • From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
  • Date: Thu, 6 Nov 2025 22:26:31 +0000
  • Cc: Grygorii Strashko <grygorii_strashko@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 06 Nov 2025 22:26:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2] xen/x86: guest_access: optimize raw_x_guest() for PV and HVM combinations

From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>

Xen currently uses the following pattern for the raw_x_guest() functions:

#define raw_copy_to_guest(dst, src, len)        \
    (is_hvm_vcpu(current) ?                     \
     copy_to_user_hvm((dst), (src), (len)) :    \
     copy_to_guest_pv(dst, src, len))

This pattern behaves as follows, depending on CONFIG_PV/CONFIG_HVM (see
the sketch after this list):
- PV=y and HVM=y
  The proper guest access function is selected at run time, depending on
  the domain type.
- PV=y and HVM=n
  Only PV domains are possible. is_hvm_domain/vcpu() constifies to "false",
  so the compiler optimizes the code and skips the HVM-specific part.
- PV=n and HVM=y
  Only HVM domains are possible, but is_hvm_domain/vcpu() is not constified,
  so the compiler cannot optimize out any of the PV-specific code.
- PV=n and HVM=n
  No guests should be possible, yet the code still follows the PV path.

Rework the raw_x_guest() code to use static inline functions that account
for all of the above PV/HVM configurations, with the main intention of
optimizing the code for the (PV=n and HVM=y) case.

For the (PV=n and HVM=n) case, return the "len" value to indicate failure
(no guests should be possible in this configuration, so no access to
guest memory should ever happen).

Finally, build arch/x86/usercopy.c only for PV=y.

The measured (bloat-o-meter) improvement for the (PV=n and HVM=y) case is:
  add/remove: 2/9 grow/shrink: 2/90 up/down: 1678/-32560 (-30882)
  Total: Before=1937092, After=1906210, chg -1.59%

Signed-off-by: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
[teddy.astie@xxxxxxxxxx: suggested using static inline functions instead
of macro combinations]
Suggested-by: Teddy Astie <teddy.astie@xxxxxxxxxx>
---
changes in v2:
- use static inline functions instead of macro combinations

v1: 
https://patchwork.kernel.org/project/xen-devel/patch/20251031212058.1338332-1-grygorii_strashko@xxxxxxxx/

 xen/arch/x86/Makefile                   |  2 +-
 xen/arch/x86/include/asm/guest_access.h | 78 ++++++++++++++++++-------
 2 files changed, 59 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 407571c510e1..27f131ffeb61 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -71,7 +71,7 @@ obj-y += time.o
 obj-y += traps-setup.o
 obj-y += traps.o
 obj-$(CONFIG_INTEL) += tsx.o
-obj-y += usercopy.o
+obj-$(CONFIG_PV) += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
 obj-y += hpet.o
diff --git a/xen/arch/x86/include/asm/guest_access.h b/xen/arch/x86/include/asm/guest_access.h
index 69716c8b41bb..576eac9722e6 100644
--- a/xen/arch/x86/include/asm/guest_access.h
+++ b/xen/arch/x86/include/asm/guest_access.h
@@ -13,26 +13,64 @@
 #include <asm/hvm/guest_access.h>
 
 /* Raw access functions: no type checking. */
-#define raw_copy_to_guest(dst, src, len)        \
-    (is_hvm_vcpu(current) ?                     \
-     copy_to_user_hvm((dst), (src), (len)) :    \
-     copy_to_guest_pv(dst, src, len))
-#define raw_copy_from_guest(dst, src, len)      \
-    (is_hvm_vcpu(current) ?                     \
-     copy_from_user_hvm((dst), (src), (len)) :  \
-     copy_from_guest_pv(dst, src, len))
-#define raw_clear_guest(dst,  len)              \
-    (is_hvm_vcpu(current) ?                     \
-     clear_user_hvm((dst), (len)) :             \
-     clear_guest_pv(dst, len))
-#define __raw_copy_to_guest(dst, src, len)      \
-    (is_hvm_vcpu(current) ?                     \
-     copy_to_user_hvm((dst), (src), (len)) :    \
-     __copy_to_guest_pv(dst, src, len))
-#define __raw_copy_from_guest(dst, src, len)    \
-    (is_hvm_vcpu(current) ?                     \
-     copy_from_user_hvm((dst), (src), (len)) :  \
-     __copy_from_guest_pv(dst, src, len))
+static inline unsigned int raw_copy_to_guest(void *dst, const void *src,
+                                             unsigned int len)
+{
+    if ( IS_ENABLED(CONFIG_HVM) &&
+         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(current)) )
+        return copy_to_user_hvm(dst, src, len);
+    else if ( IS_ENABLED(CONFIG_PV) )
+        return copy_to_guest_pv(dst, src, len);
+    else
+        return len;
+}
+
+static inline unsigned int raw_copy_from_guest(void *dst, const void *src,
+                                               unsigned int len)
+{
+    if ( IS_ENABLED(CONFIG_HVM) &&
+         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(current)) )
+        return copy_from_user_hvm(dst, src, len);
+    else if ( IS_ENABLED(CONFIG_PV) )
+        return copy_from_guest_pv(dst, src, len);
+    else
+        return len;
+}
+
+static inline unsigned int raw_clear_guest(void *dst, unsigned int len)
+{
+    if ( IS_ENABLED(CONFIG_HVM) &&
+         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(current)) )
+        return clear_user_hvm(dst, len);
+    else if ( IS_ENABLED(CONFIG_PV) )
+        return clear_guest_pv(dst, len);
+    else
+        return len;
+}
+
+static inline unsigned int __raw_copy_to_guest(void *dst, const void *src,
+                                               unsigned int len)
+{
+    if ( IS_ENABLED(CONFIG_HVM) &&
+         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(current)) )
+        return copy_to_user_hvm(dst, src, len);
+    else if ( IS_ENABLED(CONFIG_PV) )
+        return __copy_to_guest_pv(dst, src, len);
+    else
+        return len;
+}
+
+static inline unsigned int __raw_copy_from_guest(void *dst, const void *src,
+                                                 unsigned int len)
+{
+    if ( IS_ENABLED(CONFIG_HVM) &&
+         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(current)) )
+        return copy_from_user_hvm(dst, src, len);
+    else if ( IS_ENABLED(CONFIG_PV) )
+        return __copy_from_guest_pv(dst, src, len);
+    else
+        return len;
+}
 
 /*
  * Pre-validate a guest handle.
-- 
2.34.1
