Re: [PATCH v2] xen/x86: guest_access: optimize raw_x_guest() for PV and HVM combinations
- To: Grygorii Strashko <grygorii_strashko@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Jan Beulich" <jbeulich@xxxxxxxx>, Teddy Astie <teddy.astie@xxxxxxxxxx>
- From: Jason Andryuk <jason.andryuk@xxxxxxx>
- Date: Thu, 6 Nov 2025 20:06:13 -0500
- Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
- Delivery-date: Fri, 07 Nov 2025 01:06:46 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 2025-11-06 17:26, Grygorii Strashko wrote:
From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
Xen uses the following pattern for the raw_x_guest() functions:
#define raw_copy_to_guest(dst, src, len)     \
    (is_hvm_vcpu(current) ?                  \
     copy_to_user_hvm((dst), (src), (len)) : \
     copy_to_guest_pv(dst, src, len))
This pattern behaves as follows, depending on CONFIG_PV/CONFIG_HVM:
- PV=y and HVM=y
  The proper guest access function is selected depending on the domain type.
- PV=y and HVM=n
  Only PV domains are possible. is_hvm_domain/vcpu() constifies to "false",
  so the compiler optimizes out the HVM-specific part.
- PV=n and HVM=y
  Only HVM domains are possible. is_hvm_domain/vcpu() is not constified,
  so the compiler cannot optimize out the PV-specific code.
- PV=n and HVM=n
  No guests should be possible, yet the code still follows the PV path.
Rework the raw_x_guest() code to use static inline functions which account
for the possible PV/HVM configurations above, with the main intention of
optimizing code for the (PV=n and HVM=y) case.
For the (PV=n and HVM=n) case return the "len" value, indicating failure
(no guests should be possible in this configuration, so no access to guest
memory should ever happen).
Finally build arch/x86/usercopy.c only for PV=y.
The measured (bloat-o-meter) improvement for the (PV=n and HVM=y) case is:
add/remove: 2/9 grow/shrink: 2/90 up/down: 1678/-32560 (-30882)
Total: Before=1937092, After=1906210, chg -1.59%
Signed-off-by: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
[teddy.astie@xxxxxxxxxx: suggested using static inline functions instead of macro combinations]
Suggested-by: Teddy Astie <teddy.astie@xxxxxxxxxx>
I think Teddy's Suggested-by goes before your SoB.
---
diff --git a/xen/arch/x86/include/asm/guest_access.h b/xen/arch/x86/include/asm/guest_access.h
index 69716c8b41bb..576eac9722e6 100644
--- a/xen/arch/x86/include/asm/guest_access.h
+++ b/xen/arch/x86/include/asm/guest_access.h
@@ -13,26 +13,64 @@
#include <asm/hvm/guest_access.h>
/* Raw access functions: no type checking. */
-#define raw_copy_to_guest(dst, src, len) \
- (is_hvm_vcpu(current) ? \
- copy_to_user_hvm((dst), (src), (len)) : \
- copy_to_guest_pv(dst, src, len))
-#define raw_copy_from_guest(dst, src, len) \
- (is_hvm_vcpu(current) ? \
- copy_from_user_hvm((dst), (src), (len)) : \
- copy_from_guest_pv(dst, src, len))
-#define raw_clear_guest(dst, len) \
- (is_hvm_vcpu(current) ? \
- clear_user_hvm((dst), (len)) : \
- clear_guest_pv(dst, len))
-#define __raw_copy_to_guest(dst, src, len) \
- (is_hvm_vcpu(current) ? \
- copy_to_user_hvm((dst), (src), (len)) : \
- __copy_to_guest_pv(dst, src, len))
-#define __raw_copy_from_guest(dst, src, len) \
- (is_hvm_vcpu(current) ? \
- copy_from_user_hvm((dst), (src), (len)) : \
- __copy_from_guest_pv(dst, src, len))
+static inline unsigned int raw_copy_to_guest(void *to, const void *src,
+                                             unsigned int len)

Maybe s/to/dst/ to keep this consistent with the rest?
+{
+    if ( IS_ENABLED(CONFIG_HVM) &&
+         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(current)) )

Since this is repeated, maybe put into a helper like use_hvm_access(current)?
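To illustrate that suggestion (a hedged sketch only: the struct, the config stand-ins, and the helper name use_hvm_access are hypothetical, not taken from the actual patch), the repeated condition could be hoisted into one place so each raw_*_guest() wrapper reads identically:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the Kconfig state in a (PV=n, HVM=y) build. */
#define IS_ENABLED_CONFIG_HVM 1
#define IS_ENABLED_CONFIG_PV  0

/* Minimal stand-in for the vcpu type check. */
struct vcpu { bool is_hvm; };

static bool is_hvm_vcpu(const struct vcpu *v)
{
    return v->is_hvm;
}

/*
 * Hypothetical helper capturing the repeated condition: take the HVM
 * access path when HVM is built in and either PV is compiled out
 * (so every vcpu must be HVM) or this vcpu is actually HVM.
 */
static bool use_hvm_access(const struct vcpu *v)
{
    return IS_ENABLED_CONFIG_HVM &&
           (!IS_ENABLED_CONFIG_PV || is_hvm_vcpu(v));
}
```

In a PV=n build the PV clause constifies away, so use_hvm_access() collapses to a constant true and the compiler can drop the PV branch in every caller, which is the optimization the patch is after.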
Thanks,
Jason