Re: [PATCH v3 09/28] xen/vm_event: consolidate CONFIG_VM_EVENT
On 13.10.2025 12:15, Penny Zheng wrote:
> Files hvm/vm_event.c and x86/vm_event.c are extensions of the vm_event
> handling routines, so their compilation shall be guarded by CONFIG_VM_EVENT
> too. Furthermore, the monitor_op and memory access features are both built
> on the vm event subsystem, so monitor.o/mem_access.o shall be wrapped under
> CONFIG_VM_EVENT as well.
>
> Although CONFIG_VM_EVENT is right now forcibly enabled on x86 via
> MEM_ACCESS_ALWAYS_ON, we will later be able to disable it by disabling
> CONFIG_MGMT_HYPERCALLS. So we remove MEM_ACCESS_ALWAYS_ON and make
> VM_EVENT default to y only on x86, to retain the current behaviour.
>
> In consequence, a few switch blocks in do_altp2m_op() need in-place stubs
> to pass compilation when ALTP2M=y and VM_EVENT=n (and hence MEM_ACCESS=n),
> e.g. HVMOP_altp2m_set_mem_access.
> And the following functions still require stubs to pass compilation:
> - vm_event_check_ring()
> - p2m_mem_access_check()
> - xenmem_access_to_p2m_access()
>
> The following functions are built on the vm event framework, or are only
> invoked from vm_event.c/monitor.c/mem_access.c, so they shall all be wrapped
> with CONFIG_VM_EVENT (otherwise they become unreachable and violate MISRA
> rule 2.1 when VM_EVENT=n):
> - hvm_toggle_singlestep
> - hvm_fast_singlestep
> - hvm_enable_msr_interception
> - hvm_function_table.enable_msr_interception
> - hvm_has_set_descriptor_access_exiting
> - hvm_function_table.set_descriptor_access_exiting
> - arch_monitor_domctl_op
> - arch_monitor_allow_userspace
> - arch_monitor_get_capabilities
> - hvm_emulate_one_vm_event
> - hvmemul_{write,cmpxchg,rep_ins,rep_outs,rep_movs,rep_stos,read_io,write_io}_discard
>
> Signed-off-by: Penny Zheng <Penny.Zheng@xxxxxxx>
Overall I agree with Grygorii's remark that this would preferably want (a)
splitting off and perhaps (b) also splitting up some. If at all possible, of
course.
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -50,6 +50,7 @@
> #include <asm/hvm/vm_event.h>
> #include <asm/hvm/vpt.h>
> #include <asm/i387.h>
> +#include <asm/mem_access.h>
> #include <asm/mc146818rtc.h>
> #include <asm/mce.h>
> #include <asm/monitor.h>
> @@ -4861,15 +4862,20 @@ static int do_altp2m_op(
> break;
>
> case HVMOP_altp2m_set_mem_access:
> +#ifdef CONFIG_VM_EVENT
> if ( a.u.mem_access.pad )
> rc = -EINVAL;
> else
> rc = p2m_set_mem_access(d, _gfn(a.u.mem_access.gfn), 1, 0, 0,
> a.u.mem_access.access,
> a.u.mem_access.view);
> +#else
> + rc = -EOPNOTSUPP;
> +#endif
> break;
I think this (and if possible the others below here) would better use
IS_ENABLED(). (Would also shrink the diff.)
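For instance, the first hunk could become something along these lines
(untested sketch; it relies on p2m_set_mem_access()'s declaration staying
visible when VM_EVENT=n, so the compiler can eliminate the dead call):

        case HVMOP_altp2m_set_mem_access:
            if ( !IS_ENABLED(CONFIG_VM_EVENT) )
                rc = -EOPNOTSUPP;
            else if ( a.u.mem_access.pad )
                rc = -EINVAL;
            else
                rc = p2m_set_mem_access(d, _gfn(a.u.mem_access.gfn), 1, 0, 0,
                                        a.u.mem_access.access,
                                        a.u.mem_access.view);
            break;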
> @@ -5030,6 +5043,7 @@ static int compat_altp2m_op(
> switch ( a.cmd )
> {
> case HVMOP_altp2m_set_mem_access_multi:
> +#ifdef CONFIG_VM_EVENT
> #define XLAT_hvm_altp2m_set_mem_access_multi_HNDL_pfn_list(_d_, _s_); \
> guest_from_compat_handle((_d_)->pfn_list, (_s_)->pfn_list)
> #define XLAT_hvm_altp2m_set_mem_access_multi_HNDL_access_list(_d_, _s_); \
> @@ -5038,6 +5052,7 @@ static int compat_altp2m_op(
> &a.u.set_mem_access_multi);
> #undef XLAT_hvm_altp2m_set_mem_access_multi_HNDL_pfn_list
> #undef XLAT_hvm_altp2m_set_mem_access_multi_HNDL_access_list
> +#endif
> break;
>
> default:
> @@ -5056,6 +5071,7 @@ static int compat_altp2m_op(
> switch ( a.cmd )
> {
> case HVMOP_altp2m_set_mem_access_multi:
> +#ifdef CONFIG_VM_EVENT
> if ( rc == -ERESTART )
> {
> a.u.set_mem_access_multi.opaque =
> @@ -5065,6 +5081,9 @@ static int compat_altp2m_op(
> &a, u.set_mem_access_multi.opaque) )
> rc = -EFAULT;
> }
> +#else
> + rc = -EOPNOTSUPP;
> +#endif
> break;
>
> default:
Are these changes really needed?
> --- a/xen/arch/x86/include/asm/hvm/hvm.h
> +++ b/xen/arch/x86/include/asm/hvm/hvm.h
> @@ -192,7 +192,10 @@ struct hvm_function_table {
> void (*handle_cd)(struct vcpu *v, unsigned long value);
> void (*set_info_guest)(struct vcpu *v);
> void (*set_rdtsc_exiting)(struct vcpu *v, bool enable);
> +#ifdef CONFIG_VM_EVENT
> void (*set_descriptor_access_exiting)(struct vcpu *v, bool enable);
> + void (*enable_msr_interception)(struct domain *d, uint32_t msr);
> +#endif
>
> /* Nested HVM */
> int (*nhvm_vcpu_initialise)(struct vcpu *v);
Another blank line ahead of the #ifdef?
> @@ -433,10 +434,12 @@ static inline bool using_svm(void)
>
> #define hvm_long_mode_active(v) (!!((v)->arch.hvm.guest_efer & EFER_LMA))
>
> +#ifdef CONFIG_VM_EVENT
> static inline bool hvm_has_set_descriptor_access_exiting(void)
> {
> return hvm_funcs.set_descriptor_access_exiting;
> }
> +#endif
>
> static inline void hvm_domain_creation_finished(struct domain *d)
> {
> @@ -679,10 +682,12 @@ static inline int nhvm_hap_walk_L1_p2m(
> v, L2_gpa, L1_gpa, page_order, p2m_acc, npfec);
> }
>
> +#ifdef CONFIG_VM_EVENT
> static inline void hvm_enable_msr_interception(struct domain *d, uint32_t
> msr)
> {
> alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
> }
> +#endif
Move this up into the earlier #ifdef?
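I.e. roughly (sketch, simply relocating the inline quoted above into the
earlier guarded block):

    #ifdef CONFIG_VM_EVENT
    static inline bool hvm_has_set_descriptor_access_exiting(void)
    {
        return hvm_funcs.set_descriptor_access_exiting;
    }

    static inline void hvm_enable_msr_interception(struct domain *d, uint32_t msr)
    {
        alternative_vcall(hvm_funcs.enable_msr_interception, d, msr);
    }
    #endif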
> --- a/xen/arch/x86/include/asm/mem_access.h
> +++ b/xen/arch/x86/include/asm/mem_access.h
> @@ -14,6 +14,7 @@
> #ifndef __ASM_X86_MEM_ACCESS_H__
> #define __ASM_X86_MEM_ACCESS_H__
>
> +#ifdef CONFIG_VM_EVENT
> /*
> * Setup vm_event request based on the access (gla is -1ull if not
> available).
> * Handles the rw2rx conversion. Boolean return value indicates if event type
> @@ -25,6 +26,14 @@
> bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> struct npfec npfec,
> struct vm_event_st **req_ptr);
> +#else
> +static inline bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
> + struct npfec npfec,
> + struct vm_event_st **req_ptr)
> +{
> + return false;
Leaving *req_ptr untouched feels dangerous; the fact that the sole caller has
what it uses set to NULL up front is secondary.
From looking at the function it's also not quite clear to me whether "false" is
the correct return value here. Tamas?
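A more defensive shape for the stub might be (sketch only; whether "false" is
the right return value is exactly the open question above):

    static inline bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                                            struct npfec npfec,
                                            struct vm_event_st **req_ptr)
    {
        if ( req_ptr )
            *req_ptr = NULL; /* Don't leave the caller's pointer unset. */

        return false;
    }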
> --- a/xen/arch/x86/include/asm/monitor.h
> +++ b/xen/arch/x86/include/asm/monitor.h
> @@ -32,6 +32,7 @@ struct monitor_msr_bitmap {
> DECLARE_BITMAP(high, 8192);
> };
>
> +#ifdef COMFIG_VM_EVENT
Typo aside, isn't the entire file (perhaps minus some stubs) useful only when
VM_EVENT=y?
> --- a/xen/include/xen/mem_access.h
> +++ b/xen/include/xen/mem_access.h
> @@ -74,9 +74,19 @@ typedef enum {
> } p2m_access_t;
>
> struct p2m_domain;
> +#ifdef CONFIG_VM_EVENT
> bool xenmem_access_to_p2m_access(const struct p2m_domain *p2m,
> xenmem_access_t xaccess,
> p2m_access_t *paccess);
> +#else
> +static inline bool xenmem_access_to_p2m_access(const struct p2m_domain *p2m,
> + xenmem_access_t xaccess,
> + p2m_access_t *paccess)
> +{
> + *paccess = p2m_access_rwx;
Why not p2m->default_access, as the full function has it? And should xaccess
other than XENMEM_access_default be rejected, by returning false? (In turn I
wonder whether the real function may not want to move elsewhere, so that a
stub open-coding part of it wouldn't be necessary.)
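Spelled out, the suggestion might amount to something like the below (a
sketch only; note it would require struct p2m_domain's definition to be
visible here, which is part of why moving the real function elsewhere may be
preferable):

    static inline bool xenmem_access_to_p2m_access(const struct p2m_domain *p2m,
                                                   xenmem_access_t xaccess,
                                                   p2m_access_t *paccess)
    {
        if ( xaccess != XENMEM_access_default )
            return false; /* Reject anything the stub can't express. */

        *paccess = p2m->default_access; /* Mirror the real function's default case. */
        return true;
    }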
Jan