Re: [Xen-devel] [PATCH V5] vm_event: Allow subscribing to write events for specific MSR-s
Hi Razvan,

On 26/04/2016 09:49, Razvan Cojocaru wrote:
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 2906407..1ba12cb 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -27,6 +27,7 @@
>  #include <xen/mem_access.h>
>  #include <asm/p2m.h>
>  #include <asm/altp2m.h>
> +#include <asm/monitor.h>
>  #include <asm/vm_event.h>
>  #include <xsm/xsm.h>
> @@ -665,6 +666,9 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
>      {
>      case XEN_VM_EVENT_ENABLE:
>          /* domain_pause() not required here, see XSA-99 */
> +        rc = arch_monitor_init_domain(d);

On x86, this function allocates memory unconditionally. I cannot find
anything that would prevent someone from calling XEN_VM_EVENT_ENABLE
twice, so Xen would leak the memory from the previous allocation.

> +        if ( rc )
> +            break;
>          rc = vm_event_enable(d, vec, ved, _VPF_mem_access,
>                               HVM_PARAM_MONITOR_RING_PFN,
>                               monitor_notification);
> @@ -675,6 +679,7 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
>      {
>          domain_pause(d);
>          rc = vm_event_disable(d, ved);
> +        arch_monitor_cleanup_domain(d);
>          domain_unpause(d);
>      }
>      break;
> diff --git a/xen/include/asm-arm/monitor.h b/xen/include/asm-arm/monitor.h
> index 6e36e99..57a9d91 100644
> --- a/xen/include/asm-arm/monitor.h
> +++ b/xen/include/asm-arm/monitor.h
> @@ -46,4 +46,17 @@ int arch_monitor_domctl_event(struct domain *d,
>      return -EOPNOTSUPP;
>  }
>
> +static inline
> +int arch_monitor_init_domain(struct domain *d)
> +{
> +    /* No arch-specific domain initialization on ARM. */
> +    return -EOPNOTSUPP;

Shouldn't we return 0 in this case?

> +}
> +
> +static inline
> +void arch_monitor_cleanup_domain(struct domain *d)
> +{
> +    /* No arch-specific domain cleanup on ARM. */
> +}
> +
>  #endif /* __ASM_ARM_MONITOR_H__ */

Regards,

--
Julien Grall