Re: [Xen-devel] [PATCH v9 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 09 October 2017 14:06
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> George Dunlap <George.Dunlap@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>
> Subject: Re: [PATCH v9 05/11] x86/mm: add HYPERVISOR_memory_op to
> acquire guest resources
>
> >>> On 06.10.17 at 14:25, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -395,6 +397,39 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
> >      }
> >  #endif
> >
> > +    case XENMEM_acquire_resource:
> > +    {
> > +        xen_ulong_t *gmfn_list = (xen_ulong_t *)(nat.mar + 1);
> > +
> > +        if ( copy_from_guest(&cmp.mar, compat, 1) ||
> > +             !compat_handle_okay(cmp.mar.gmfn_list,
> > +                                 cmp.mar.nr_frames) )
> > +            return -EFAULT;
> > +
> > +        if ( sizeof(*gmfn_list) * cmp.mar.nr_frames >
> > +             COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar) )
> > +            return -E2BIG;
>
> With the actual handler accepting no more than 2 entries this is
> certainly good enough for now, but since larger arrays could be
> handled here perhaps a comment would be helpful to clarify
> why this is sufficient atm.

Ok.
> > > @@ -535,6 +570,23 @@ int compat_memory_op(unsigned int cmd, > XEN_GUEST_HANDLE_PARAM(void) compat) > > rc = -EFAULT; > > break; > > > > + case XENMEM_acquire_resource: > > + { > > + xen_ulong_t *gmfn_list = (xen_ulong_t *)(nat.mar + 1); > > + > > + for ( i = 0; i < cmp.mar.nr_frames; i++ ) > > + { > > + compat_ulong_t gmfn = gmfn_list[i]; > > + > > + if ( gmfn != gmfn_list[i] ) > > + return -ERANGE; > > + > > + if ( __copy_to_compat_offset(cmp.mar.gmfn_list, i, > > + &gmfn, 1) ) > > + return -EFAULT; > > While I consider it acceptable to leave kind of inconsistent state in > this latter case (as it's under guest control to avoid the situation), > I'm not sure the -ERANGE return above wouldn't better be > accompanied by undoing the operation. Undoing in both cases > would become imperative once set_foreign_p2m_entry() acquires > proper page references. Ok. I'll try to make sure things get left in a consistent state. > > > --- a/xen/common/memory.c > > +++ b/xen/common/memory.c > > @@ -965,6 +965,67 @@ static long xatp_permission_check(struct domain > *d, unsigned int space) > > return xsm_add_to_physmap(XSM_TARGET, current->domain, d); > > } > > > > +#ifdef CONFIG_X86 > > Could you try to (a) have only a single such #ifdef and (b) reduce > its scope as much as possible? Even if for now the code is dead on > ARM, making sure it continues to compile there would be helpful. > Ok. I don't have an arm machine to build on so I can't tell if anything is broken though. 
> > +static int acquire_resource(const xen_mem_acquire_resource_t *xmar)
> > +{
> > +    struct domain *d, *currd = current->domain;
> > +    unsigned long mfn_list[2];
> > +    int rc;
> > +
> > +    if ( xmar->nr_frames == 0 || xmar->pad != 0 )
> > +        return -EINVAL;
> > +
> > +    if ( xmar->nr_frames > ARRAY_SIZE(mfn_list) )
> > +        return -E2BIG;
> > +
> > +    d = rcu_lock_domain_by_any_id(xmar->domid);
> > +    if ( d == NULL )
> > +        return -ESRCH;
> > +
> > +    rc = xsm_domain_memory_map(XSM_TARGET, d);
> > +    if ( rc )
> > +        goto out;
> > +
> > +    switch ( xmar->type )
> > +    {
> > +    default:
> > +        rc = -EOPNOTSUPP;
> > +        break;
> > +    }
> > +
> > +    if ( rc )
> > +        goto out;
> > +
> > +    if ( !paging_mode_translate(currd) )
> > +    {
> > +        if ( copy_to_guest_offset(xmar->gmfn_list, 0, mfn_list,
> > +                                  xmar->nr_frames) )
>
> Just copy_to_guest()?

Ok, if that is preferable.

> > +            rc = -EFAULT;
> > +    }
> > +    else
> > +    {
> > +        unsigned int i;
> > +
> > +        for ( i = 0; i < xmar->nr_frames; i++ )
> > +        {
> > +            xen_pfn_t gfn;
> > +
> > +            rc = -EFAULT;
> > +            if ( copy_from_guest_offset(&gfn, xmar->gmfn_list, i, 1) )
> > +                goto out;
> > +
> > +            rc = set_foreign_p2m_entry(currd, gfn, _mfn(mfn_list[i]));
> > +            if ( rc )
> > +                goto out;
>
> Perhaps partial success indication would be necessary here, so
> the caller knows what to undo later. Or alternatively (just like
> said for the compat wrapper) you may want/need to clean up
> yourself.

OK.

> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -599,6 +599,36 @@ struct xen_reserved_device_memory_map {
> >  typedef struct xen_reserved_device_memory_map xen_reserved_device_memory_map_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_reserved_device_memory_map_t);
> >
> > +/*
> > + * Get the pages for a particular guest resource, so that they can be
> > + * mapped directly by a tools domain.
> > + */
> > +#define XENMEM_acquire_resource 28
> > +struct xen_mem_acquire_resource {
> > +    /* IN - the domain whose resource is to be mapped */
> > +    domid_t domid;
> > +    /* IN - the type of resource */
> > +    uint16_t type;
> > +    /*
> > +     * IN - a type-specific resource identifier, which must be zero
> > +     *      unless stated otherwise.
> > +     */
> > +    uint32_t id;
> > +    /* IN - number of (4K) frames of the resource to be mapped */
> > +    uint32_t nr_frames;
> > +    uint32_t pad;
> > +    /* IN - the index of the initial frame to be mapped */
> > +    uint64_aligned_t frame;
>
> Does this really need to be 64 bits wide?

I'd prefer not to limit to 32 bits just in case we want to use this
hypercall to map something with a large frame space.

> > +    /* IN/OUT - If the tools domain is PV then, upon return, gmfn_list
> > +     *          will be populated with the MFNs of the resource.
> > +     *          If the tools domain is HVM then it is expected that, on
> > +     *          entry, gmfn_list will be populated with a list of GFNs
> > +     *          that will be mapped to the MFNs of the resource.
> > +     */
> > +    XEN_GUEST_HANDLE(xen_ulong_t) gmfn_list;
>
> Why not xen_pfn_t?

I've debated this with myself a few times. I'm not convinced that
something that could be an mfn or a gfn should have a xen_pfn_t type.

  Paul

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel