Re: [Xen-devel] [PATCH v7 1/8] public / x86: Introduce __HYPERCALL_dm_op...
>>> On 24.01.17 at 11:19, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 24 January 2017 10:00
>> >>> On 23.01.17 at 14:59, <paul.durrant@xxxxxxxxxx> wrote:
>> > +static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
>> > +                                unsigned int nr_bufs, void *dst,
>> > +                                unsigned int idx, size_t dst_size)
>> > +{
>> > +    if ( dst_size != bufs[idx].size )
>> > +        return false;
>> > +
>> > +    return !copy_from_guest(dst, bufs[idx].h, dst_size);
>> > +}
>> > +
>> > +static bool copy_buf_to_guest(xen_dm_op_buf_t bufs[],
>> > +                              unsigned int nr_bufs, unsigned int idx,
>> > +                              void *src, size_t src_size)
>> > +{
>> > +    if ( bufs[idx].size != src_size )
>> > +        return false;
>> > +
>> > +    return !copy_to_guest(bufs[idx].h, src, bufs[idx].size);
>> > +}
>>
>> What are the "nr_bufs" parameters good for in both of these?
>
> Good point. I'm not sure what happened to the range check. I think it would
> still be good to have one.

Which range check? You now validated nr_bufs quite a bit earlier, iirc.

>> > +static int dm_op(domid_t domid,
>> > +                 unsigned int nr_bufs,
>> > +                 xen_dm_op_buf_t bufs[])
>> > +{
>> > +    struct domain *d;
>> > +    struct xen_dm_op op;
>> > +    bool const_op = true;
>> > +    long rc;
>> > +
>> > +    rc = rcu_lock_remote_domain_by_id(domid, &d);
>> > +    if ( rc )
>> > +        return rc;
>> > +
>> > +    if ( !has_hvm_container_domain(d) )
>> > +        goto out;
>> > +
>> > +    rc = xsm_dm_op(XSM_DM_PRIV, d);
>> > +    if ( rc )
>> > +        goto out;
>> > +
>> > +    if ( !copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op)) )
>>
>> I'm afraid my request to have an exact size check in the copy
>> functions was going a little too far for this particular instance: Just
>> consider what would happen for a tool stack built with just this one
>> patch in place, but run against a hypervisor with at least one more
>> applied.
>
> I was wondering about that. I assumed that you were expecting the hypervisor
> and libxc to be kept in lock-step.
>
>> We can of course keep things the way they are here, but
>> then we'll need a placeholder added to the structure right away
>> (like is e.g. the case for domctl/sysctl). Every sub-structure should
>> then be checked to not violate that constraint by a BUILD_BUG_ON()
>> in its respective function (I'd prefer that over a single verification
>> of the entire structure/union, as that would more clearly pinpoint
>> a possible offender).
>>
>
> Can I not revert things to the min size check that we had before? Surely
> that is preferable over the above complexity?

I'm not sure I see much complexity there. Reverting to the min() has
the downside that you need to fill the part that was not copied (an
extra memset) to avoid acting on uninitialized data (which may result
in an indirect information leak). Iirc you didn't have any precaution
like this in that earlier variant. But if that aspect is taken care of,
I'm not overly opposed to the alternative.

>> > --- a/xen/include/xlat.lst
>> > +++ b/xen/include/xlat.lst
>> > @@ -56,6 +56,7 @@
>> >  ?    grant_entry_header             grant_table.h
>> >  ?    grant_entry_v2                 grant_table.h
>> >  ?    gnttab_swap_grant_ref          grant_table.h
>> > +!    dm_op_buf                      hvm/dm_op.h
>> >  ?    vcpu_hvm_context               hvm/hvm_vcpu.h
>> >  ?    vcpu_hvm_x86_32                hvm/hvm_vcpu.h
>> >  ?    vcpu_hvm_x86_64                hvm/hvm_vcpu.h
>>
>> While for this patch the addition here is sufficient, subsequent
>> patches should add their sub-structures here with a leading ?,
>> and you'd then need to invoke the resulting CHECK_* macros
>> somewhere. I don't think we should leave those structures
>> unchecked.
>
> OK, I can add this if you think it is necessary.

Yes please - we should have such stuff in place to aid in avoiding
mistakes; some of the earlier omitted padding would likely have been
caught by such checks.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
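For readers following the size-check discussion, below is a minimal sketch of
the min()-plus-memset variant Jan describes as acceptable: zero the
destination first, then copy at most the guest-supplied size, so a short
guest buffer can never leave the handlers acting on uninitialized stack
contents. This is an illustration of the idea only, not code from the posted
series; it assumes the xen_dm_op_buf_t layout visible in the quoted hunks (a
guest handle 'h' plus a 'size' field) and the usual Xen helpers memset(),
min_t() and copy_from_guest().

    static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
                                    unsigned int nr_bufs, void *dst,
                                    unsigned int idx, size_t dst_size)
    {
        size_t size;

        /* The range check queried at the top of the thread. */
        if ( idx >= nr_bufs )
            return false;

        /*
         * Zero the whole destination up front so that a guest buffer
         * shorter than the internal structure cannot leave uninitialized
         * data behind (the indirect information leak mentioned above).
         */
        memset(dst, 0, dst_size);

        size = min_t(size_t, dst_size, bufs[idx].size);

        return !copy_from_guest(dst, bufs[idx].h, size);
    }

The exact-size alternative would instead keep the dst_size != bufs[idx].size
comparison, pad struct xen_dm_op to a fixed size, and have each sub-op
handler assert via BUILD_BUG_ON() that its sub-structure fits within that
padding, as Jan outlines above.

As for the closing exchange: adding a sub-structure to xen/include/xlat.lst
with a leading '?' makes the build generate a CHECK_* macro, and referencing
that macro in the code implementing the sub-op turns any divergence between
native and compat layouts into a compile error. A rough, hedged example,
using a sub-op name from later patches in the series:

    /* Once xlat.lst gains '?  dm_op_create_ioreq_server  hvm/dm_op.h',
     * the generated check is invoked like this in the dm-op code: */
    CHECK_dm_op_create_ioreq_server;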