Re: [Xen-devel] Device model operation hypercall (DMOP, re qemu depriv)
Jan Beulich writes ("Re: [Xen-devel] Device model operation hypercall (DMOP, re qemu depriv)"):
> On 13.09.16 at 18:07, <david.vrabel@xxxxxxxxxx> wrote:
> > This is a direct result of the requirement that the privcmd driver
> > does not know the types of the sub-ops themselves.  We can't have
> > this requirement and type safety.  Which do we want?
>
> Both, which the proposal utilizing side band information on VA
> ranges allows for.  (And in any event this to me clearly is an
> aspect that would need to be mentioned in the disadvantages
> section of the document.)

The side band information approach has the problem that it is easy to
accidentally write hypervisor code which misses the
additionally-required range check.

It might be possible to provide both strict type safety of the kind
you're concerned about here, _and_ protection against missing access
checks, but it's not very straightforward.  One approach would be to
augment the guest handle array proposal with a typesafe macro system
for accessing the guest handle slots.

But I think George has a good argument as to why this is not needed:
if most of the time the hypercalls are made by calling libxc
functions, and the libxc functions take typed arguments, then the end
caller gets the same type safety.  We'll have to be careful inside
the individual libxc functions, but that should be fairly
straightforward to do.  So the places where we need to take extra
care are very localized.

We do not expect DMOP callers to make the hypercalls directly.  (They
can't sensibly do so, because the DMOP ABI is not stable - they need
the assistance of the stability layer which we intend to provide in
libxc.)  So the actual hypercall call sites are all in-tree, in
libxc.

If the format of the struct used for one of these guest handle slots
changes, the same struct definition from the Xen headers is used both
in the hypervisor and in libxc, so any incompatibility will be
detected.

It's true that changes to the semantics of these slots (for example,
a change of a slot from "array of struct X" to "one struct Y") will
not be detected by the compiler.  But *all* ABI changes to the DMOP
interface need to be accompanied by changes to libxc to add
compatibility code.  I think it will be fairly straightforward to
check, during review, that each DMOP change comes with a
corresponding libxc change.
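(To illustrate the point about typed libxc wrappers, here is a rough
sketch of what such a stability-layer function might look like.  All
the names below - xc_dm_inject_msi, xc_dmop_raw,
struct xen_dm_op_inject_msi, the op number - are invented for
illustration; they are not existing libxc or Xen identifiers.)

  #include <stdint.h>
  #include <string.h>

  /* Assumed wire-format struct, shared verbatim between the Xen
   * public headers and libxc, so a format change is caught by the
   * compiler when libxc is rebuilt against the new headers. */
  struct xen_dm_op_inject_msi {
      uint64_t addr;
      uint32_t data;
      uint32_t pad;
  };

  /* Assumed low-level helper which issues the raw, unstable DMOP
   * hypercall with an array of (pointer, size) buffer descriptors. */
  int xc_dmop_raw(void *xch, uint32_t domid, unsigned int op,
                  void **bufs, size_t *sizes, unsigned int nr_bufs);

  /* The stable, typed entry point an out-of-tree caller would use. */
  int xc_dm_inject_msi(void *xch, uint32_t domid,
                       uint64_t msi_addr, uint32_t msi_data)
  {
      struct xen_dm_op_inject_msi msi;
      void *bufs[] = { &msi };
      size_t sizes[] = { sizeof(msi) };

      memset(&msi, 0, sizeof(msi));
      msi.addr = msi_addr;
      msi.data = msi_data;

      /* The type-unsafe marshalling happens exactly here, once,
       * in-tree, where it can be reviewed alongside any change to
       * the DMOP ABI. */
      return xc_dmop_raw(xch, domid, 1 /* hypothetical op number */,
                         bufs, sizes, 1);
  }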
But if you do not agree, then how about hiding the slots with a macro
setup?  Something like:

  struct device_model_op {
      uint32_t op;
      union {
          struct op_foo foo;
          struct op_bar bar;
          /* etc... */
      } u;
  };

  struct op_foo {
      some stuff;
  #define DMOP_GUESTHANDLES_foo(O,ONE,MANY)                         \
      ONE(O, struct op_foo_big_block, big_block)                    \
      MANY(O, struct op_foo_data_array_element, data_array)
  };

  DMOP_DEFINE(foo)

Supported by (i.e., preceded by) something like this (many toothpicks
elided for clarity and ease of editing):

  /* dmop typesafe slot machinery: */

  #define DMOP_DEFINE(opname)                                       \
      enum {                                                        \
          DMOP_GUESTHANDLES_##opname(opname, DMOP__SLOT_INDEX,      \
                                             DMOP__SLOT_INDEX)      \
      };                                                            \
      DMOP_GUESTHANDLES_##opname(opname, DMOP__ACCESSORS_ONE,       \
                                         DMOP__ACCESSORS_MANY)

  #define DMOP__SLOT_INDEX(O,T,N) DMOP__SLOT_INDEX__##O##_##N,

  #define DMOP__ACCESSORS_ONE(O,T,N)                                \
      static inline int copy_dm_buffer_from_guest_##O##_##N(        \
          T *dst,                                                   \
          const struct device_model_op *dmop                        \
              /* accesses [nr_]buffers */)                          \
      {                                                             \
          return copy_dm_buffer_from_guest__raw(                    \
              (void*)dst, sizeof(*dst), dmop,                       \
              DMOP__SLOT_INDEX__##O##_##N);                         \
      }                                                             \
      /* likewise copy_..._to_guest_##O##_##N */

  #define DMOP__ACCESSORS_MANY(O,T,N)                               \
      static inline size_t get_dm_buffer_array_size_##O##_##N(      \
          const struct device_model_op *dmop)                       \
      { /* divides the buffer size by the type size */ }            \
      static inline long copy_dm_buffer_from_guest_##O##_##N(       \
          T *dst, size_t nr_dst,                                    \
          const struct device_model_op *dmop                        \
              /* accesses [nr_]buffers */)                          \
      { /* as above, but copies up to nr_dst elements */ }
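(For concreteness: with the DMOP_DEFINE(foo) expansion above, a
hypervisor-side handler might use a generated accessor roughly as
follows.  This is a sketch against the hypothetical machinery only;
do_dmop_foo and its error handling are invented.)

  static int do_dmop_foo(const struct device_model_op *dmop)
  {
      struct op_foo_big_block bb;
      int rc;

      /* Generated by DMOP__ACCESSORS_ONE: the slot index, element
       * type and size are fixed by the macro table, and the range
       * check lives in copy_dm_buffer_from_guest__raw, so neither
       * can be forgotten or mismatched at this call site. */
      rc = copy_dm_buffer_from_guest_foo_big_block(&bb, dmop);
      if ( rc )
          return rc;

      /* ... act on bb and the rest of dmop->u.foo ... */
      return 0;
  }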
Personally I don't think this is worth the effort.  But if you do, I
would be happy to turn this into something which actually compiles
:-).

Thanks for your attention.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel